Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
TensorFlow - Help Protect the Great Barrier Reef | Trust CV -- 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Near but not the same: combine all labels and predictions from all folds, treat them as a single output, then calculate the metric on it.</p>
<p><a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175614" target="_blank">https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175614</a></p>
<p>maybe this post can help you.</p>
<p><a href="https://www.kaggle.com/outrunner" target="_blank">@outrunner</a> <br>
Haha, and congrats on your another solo gold as well!<br>
From your angle, I generally only optimize the CV side. If a client wants transfer ability, they should raise it from the very start, as in Google Landmark or Bengali, and in those cases I would also have special ways of handling it.<br>
Also, I have a message I'd like to share with you, please check your email.</p>
<p>In fact, compared to the #1 question, the other questions are less important because they are only parameter tuning or trial-and-error things, right?</p>
<p>So for the #1 question, it should be clarified that it's not a conclusion, but how things should be done.<br>
Imagine you are doing a realistic machine learning task. You only have the training data, so what would you optimize your model on? Only the CV score. I'm not willing to do something that will lower the CV score. That's it. If we were still at 150th when the private LB came out (we had been prepared for this), I would be perfectly fine with that, because I did what I thought was right. </p>
<p>Of course I have no problem with many people willing to probe public LB, as long as the rules still allow them to do so.</p>
<p>Hi, actually CV stands for cross validation, and the key word is <strong>cross</strong>, so we don't call the local score of a single fold CV.</p>
<p>When you combine the predicted results from all the folds and calculate the validation score on them, that can be called the CV score.</p>
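<p>To make that concrete, here is a minimal sketch (not the author's code) of out-of-fold CV scoring, assuming numpy arrays, a scikit-learn-style estimator returned by a hypothetical <code>build_model</code> factory, and an arbitrary <code>metric</code> function standing in for the competition metric:</p>
<pre><code>import numpy as np
from sklearn.model_selection import KFold

def oof_cv_score(X, y, build_model, metric, n_splits=3, seed=42):
    """Concatenate the validation predictions of every fold, then score once."""
    oof_pred = np.zeros(len(y), dtype=float)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for trn_idx, val_idx in kf.split(X):
        model = build_model()
        model.fit(X[trn_idx], y[trn_idx])
        oof_pred[val_idx] = model.predict(X[val_idx])
    # one score over all OOF predictions -- the "CV score",
    # not an average of per-fold scores
    return metric(y, oof_pred)
</code></pre>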
<p>When we train the model on all data, we don't do evaluation. Just set a proper number of epochs and run it.</p>
<p>OK, simply put,</p>
<ul>
<li>inference resizing - the same size as training is <strong>best on local cv</strong>.</li>
<li>classifier - convnext works <strong>best on local cv</strong>.</li>
<li>albumentations - hflip/vflip/transpose, 45° random rotation, all p=0.5 <strong>work best for us on local cv</strong>.</li>
<li>training procedure - for yolo, all data. for cls, 3-fold models. </li>
</ul>
<p>Is it helpful to you?</p>
<p>Anyway, if you guys finished 34th without a GPU, that's an amazing achievement. Congratulations to you guys.</p>
<p>PS: Your score is less than 0.02 from the gold zone, please don't think your solution is bad. It is good enough, maybe you guys just lack some luck.</p>
<p>Thank you and congratulations to you, too!</p>
<ol>
<li>It was a gut feeling; I used this method directly when writing the code and didn't try any other method.</li>
<li>average the outputs of the 7 bins.</li>
<li>compared to most competitions, the label noise in this dataset is not that serious, so we chose not to deal with it.</li>
</ol>
<p>Thanks to the organizers and congrats to all the winners and my wonderful teammates <a href="https://www.kaggle.com/nvnnghia" target="_blank">@nvnnghia</a> and <a href="https://www.kaggle.com/steamedsheep" target="_blank">@steamedsheep</a> </p>
<p>This is really very unexpected for us, because we don't have any NEW THING; we just kept optimizing the cross-validation F2 of our pipeline locally from the beginning till the end.</p>
<h1>Summary</h1>
<p>We designed a 2-stage pipeline: object detection -> classification re-score, followed by a post-processing method.</p>
<p>Validation strategy: 3-fold cross validation split by video_id.</p>
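<p>A minimal sketch of such a split, assuming the competition's <code>train.csv</code> with a <code>video_id</code> column (an illustration, not the team's exact code):</p>
<pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

df = pd.read_csv("train.csv")            # contains a video_id column
gkf = GroupKFold(n_splits=3)
df["fold"] = -1
for fold, (_, val_idx) in enumerate(gkf.split(df, groups=df["video_id"])):
    # frames of the same video never appear in both train and validation
    df.loc[val_idx, "fold"] = fold
</code></pre>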
<h2>Object detection</h2>
<ul>
<li>6 yolov5 models, 3 trained on 3648 images and 3 trained on 1536 image patches (described below)</li>
<li>image patches: we cut original image (1280x720) into many patches (512x320), removed boxes near boundary, then only train yolov5 on those patches with cots.</li>
<li>modified some yolo hyper-parameters based on default: <code>box=0.2</code>, <code>iou_t=0.3</code></li>
<li>augmentations: based on yolov5 default augmentations, we added: rotation, mixup and albumentations.Transpose, then removed HSV.</li>
<li>after the optimization was completed by cross validation, we trained the final models with all the data.</li>
<li>all models are inferred using the same image size as trained.</li>
<li>ensembling these 6 yolov5 models gives us CV 0.716, while the best single model gives CV 0.676.</li>
</ul>
<h2>Classification re-score:</h2>
<ul>
<li>crop out all predicted boxes (3-fold OOF) with conf > 0.01 into squares. The side length of the square is <code>max(length, width)</code> of the predicted box, then extended by 20%.</li>
<li>we calculate the iou as the maximum of the iou values of each predicted box and GT boxes of this image.</li>
<li>classification target of each cropped box: iou>0.3, iou>0.4, iou>0.5, iou>0.6, iou>0.7, iou>0.8 and iou>0.9. Simply put, the iou is divided into 7 bins (a small sketch of this encoding follows this list). e.g.: <code>[1,1,1,0,0,0,0]</code> indicates the iou is between 0.5 and 0.6.</li>
<li>during inference we average 7 bin outputs as classification score.</li>
<li>then we use BCELoss to train on those cropped boxes at size 256x256 or 224x224.</li>
<li>a very high dropout_rate or drop_path_rate can help a lot to improve the performance of the classification model. We use <code>dropout_rate=0.7</code> and <code>drop_path_rate=0.5</code></li>
<li>augmentations: hflip, vflip, transpose, 45° rotation and cutout.<br>
The best classification model can boost our CV to 0.727.</li>
<li>after ensemble some classification models, our CV comes to 0.73+</li>
</ul>
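<p>A minimal sketch of the 7-bin IoU encoding and the averaged re-score mentioned in the list above (illustrative, not the team's exact code):</p>
<pre><code>import numpy as np

BIN_EDGES = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])

def iou_to_bins(iou):
    """Cumulative binary targets, e.g. iou=0.55 -> [1, 1, 1, 0, 0, 0, 0]."""
    return (iou > BIN_EDGES).astype(np.float32)

def bins_to_score(pred_bins):
    """Average of the 7 sigmoid outputs, used as the classification re-score."""
    return float(np.mean(pred_bins))
</code></pre>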
<h2>Post-processing</h2>
<p>Finally, we use a simple post-processing to further boost our CV to 0.74+.<br>
For example, suppose the model has predicted some boxes B at frame #N; select the boxes from B which have high confidence, and mark them as the "attention area".<br>
In frames #N+1, #N+2 and #N+3, for each predicted box with conf > 0.01 that has an IoU with the "attention area" larger than 0, boost its score with <code>score += confidence * IOU</code></p>
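<p>A minimal sketch of this post-processing, assuming boxes are <code>[x1, y1, x2, y2]</code> and reading "confidence" as the current box's score (one possible interpretation; the names are illustrative, not the team's code):</p>
<pre><code>def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def boost_scores(boxes, scores, attention_boxes, conf_thr=0.01):
    """Boost boxes in frames #N+1..#N+3 that overlap a high-confidence box from frame #N."""
    boosted = []
    for box, score in zip(boxes, scores):
        if score > conf_thr and attention_boxes:
            best = max(iou(box, a) for a in attention_boxes)
            if best > 0:
                score = score + score * best   # score += confidence * IoU
        boosted.append(score)
    return boosted
</code></pre>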
<p>We also tried the tracking method, which gives us a CV of +0.002. However, it introduces two additional hyperparameters. We therefore chose not to use it.</p>
<h1>Little story</h1>
<p>At the beginning of the competition, each of the three members of our team used a different F2 implementation, and later we found that for the same OOF predictions we did not calculate the same score.<br>
For example, nvnn shared an OOF file with <code>F2=0.62</code>, and sheep calculated <code>F2=0.66</code>, while I calculated <code>F2=0.68</code>.<br>
We finally chose to use the F2 algorithm with the lowest score from nvnn to evaluate all our models.</p>
<p><a href="https://www.kaggle.com/haqishen/f2-evaluation/script" target="_blank">https://www.kaggle.com/haqishen/f2-evaluation/script</a></p>
<p>Here's our final F2 algorithm, if you are interested you can use this algorithm to compare your CV with ours!</p>
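<p>For reference, the core of any F2 implementation is the same weighted harmonic mean; a minimal sketch from matched-detection counts is below. The competition metric additionally sweeps IoU and confidence thresholds when matching boxes, which is exactly where independent implementations can diverge, as in the story above.</p>
<pre><code>def f2_score(tp, fp, fn, beta=2.0):
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    # beta=2 weighs recall higher than precision
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall + 1e-9)
</code></pre>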
<h1>Acknowledgements</h1>
<p>As usual, I trained many models in this competition using Z by HP Z8G4 Workstation with dual A6000 GPU. The large memory of 48G for a single GPU allowed me to train large resolution images with ease. Thanks to Z by HP for sponsoring!</p>
<p>For me, manually tune by intuition.</p> |
TensorFlow - Help Protect the Great Barrier Reef | YOLOX on steroids solution (6th on Public LB / 16th on Private LB) [Placeholder] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Ladies and gents, this post is a placeholder for my solution using a heavily modified YOLOX. I will post the detailed solution once I've performed some benchmarks to quantify how much each addition adds to the private LB.</p>
<p>Solution outline:</p>
<p><strong><em>Model:</em></strong></p>
<p><strong>YOLOX</strong> (heavily modified)</p>
<p><strong>Double mosaic mixup</strong></p>
<ul>
<li>vanilla YOLOX uses a mixup of two images: a 2x2 mosaic (affine with rotation, scaling, translation) overlaid on top of a non-augmented mixup (with scaling only)</li>
<li>removing mixup and mosaic impacts mAP</li>
<li>what if we add more mosaic by overlaying two 2x2 mosaic images (8 images in total)?</li>
</ul>
<p><strong>Albumentations integration into YOLOX</strong></p>
<p><strong>Augmentations pre-mosaic</strong></p>
<ul>
<li>current implementation only applies augmentation after mosaic, and not to the other mixup image</li>
</ul>
<p><strong>Frozen backbone</strong> for final epoch fine-tuning; only train the regression head (responsible for obj / bbox outputs); a small sketch follows the list below. Benefits of this include:</p>
<ul>
<li>~80% reduction of GPU memory cost when Darknet53 backbone is frozen. Allows 5x batch size increase, or training at even higher resolutions (I opted for 5x batch size increase)</li>
<li>Prevents loss of Darknet53 backbone learning from previous epochs that utilise mosaic/mixup.</li>
</ul>
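<p>A minimal PyTorch sketch of the frozen-backbone fine-tuning described above; <code>model.backbone</code> is an illustrative attribute name, not the actual YOLOX module name:</p>
<pre><code>import torch

def freeze_backbone(model):
    for p in model.backbone.parameters():
        p.requires_grad = False        # no gradients or optimizer state for the backbone
    model.backbone.eval()              # also freeze BatchNorm running statistics

def head_only_optimizer(model, lr=1e-3):
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.SGD(trainable, lr=lr, momentum=0.9)
</code></pre>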
<p><strong><em>Post-inference processing:</em></strong></p>
<p><strong>MemBox</strong> (my tracking solution to video object detection)<br>
Concept: </p>
<ul>
<li>bounding boxes of one frame are likely to overlap those of the next frame</li>
<li>when the video is panning, bounding boxes have predictable x/y velocities. This velocity can be used to predict the bbox of the next frame</li>
<li>MemBox matches predicted bboxes against its velocity-based tracked bboxes (a small sketch follows this list)<br>
MemBox outputs:</li>
<li>model-inference / unmatched bboxes: CONF is not changed</li>
<li>MemBox-predicted tracked / unmatched bboxes: CONF of tracked bbox is reduced by 0.2</li>
<li>matched bboxes: CONF is increased by 20% CONF from current frame<br>
Outcomes:</li>
<li>model inference of the same object across multiple frames will increase its CONF, compared with unmatched inference or tracked bboxes</li>
</ul>
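<p>A minimal sketch of the MemBox idea above (velocity-based track prediction, IoU matching, confidence adjustment); the exact update rules and numbers here are simplified interpretations of the description, not the author's code:</p>
<pre><code>class Track:
    def __init__(self, box, conf):
        self.box, self.conf = list(box), conf
        self.vx = self.vy = 0.0

    def predict(self):
        # shift the box by its estimated velocity (useful while the camera pans)
        return [self.box[0] + self.vx, self.box[1] + self.vy,
                self.box[2] + self.vx, self.box[3] + self.vy]

def membox_step(tracks, detections, iou_fn, match_thr=0.3):
    """detections: list of (box, conf); returns the rescored detections."""
    rescored, matched = [], set()
    for box, conf in detections:
        best_i, best_iou = None, 0.0
        for i, t in enumerate(tracks):
            ov = iou_fn(box, t.predict())
            if ov > best_iou:
                best_i, best_iou = i, ov
        if best_i is not None and best_iou > match_thr:
            conf += 0.2 * conf                       # matched: confidence increased
            t = tracks[best_i]
            t.vx, t.vy = box[0] - t.box[0], box[1] - t.box[1]
            t.box, t.conf = list(box), conf
            matched.add(best_i)
        rescored.append((box, conf))
    for i, t in enumerate(tracks):
        if i not in matched:
            t.conf -= 0.2                            # tracked but unmatched: confidence reduced
    return rescored
</code></pre>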
<p><strong>WBF Ensemble using max conf</strong> (modified WBF)<br>
Issue:</p>
<ul>
<li>Current WBF uses a weighted average of CONF of individual models</li>
<li>This may bias against predictions that are missing in some sub-models<br>
Solution:</li>
<li>Use "max" CONF: this takes the maximum CONF of each sub-model.</li>
<li>[WIP] it seems that using vanilla WBF benefits private LB whereas Max CONF benefits public LB. Will do some benchmarks to verify this</li>
</ul>
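<p>A simplified stand-in (not the ensemble-boxes library code) for the "max CONF" fusion idea: boxes from several models are clustered by IoU, coordinates are averaged as in plain WBF, but the fused confidence is the cluster maximum, so a box missed by some sub-models is not penalised:</p>
<pre><code>import numpy as np

def fuse_max_conf(boxes, scores, iou_fn, iou_thr=0.55):
    order = np.argsort(scores)[::-1]
    clusters = []                        # each cluster is a list of box indices
    for i in order:
        for c in clusters:
            if iou_fn(boxes[i], boxes[c[0]]) > iou_thr:
                c.append(i)
                break
        else:
            clusters.append([i])
    fused_boxes = [np.mean([boxes[i] for i in c], axis=0) for c in clusters]
    fused_scores = [max(scores[i] for i in c) for c in clusters]   # "max" CONF
    return fused_boxes, fused_scores
</code></pre>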
<p><strong>BBox scaling</strong></p>
<ul>
<li>Using a 0.9x bbox scaling greatly enhances public LB (but damages private LB); a small sketch follows this list</li>
<li>As discussed in my other post (<a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/307607" target="_blank">https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/307607</a>) this must have something to do with how the COTS are annotated (i.e. tight or loose bboxes around the COTS)</li>
</ul>
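<p>A one-function sketch of the 0.9x bbox scaling (shrinking each box about its centre before submission):</p>
<pre><code>def scale_box(box, factor=0.9):
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * factor, (y2 - y1) * factor
    return [cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0]
</code></pre>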
<p>[ WIP - will post some benchmarks of public / private LB once these are available ]</p>
<p>Thanks <a href="https://www.kaggle.com/lukaszborecki" target="_blank">@lukaszborecki</a> . I intend to write up MemBox as a Github repo so it will be useful to others in future Kaggle competitions.</p> |
TensorFlow 2.0 Question Answering | 17th Place solution [bert-disjoint] kernel + all utility scripts | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all winners!</p>
<p>I've made available my solution (public 0.65 private 0.67).
<a href="https://www.kaggle.com/siriuself/tf-qa-wwm-verifier-forked">https://www.kaggle.com/siriuself/tf-qa-wwm-verifier-forked</a></p>
<p><a href="https://www.kaggle.com/siriuself/bert-disjoint-fn-builder">https://www.kaggle.com/siriuself/bert-disjoint-fn-builder</a>
<a href="https://www.kaggle.com/siriuself/bert-disjoint-modeling">https://www.kaggle.com/siriuself/bert-disjoint-modeling</a>
<a href="https://www.kaggle.com/siriuself/bert-disjoint-utils">https://www.kaggle.com/siriuself/bert-disjoint-utils</a>
<a href="https://www.kaggle.com/siriuself/albert-yes-no-fn-builder">https://www.kaggle.com/siriuself/albert-yes-no-fn-builder</a>
<a href="https://www.kaggle.com/siriuself/albert-yes-no-modeling">https://www.kaggle.com/siriuself/albert-yes-no-modeling</a>
<a href="https://www.kaggle.com/siriuself/albert-yes-no-utils">https://www.kaggle.com/siriuself/albert-yes-no-utils</a>
<a href="https://www.kaggle.com/siriuself/tokenization">https://www.kaggle.com/siriuself/tokenization</a>
<a href="https://www.kaggle.com/siriuself/create-submission">https://www.kaggle.com/siriuself/create-submission</a></p>
<p>My model is simple, BERT large whole-word-masking uncased, retrained using the start/end logit loss only without the answer type loss. Using provided nq train tf record with the following setting:
<strong>batch_size</strong>: 32
<strong>epoch</strong>: 2
<strong>alpha</strong>: 2e-5, but use ckpt-15000 (so stop at around 1 epoch)
Note that because of learning rate warmup/decay, this is different from training with 2e-5 for 1 epoch</p>
<p>Then I totally disregard answer type classification (since I don't have it), and rely on threshold settings for long and short answers, tuned on the dev set with nq_eval. And yes, I let go of all the YES/NO questions.</p>
<p>I did try adding an ALBERT xxlarge yes/no verifier after the BERT stage, which seemed to improve for like 1 pt on dev, but apparently not on the LBs somehow. My kernel includes the ALBERT part too. There isn't much insight in the utility scripts, except the modifications I made in order to restore the checkpoint into some contrib layers forced to be re-written in keras (e.g. LayerNormalization). It was a nightmare..</p>
<p><strong>*** some reflections/insights ***</strong><br>
I started the competition way too late and was on the wrong ALBERT-taking-too-long direction for quite a while, ending up with little time for tuning. I still have a very strong feeling that the "joint" part of bert-joint might be of little use, since we already know that:
1) BERT-like structures are poor at passage ranking, and to make them better we need at least as many passages as in MS-MARCO
2) We only have like 1-3% YES/NO in our training data. Very unbalanced.
Based on my inspection and verification experiment this might well be the case, which means the answer type classifier might just be reduced to a question type classifier (a much easier task for BERT to see), which would be a bad indicator of what type of answer the passage contains. It might not be a good idea to include it in training in the first place, and it'd be a disaster if you over-rely on the answer type logits for post-processing.</p>
<p>Let me know if you have similar/opposite findings. It's just my feeling anyway, with a bit of confirmation of my score, obtained by doing nothing other than removing the answer type loss in training. This is almost higher than my ALBERT-xxlarge-joint model too.. I almost felt like doing more experiments and writing a paper on the classification power reduction, but guess it's too trivial.</p>
<p>Lastly, we are hiring intermediate/senior NLP engineer/researcher/scientist, with possibility of sponsorship to come to our Canadian headquarter in Waterloo, Ontario for strong candidates. Bilingualism in English and Mandarin is a plus.</p> |
TensorFlow 2.0 Question Answering | 1st place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>👍 👍 👍 </p>
<blockquote>
<p>I want to clarify if understanding of your ensembling is right: Bert-base to predict "has long answer or not". if at least one of the spans of a example has long answer, then use big models to predict all spans of this example, right?</p>
</blockquote>
<p>The proposal (pre-selection) works on candidates, not examples.
Assume an example has 100 candidates, the bert-base model will predict on all the 100 candidates, rank them with the long answer probabilities, then, choose the most probable 5 candidates for the bert-large to predict on. This way the big models will reduce the workload by 95%.</p>
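<p>A minimal sketch of this candidate pre-selection cascade; <code>small_model.score</code> and <code>big.score</code> are illustrative interfaces, not the author's actual code:</p>
<pre><code>def cascade_predict(example, small_model, big_models, top_k=5):
    question = example["question"]
    candidates = example["long_answer_candidates"]
    # cheap model scores every candidate
    probs = [small_model.score(question, c) for c in candidates]
    keep = sorted(range(len(candidates)), key=lambda i: probs[i], reverse=True)[:top_k]
    # expensive models only see the top-k candidates (~95% less work for 100 candidates)
    return {i: [big.score(question, candidates[i]) for big in big_models] for i in keep}
</code></pre>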
<p>I'm not sure if there is (label) distribution difference.
Indeed, the decision to try lowering the thresholds was driven by my submissions' public LB scores. Before I submitted the 5-model ensemble, I had a 3-model ensemble (bert-base, bert-large, albert-xxl) which scored 0.68 public LB. The 5-model ensemble had an extra bert-large and an extra albert-xxl, so it should have been better than the 3-model ensemble, but it only scored 0.66. I double-checked my code and found that the 3-model ensemble used lower thresholds than the 5-model ensemble. That's why I tried lowering the thresholds.
The funny part is the 5-model ensemble scored 0.69 in private LB whereas the 3-model ensemble still scored 0.68 private.</p>
<p>It's similar to hard negative mining used in computer vision object detection</p>
<p>Please see my answers here
<a href="https://www.kaggle.com/c/tensorflow2-question-answering/discussion/127551#728519">https://www.kaggle.com/c/tensorflow2-question-answering/discussion/127551#728519</a></p>
<p><strong>I like the "hard negative sampling" idea - making the training harder to get better results. How much of a bump did this provide, if you've kept track?</strong>
I don't, unfortunately.</p>
<p><strong>Can you say some more about your fine tuning process?</strong>
The tuning process of the three types of models are generally same. They were tuning for 3-4 epochs. Early stopping was based on validation performance. The validation set (the dev) is the standard dataset provided in the NQ dataset along with the code for F1 score calculation. </p>
<p><strong>Why no cased models?</strong>
I remember seeing the advice to not use a cased model unless there is a strong reason to (maybe in the BERT paper?)</p>
<p><strong>How did you fit all those models into the kernel runtime limit? Sounds like a few other competitors got timeouts. Anything special to speed up inference?</strong>
Good question. I used the bert-base model for candidate proposal. From each document, the bert-base only proposed a small set of most probable candidates, and the bigger models only predicted on those candidates. I saw the total kernel inference time was only about an hour.</p>
<p><strong>How did your ensembling work?</strong>
It was all after softmax. Weighted average. Albert has slightly higher weight. The exact weights are not important.
I tried roberta and xlnet. Performance of roberta was much worse, there might be something wrong in my implementation. For Xlnet, I couldn't even run the provided huggingface squad tuning code. I'm sure these two could make even better ensemble together with bert and albert but I just didn't have time and energy to make them work correctly.</p>
<p>I followed the old school rule of thumb for choosing submissions: pick one with best local score and the other one with best LB score💯 </p>
<p><strong>Basically you train a baseline model and use it to predict all candidates in your validation data.</strong>
What I did was I trained the model on the training set and used the trained model to predict on the same training set. </p>
<p>There was no actual thresholds tuning. After I submitted my final ensemble (0.66 public LB) with locally optimized thresholds, I only adjusted the thresholds two times, reducing both long and short thresholds by 0.1 (0.7 public LB) and reducing both long and short thresholds by 0.2 (0.71 public LB). And I went to shopping, came back 30 mins before the competition deadline, and selected the 0.71 as my second selection.😁 </p>
<h1><strong>04/01/2020: Source code is attached below.</strong></h1>
<p>Thanks to the Kaggle and Tensorflow team for holding this competition. I was new to question answering, it took me more than 5 weeks to make my first real submission, and I have learnt a lot during the journey. My initial plan before joining the competition was to learn both QA and TF2.0, but in the end I didn't have time to touch TF2.0, so my solution stays in pure pytorch. Thanks to <a href="/sakami">@sakami</a> for the great kernel <a href="https://www.kaggle.com/sakami/tfqa-pytorch-baseline">https://www.kaggle.com/sakami/tfqa-pytorch-baseline</a>. Your kernel was the starting point of my journey. And of course thanks to huggingface (<a href="https://github.com/huggingface/transformers">https://github.com/huggingface/transformers</a>), NLP finetuning is made much easier. </p>
<p>My solution is described below.</p>
<h3><strong>- Overview</strong></h3>
<p>I trained on the provided candidates instead of sampling from the original documents (examples) as done in the baseline paper (<a href="https://arxiv.org/abs/1901.08634">https://arxiv.org/abs/1901.08634</a>). Since there are a total of 40 million candidates in the training data, for each epoch, I sampled only one negative candidate from each document. For more efficient training, hard negative sampling was used to replace uniform random sampling. The final submission was an ensemble of five models. </p>
<h3><strong>- Sampling Strategy</strong></h3>
<p>Initially, I tried uniform sampling on the negative candidates, but the result was unsatisfactory. The reason might be that most of the negative candidates are "too easy", the model might only need to learn some "basic" patterns for good candidate-level classification performance. But in the testing stage our actual goal is to predict the most probable positive candidate from each document, and this document-level classification is a more difficult task. So I replaced the uniform sampling by hard negative sampling to increase the difficulty of the candidate-level training, as expected, the performance was greatly improved. To perform hard negative sampling in the following models, I firstly trained a model with uniform sampling, and predicted on the whole training data, and stored the answer probability for each negative candidate. The last step was to normalize the probabilities of negative candidates within documents to form a distribution. For the following model training the negative candidates could be sampled from the probability distribution.</p>
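<p>A minimal sketch of the sampling step described above, assuming the first-pass model's answer probabilities for a document's negative candidates have already been collected (names are illustrative):</p>
<pre><code>import numpy as np

def sample_hard_negative(neg_candidate_ids, neg_answer_probs, rng=None):
    """Draw one negative candidate per document, biased towards hard negatives."""
    if rng is None:
        rng = np.random.default_rng()
    p = np.asarray(neg_answer_probs, dtype=np.float64)
    p = p / p.sum()                    # normalise within the document
    return rng.choice(neg_candidate_ids, p=p)
</code></pre>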
<h3><strong>- New Tokens</strong></h3>
<p>Following the baseline paper, I added html tags as new tokens for better model performance. All the 9 tags from the Data Statistics section of <a href="https://github.com/google-research-datasets/natural-questions">https://github.com/google-research-datasets/natural-questions</a> were added. For html tags that are not among the 9 added tokens, I replaced them with a unique token in the tokenization dictionary or simply added another new token to represent them. I did not have time to try adding paragraph or table numbers similar to what the baseline paper does.</p>
<h3><strong>- Model Architecture, Training and Evaluation</strong></h3>
<p>Overall, the model architecture was the same as the baseline paper (a 5-class classification branch + 2 span classification branches). The five classes were "no_answer", "long_answer_only", "short_answer", "yes", "no". In my case there was no span prediction for answers without a short answer span because I directly used candidates. The loss update of the span prediction branch was simply ignored if no short answer span existed during training. In the testing stage, for each document, I used 1.0-prob(no_answer) as the long answer score (confidence) for each candidate, and the candidate with the highest confidence was chosen to represent the document. Short answer spans were forced to be within the highest-score long answer candidate (not sure if this is necessary). I used prob(short_answer)+prob(yes)+prob(no) as the short answer score. The exact class of the short answer was determined by the maximum of the three prob values. For span prediction, the output token-level probabilities were mapped to word-level (whitespace tokenized) probabilities for easier ensembling of models with different tokenizers. </p>
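<p>A small sketch of that scoring, assuming <code>probs</code> is the softmax over the five answer-type classes in the order [no_answer, long_answer_only, short_answer, yes, no]:</p>
<pre><code>def candidate_scores(probs):
    long_score = 1.0 - probs[0]                    # 1 - P(no_answer)
    short_score = probs[2] + probs[3] + probs[4]   # P(short) + P(yes) + P(no)
    best = max(range(2, 5), key=lambda i: probs[i])
    short_type = {2: "short_span", 3: "YES", 4: "NO"}[best]
    return long_score, short_score, short_type
</code></pre>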
<h3><strong>- Models and Results</strong></h3>
<p>My final submission was an ensemble of one Bert-base, two Bert-large (WWM), and two Albert-xxl (v2) models, all uncased. The Bert large and Albert models had been tuned on the SQUAD data before training. Below list their validation performance on the dev set using the code <a href="https://github.com/google-research-datasets/natural-questions/blob/master/nq_eval.py">https://github.com/google-research-datasets/natural-questions/blob/master/nq_eval.py</a>. I did not try to implement the competition metric.</p>
<p>| Model | long-best-threshold-f1 | short-best-threshold-f1 |
| --- | --- | --- |
| Bert-base | 0.618 | 0.457 |
| Bert-large | 0.679 | 0.541 |
| Albert-xxl | 0.700 | 0.555 |
| ensemble | 0.731 | 0.582 |</p>
<h3><strong>- Final LB results</strong></h3>
<p>My best ensemble only achieved 0.66 public LB (0.69 private) using the optimized thresholds. At that time I had already lost most of my hope to win. In my last 2-3 submissions, I arbitrarily played with the thresholds. One of the submissions scored 0.71 (both public and private LB), and I chose it and won the competition. Unbelievable.</p>
<p>Source code is attached.</p> |
TensorFlow 2.0 Question Answering | 21th place solution, puzzlingly shaking from public LB 3th | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for your reply, a very insightful idea. We did touch the answer type logits at least 3 times in the post-processing stage, will check it. If so, do you think training an extra yes/no model on balanced, resampled data is a good idea?</p>
<p><a href="https://github.com/mikelkl/TF2-QA">Code for Solution</a></p>
<p>Thanks to Kaggle for holding this wonderful competition. Big thanks to my awesome teammates <a href="https://www.kaggle.com/ewrfcas">@ewrfcas</a> and <a href="https://www.kaggle.com/leolemon214">@leolemon214</a>. It's a little pity to drop from 3rd place on the public LB; we expect to fetch a gold medal in the future. </p>
<p><strong>We carefully read the other top solutions and are still puzzled by the 4% drop on the private LB. Can someone help us figure out the reason for this shake-up?</strong></p>
<p>Below are the valid parts of our solution. All the following experiments were mainly performed on an offline <strong>dev set containing 1600 examples</strong>, and some results have been verified on the public LB.</p>
<h2>1. Preprocessing</h2>
<p>| No | Technique | Pros | Cons | Effect |
| ---- | --------------------------------- | ------------------------------------------------------------ | --------------------------------- | ------------------------------------- |
| 1 | TF-IDF paragraph selection | Shortens the doc, resulting in faster inference speed and better accuracy | May lose some context information | - dev f1 +1.8%,<br>- public LB f1 -1% |
| 2 | Sample negative features till 1:1 | Balance pos and neg | Cause longer training time | dev f1 +2.248% |
| 3 | Multi-process preprocessing | Accelerate preprocessing, especially on training data | Require multi-core CPU | xN faster (with N processes) |</p>
<h2>2. Modeling</h2>
<p>| No | Model Architecture | Idea | Performance |
| ---- | -------------------------------------------------- | ------------------------------------------------------------ | -------------------- |
| 1 | Roberta-Large joint with long/short span extractor | 1. Jointly model:<br>- answer type<br>- long span<br>- short span<br>2. Output topk start/end logits/index | dev f1 63.986% |
| 2 | Albert-xxlarge joint with short span extractor | Jointly model:<br>- answer type<br>- short span | def short-f1 69.364% |</p>
<p>All of above model architectures were pretrained on SQuAD dataset by ourselves.</p>
<h2>3. Trick</h2>
<p>| No | Trick | Effect |
| ---- | ------------------------------------------------------------ | ---------------------------------- |
| 1 | If answer_type is yes/no, output yes/no rather than short span | public LB f1 +6% |
| 2 | 1. If answer_type is short, output long span and short span<br>2. If answer_type is long, output long span only<br>3. If answer_type is none, output neither long span nor short span | public LB f1 +8% |
| 3 | Choose the best long/short answer pair from topk * topk kind of long/short answer combinations | dev f1 +0.435% |
| 4 | <code>long_score = summary.long_span_score - summary.long_cls_score - summary.answer_type_logits[0]</code><br><code>short_score = summary.short_span_score - summary.short_cls_score - summary.answer_type_logits[0]</code> | - dev f1 +2.12%<br>- public LB +2% |
| 5 | Increase long [CLS] logits multiplier threshold to increase null long answer | dev long-f1 +3.491% |
| 6 | Decrease short answer_type logits divisor threshold to increase null short answer | dev short-f1 ? |</p>
<h2>4. Ensemble</h2>
<p>| No | Idea | Effect |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | For long answer, We vote long answers of 2 <code>Roberta-Large joint with long/short span extractor</code> models | dev long-f1 +3.341% |
| 2 | For short answer, use step 1 result to locate predicted long answer candidate as input, We vote short answers of 2 <code>Roberta-Large joint with long/short span extractor</code> models and 4 <code>Albert-xxlarge joint with short span extractor</code> models | - dev short-f1 +2.842% <br>- dev f1 67.569%, +2.635% <br>- public LB 71%, +5%<br>- private LB 67% |</p>
<p><a href="https://github.com/mikelkl/TF2-QA">Code for Solution</a></p> |
TensorFlow 2.0 Question Answering | 23rd place solution: ensemble, rank passage and predict span | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, thanks to Kaggle for a great, challenging competition, and congrats to the winners.
This was my first time handling a QA task, so I was able to learn a lot from you all.</p>
<p>Secondly, thanks you all kagglers those had many discussions and ideas here, especially <a href="/christofhenkel">@christofhenkel</a>, <a href="/boliu0">@boliu0</a>, <a href="/higepon">@higepon</a> and <a href="/kashnitsky">@kashnitsky</a> .</p>
<p>This is my brief solution. I welcome questions and advice.</p>
<h1>My Solution</h1>
<p>Combining 1 ranker model & 4 span prediction ensemble model.</p>
<h2>Whole Prediction</h2>
<p>1) compute passage score for all long answer candidates on test dataset
2) select top 10 score passages for each record
3) feed selected passage into span prediction models
4) get averaged score by each model</p>
<h2>Ranker Model</h2>
<p>One of the problems is that the NQ dataset has very many candidates for the long answer. These include obviously negative passages, and it takes much time to predict on all of them.</p>
<p>I used a <code>bert-base-uncased</code> pre-trained model and constructed a binary classification model to predict whether a passage includes the long/short answer or not.
It gets about 0.98 recall@10 on my validation dataset.
It takes about 5 minutes for the public test dataset.
Other settings are the same as the span prediction model.
With this model, I make a <code>ranker-selected</code> dataset for the train and test data by selecting the top 10 scoring candidates for each record.</p>
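<p>A minimal sketch of the top-10 selection and the recall@10 check mentioned above (illustrative data layout, not the author's code):</p>
<pre><code>import numpy as np

def select_top_k(candidate_scores, k=10):
    return np.argsort(candidate_scores)[::-1][:k]

def recall_at_k(examples, k=10):
    """examples: list of dicts with 'scores' (one per candidate) and 'gold_idx'."""
    hits = sum(ex["gold_idx"] in select_top_k(ex["scores"], k) for ex in examples)
    return hits / len(examples)
</code></pre>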
<h2>Span Prediction Model</h2>
<p>I used 4 models for ensemble:</p>
<ol>
<li><code>bert-large-uncased-squad1</code> pre-trained + 1 epoch on NQ dataset</li>
<li><code>bert-large-uncased-squad2</code> pre-trained + 1 epoch on NQ dataset</li>
<li><code>spanbert-large-cased-squad2</code> pre-trained + 1 epoch on NQ dataset</li>
<li><code>bert-large-uncased-squad2</code> pre-trained + 1 epoch on <code>ranker-selected</code> NQ dataset</li>
</ol>
<p>All models are bert-joint based model.</p>
<h3>Training</h3>
<p>For the 1st~3rd models, I use the whole NQ dataset.
In training, as reported in <a href="https://arxiv.org/abs/1909.05286">Frustratingly Easy Natural Question Answering</a>, I use 196 as the stride and different down-sampling rates for answerable and non-answerable questions (0.01 and 0.04 respectively).
Batch size is 32, max learning rate is 3e-5.
For the 4th model, I used the <code>ranker-selected</code> NQ training dataset and adjusted the sampling rates to 0.03 for answerable and 0.12 for unanswerable.</p>
<h3>Inference</h3>
<p>I used only the <code>ranker-selected</code> test dataset. This gives a slight improvement in val score over predicting all candidates and, more importantly, it makes prediction faster.
It takes about 3 minutes for each model's prediction.
I got 0.65 private LB with a single model and 0.67 private LB with the ensemble.</p>
<h1>Trials which didn't works for me</h1>
<ul>
<li>using albert or xlnet didn't improve scores. Maybe I needed more tuning.</li>
<li>Attention over Attention didn't help. But I'm not confident in my implementation.</li>
<li>BERT layer combination of the last 2, 4, 8, 12 layers. It slightly improved, but pre-training on SQuAD was better.</li>
<li>kinds of dropout on the dense layer.</li>
<li>label smoothing on start and end positions.</li>
<li>combining the ranker model score into span prediction led to worse results.</li>
<li>dividing short and long span prediction, or predicting only short spans, gave worse results.</li>
<li>kinds of preprocessing
<ul><li>no special token</li>
<li>partly use special token in BERT-joint</li></ul></li>
<li>kinds of postprocessing
<ul><li>use only max context position as score</li>
<li>get all logits score and obtain top k candidates</li></ul></li>
</ul>
<p>Thanks.</p> |
TensorFlow 2.0 Question Answering | 27th solution with luck and some questions from me | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for providing this competition; it is a big improvement for me to take on a competition solo and reach this place.</p>
<h2><strong>1. Weird ROBERTA Architecture</strong></h2>
<p>Based on (<a href="https://github.com/bojone/bert4keras">https://github.com/bojone/bert4keras</a>), my solution uses a weird ROBERTA structure implemented in Keras (<a href="https://www.kaggle.com/httpwwwfszyc/bert4keras4nq">https://www.kaggle.com/httpwwwfszyc/bert4keras4nq</a>), which differs from the huggingface version in:
1. a 2*1024 token_type_id embedding layer (index 0 uses the pretrained weights, index 1 is set to np.zeros((1,1024)))
2. no padding masking
3. weird zero masking in the attention mask:</p>
<pre><code>def build(self):
    x_in = Input(shape=(512, ), name='Input-Token')
    s_in = Input(shape=(512, ), name='Input-Segment')
    x, s = x_in, s_in

    sequence_mask = Lambda(lambda x: K.cast(K.greater(x, 0), 'float32'),
name='Sequence-Mask')(x)
# Embedding
x = Embedding(input_dim=self.vocab_size,
output_dim=self.embedding_size,
embeddings_initializer=self.initializer,
name='Embedding-Token')(x)
s = Embedding(input_dim=2, #1 or 2 , 2 finally because roberta need to train it
output_dim=self.embedding_size,
embeddings_initializer=self.initializer,
name='Embedding-Segment')(s)
x = Add(name='Embedding-Token-Segment')([x, s])
if self.max_position_embeddings == 514:
x = RobertaPositionEmbeddings(input_dim=self.max_position_embeddings,
output_dim=self.embedding_size,
merge_mode='add',
embeddings_initializer=self.initializer,
name='Embedding-Position')([x,x_in])
else:
x = PositionEmbedding(input_dim=self.max_position_embeddings,
output_dim=self.embedding_size,
merge_mode='add',
embeddings_initializer=self.initializer,
name='Embedding-Position')(x)
x = LayerNormalization(name='Embedding-Norm')(x)
if self.dropout_rate &gt; 0:
x = Dropout(rate=self.dropout_rate, name='Embedding-Dropout')(x)
if self.embedding_size != self.hidden_size:
x = Dense(units=self.hidden_size,
kernel_initializer=self.initializer,
name='Embedding-Mapping')(x)
layers = None
for i in range(self.num_hidden_layers):
attention_name = 'Encoder-%d-MultiHeadSelfAttention' % (i + 1)
feed_forward_name = 'Encoder-%d-FeedForward' % (i + 1)
x, layers = self.transformer_block(
inputs=x,
sequence_mask=sequence_mask,
attention_mask=self.compute_attention_mask(i, s_in),
attention_name=attention_name,
feed_forward_name=feed_forward_name,
input_layers=layers)
x = self.post_processing(i, x)
if not self.block_sharing:
layers = None
outputs = [x]
</code></pre>
<p>I concatenate the last 4 layers and put a single linear output for each output head. </p>
<h2><strong>2. data distribution</strong></h2>
<p>The sample ratio of non-zero windows (stride 256) vs zero windows (stride 128) is 1:4.</p>
<h2><strong>3. Training</strong></h2>
<ol>
<li>Use RAdam with 0.05 warmup and train for 1 epoch</li>
<li>set different sample weights to match the distribution of the dev set (I use 2 dev sets: the 135000th to 140000th section and the 302373rd-to-end section). As a result my loss is:</li>
</ol>
<p><code>Total_loss = loss_weights1*sample_weights*start_loss + loss_weights2*sample_weights*end_loss + loss_weights3*sample_weights*answertype_loss</code></p>
<p>The loss weights for [start, end, answer_type] are 1 : 1 : (1/sample_weights.mean())</p>
<h2><strong>4. Threshold killing False Positive</strong></h2>
<p>The result of my solution was CV 0.478, because there were too many false negative samples after I fixed my metric error 10 days ago. So, searching over my 2 dev sets, I finally chose a safe threshold (one slightly smaller than the threshold that reaches max CV, in order to lower the risk). If a short answer score is less than 0.5 or a long answer score is smaller than 0.1, the answer is left blank.</p>
<p>As I only upload 1 model, I use stride=128 for inference.
My result: CV 0.523, public LB 0.63, private LB 0.65</p>
<h2><strong>5 My question</strong></h2>
<ol>
<li>how to mask padding:
I tried to add a padding mask before the embedding layer, but that raises an error because the following layers do not support masking... So I had to use this weird ROBERTA architecture.</li>
<li>why my attention mask never works:
Due to my time limit, when I realised there was some mistake in the attention mask because of the code
<code>
sequence_mask = Lambda(lambda x: K.cast(K.greater(x, 0), 'float32'),
name='Sequence-Mask')(x)
</code>
I have no time to test it. So I just replace it by:
<code>
sequence_mask = Lambda(lambda x: K.cast(K.not_equal(x, 1), 'float32'),
name='Sequence-Mask')(x)
</code>
which masks token 1 because 1 represents padding. After TPU training I got a 0.53 CV score. But when I plugged this into the GPU, the public LB only reached 0.48, which is weird. So my question is: does my replacement of the attention_mask really do what I expect (mask token 1 in the value matrix of the attention layer)?</li>
</ol>
<h2>Additions:</h2>
<ol>
<li>Because the private data seems close to the public data, my weighting strategy might not have been good.</li>
<li>RobertaPositionEmbedding comes from huggingface transformers</li>
</ol>
<p>You are welcome!</p> |
TensorFlow 2.0 Question Answering | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. That's a good question. I should have added a <em>What did not work</em> section. You should be able to use the <code>--model_name_or_path roberta-large</code> flag for the training script. However, the performance was just like <code>bert-large-uncased</code> (i.e. ~0.70 LB). As I saw no gains from using this model I switched back to BERT. I still have no real explanation why Roberta didn't perform a lot better.</p>
<p>haha, I ran out of hyperparameters and ideas 😜</p>
<p>Thanks </p>
<p>> Build a model is very simple, but build a simple model is the hardest thing there is -- Guanshuo Xu</p>
<p>🤓 </p>
<p>1 Yes I do. The scores are (concat F1, short F1, long F1) and local score is:
* With ~3500 empty and ~3000 non-empty validation samples: (0.563, 0.481, 0.615)
* Only ~3000 non-empty: (0.722, 0.576, 0.823)</p>
<p>Maybe check this script if you are using the same metric: <a href="https://github.com/see--/natural-question-answering/blob/master/eval_server.py">https://github.com/see--/natural-question-answering/blob/master/eval_server.py</a></p>
<p>The validation set is here:
<a href="https://github.com/see--/natural-question-answering/blob/master/val_ids.csv">https://github.com/see--/natural-question-answering/blob/master/val_ids.csv</a></p>
<p>2 No, I treat it as softmax classification over whether each token is the opening HTML tag, and predict the [CLS] token if there is no long answer.</p>
<p>No. If we ignore <code>YES/NO</code> answers there are 4 (short, long) possibilities: (empty, empty), (empty, span), (span, empty), (span, span). They can all be predicted by the model.</p>
<p>Good evening,</p>
<p>first of all, I'd like to thank Kaggle and the hosts for this awesome challenge! It was really fun to work with TF2.0 and TPUs. What would have taken days on my local 2 x 1080 TI machine just took a couple of hours. For example, the actual training time for my final model (excluding tokenization and post-processing) is just a little more than 2 hours.</p>
<p>Secondly, congrats to all the winners. I locked my submissions a week ago with +0.04 on public but everyone kept improving. Maybe I should have continued working on this challenge as well.</p>
<p>My solution is just a single TF2.0 model. It uses custom heads and a BERT transformers backbone (large version). For modeling and training I am using the great <a href="https://github.com/huggingface/transformers">transformers</a> library. I think that the <a href="https://github.com/see--/natural-question-answering/blob/master/models.py#L8-L28">following snippet</a> is useful to understand the modeling:
<pre><code>class TFBertForNaturalQuestionAnswering(TFBertPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        self.num_labels = config.num_labels

        self.bert = TFBertMainLayer(config, name='bert')
        self.initializer = get_initializer(config.initializer_range)
        self.qa_outputs = L.Dense(config.num_labels,
            kernel_initializer=self.initializer, name='qa_outputs')
        self.long_outputs = L.Dense(1, kernel_initializer=self.initializer,
            name='long_outputs')

    def call(self, inputs, **kwargs):
        outputs = self.bert(inputs, **kwargs)
        sequence_output = outputs[0]
        logits = self.qa_outputs(sequence_output)
        start_logits, end_logits = tf.split(logits, 2, axis=-1)
        start_logits = tf.squeeze(start_logits, -1)
        end_logits = tf.squeeze(end_logits, -1)
        long_logits = tf.squeeze(self.long_outputs(sequence_output), -1)
        return start_logits, end_logits, long_logits
</code></pre>
<p>As you can see, the natural question answering task can be treated like SQUAD-2.0 with an additional head for long answers. Note that we just need a single output: The opening tag of the HTML bounding box. I guess most competitors used a similar modeling so I think what made the difference to most other solutions is the sampling.</p>
<p>I changed the empty answer ratio so that it is similar to the full dataset. I.e. roughly as many empty answers as answers with a long answer. I started with a really low empty answer ratio which I got from the <a href="https://arxiv.org/abs/1901.08634"><code>bert-joint</code> paper</a>, but I couldn't reach a good score. I tuned a few hyper parameters but overall I got good results with a wide range of parameters. Adding HTML tags as custom tokens helped a bit. I also tried different start weights and found that:</p>
<p><code>bert-large-uncased</code> (~0.70 LB) < <code>bert-large-uncased-whole-word-masking</code> (~0.72 LB) < <code>bert-large-uncased-whole-word-masking-finetuned-squad</code> (~0.73 LB).</p>
<p>That's about it. Thanks to <a href="/boliu0">@boliu0</a>, <a href="/christofhenkel">@christofhenkel</a> and <a href="/kentaronakanishi">@kentaronakanishi</a> for fixing and providing the metric!</p>
<p>Please refer to my repository for implementation details and instructions to reproduce:
* <a href="https://github.com/see--/natural-question-answering">https://github.com/see--/natural-question-answering</a></p>
<p>You can find the 2nd place kernel and pretrained weights on Kaggle:
* <a href="https://www.kaggle.com/seesee/submit-full">https://www.kaggle.com/seesee/submit-full</a>
* <a href="https://www.kaggle.com/seesee/nq-bert-uncased-68">https://www.kaggle.com/seesee/nq-bert-uncased-68</a></p>
<p>Feel free to ask questions and / or create GitHub issues.</p>
<blockquote>
<p>Intuitively using separate models for short/long could get even higher?</p>
</blockquote>
<p>Maybe, I prefer a single model and multiple targets. It's easier to tune, requires less code and less training time.</p>
<p>I didn't test it, but I am quite sure that the score won't change. I'd be surprised if it would get better. I checked that the model always predicts one of the opening tags or the CLS token. <code>long_token == cstart</code> should be fine.</p>
<p>Thanks for the kind words 😄 </p>
<p>1)
> Can you explain what the code is doing with crops? Seems like you are creating a text window around the true short answer for training purposes.</p>
<p>Yes, the crop length is set so that we get the desired ratio between empty and non-empty training samples. By using crops we remove text that would result in empty samples.</p>
<p>> Is that better than selecting the relevant long span among the candidates?</p>
<p>This was not tested. I don't know.</p>
<p>2) Thank you for writing this post! It's actually a bug. At least, it's not what I thought I was using as loss. My intention was:</p>
<pre><code># fixed loss
loss = ((tf.reduce_mean(start_loss) + tf.reduce_mean(end_loss)) / 2.0 +
        tf.reduce_mean(long_loss)) / 2.0
</code></pre>
<p>This is a better approximation of the metric. Long and short answers should be weighted equally! I think it's better than taking the <code>mean</code> of all 3 loss terms (untested). After retraining with this fixed loss, public and private LB improve (entries are sorted by private LB but truncated, thus both show 0.71):</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F712087%2Fd82197c21fe72cbe2171f7c3a864080f%2Ffix_loss.png?generation=1580200920211234&alt=media" alt=""></p>
<p>> Why did you choose to train for exactly 2 epochs? Would you recommend using validation every X steps as a stopping condition?</p>
<p>[1, 2, 3, 4] were tried and 2 gave the best local LB. Personally, I would not do it. It can help, but there are more important hyper parameters (e.g. sample ratios).</p> |
TensorFlow 2.0 Question Answering | 30th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry, the images were not showing. Fixed the bug! </p>
<p>I hope that you liked.</p>
<p>You can see it on their GitHub: <a href="https://github.com/google-research/ALBERT/">ALBERT</a>
In the README you can find all the instructions.</p>
<p>Also check the article that the Albert development team did:
<a href="https://arxiv.org/pdf/1909.11942.pdf">Article</a></p>
<p>Hello everyone.</p>
<p>This was my first real experience in kaggle.</p>
<p>Briefly, I want to say that I'm very glad that my team achieved 30th place in this competition. I want to thank my teammates <a href="/xiaokangwang">@xiaokangwang</a> @xiao-xiao for all the help and work during this competition. Also thanks for all the answers given during the competition explaining everything and giving awesome ideas!</p>
<p>Without further ado, the solution mainly focused on three things:
-> Fine-tuning Albert Large
-> DataAugmentation
-> Understanding thresholds.</p>
<p>The model used was <strong>Albert xLarge version 2</strong> trained on SQUAD2.0 and Fine-tuned on the tiny-dev. One of the important things was the <strong>DataAugmentation</strong> used for training, changing the document_text replacing words by synonyms using WordNet corpus.</p>
<p><img src="https://i.imgur.com/IrA2CmI.png" alt="Synonyms Examples"></p>
<p>There wasn't much magic here. We also tried xxLarge, which was only able to run with a huge doc_stride, giving bad results. </p>
<p>Removing html tags helped as well.</p>
<p>Understanding the data and the output of the model was an important step.
Long answers appeared 50% of the time;
short answers appeared 35% of the time.</p>
<p><img src="https://i.imgur.com/eHTopaB.png" alt="Output score"></p>
<p>In this regard, we used threshold values based on the outputs, using 50% of all the results for the long_answers and 35% for the short_answers. This was achieved by ordering the list of results and selecting the value in the middle for the long_answer, and the value at the 35% mark for the short_answer. For example, given an output list [1,2,3,4,5] we selected threshold 3 for the long_answer and 2 for the short_answer.</p>
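<p>A small sketch of this kind of percentile-based threshold choice (one possible reading of the description above; the exact direction of the cut is the authors' choice, and the score arrays are toy values):</p>
<pre><code>import numpy as np

def quantile_threshold(scores, q):
    """Pick the score value sitting at quantile q of all predicted scores."""
    return float(np.quantile(scores, q))

scores = np.array([1, 2, 3, 4, 5], dtype=float)  # toy outputs, as in the example above
long_thr = quantile_threshold(scores, 0.50)      # middle of the sorted list -> 3.0
short_thr = quantile_threshold(scores, 0.35)     # ~35% position -> about 2
</code></pre>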
<p>I hope this gives good ideas for future work. </p>
<p>Best Regards,
Pedro Azevedo</p> |
TensorFlow 2.0 Question Answering | 31st solution with custom loss | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Likewise.
The jump was a combination of the custom loss and the down sampling.</p>
<p>I borrowed the idea from <a href="https://web.stanford.edu/class/cs224n/reports/default/15812785.pdf">https://web.stanford.edu/class/cs224n/reports/default/15812785.pdf</a> section 3.2.2😀</p>
<p>Thank you! I didn’t. I just used the original definition of the scores. Maybe I should have done that. Hope some of top teams can share their thoughts on it :)</p>
<p>Thank you Kaggle and Kaggle community for this awesome competition. I learned a lot.
I wasn’t able to do almost anything the last two weeks due to my personal reason, but it has been really fun.</p>
<h2>My model</h2>
<ul>
<li>Public 0.68, Private 0.65</li>
<li>Single PyTorch Bert model</li>
<li>fine-tune bert-large-uncased-whole-word-masking-finetuned-squad for 1 epoch.
<ul><li>2 epochs got better Private 0.68 Public 0.65 but I didn't choose it :(</li></ul></li>
<li>learning rate 3e-5 instead of 5e-5</li>
<li>Down sampled null instance training data.</li>
<li>Penalize training data with answer in stride in loss function.</li>
<li>Simply removed HTML tags</li>
<li>Parameters search using short/long score.</li>
</ul>
<h2>down sampling</h2>
<p><code>
flattened_examples = list(itertools.chain.from_iterable(examples))
null_instances = []
annotated_instances = []
for e in flattened_examples:
    if e.class_label == 'unknown':
        null_instances.append(e)
    else:
        annotated_instances.append(e)
len_null = len(null_instances)
len_downsampled = int(len_null / 50) if len_null > 50 else 0
downsampled = random.sample(null_instances, len_downsampled)
logging.info(' down sampling nonnull(%d) null(%d) to null(%d)', len(annotated_instances), len_null, len(downsampled))
self.examples = downsampled + annotated_instances
</code></p>
<h2>loss function</h2>
<pre><code>def loss_fn(preds, labels, no_answers):
    start_preds, end_preds, class_preds = preds
    start_labels, end_labels, class_labels = labels
    has_answers = [not x for x in no_answers]

    start_preds_no_answer = start_preds[no_answers]
    start_preds_has_answer = start_preds[has_answers]
    end_preds_no_answer = end_preds[no_answers]
    end_preds_has_answer = end_preds[has_answers]
    class_preds_no_answer = class_preds[no_answers]
    class_preds_has_answer = class_preds[has_answers]
    start_labels_no_answer = start_labels[no_answers]
    start_labels_has_answer = start_labels[has_answers]
    end_labels_no_answer = end_labels[no_answers]
    end_labels_has_answer = end_labels[has_answers]
    class_labels_no_answer = class_labels[no_answers]
    class_labels_has_answer = class_labels[has_answers]

    loss_no_answer = 0
    loss_has_answer = 0
    # has answer
    if len(start_preds_has_answer) > 0:
        start_loss = nn.CrossEntropyLoss(ignore_index=-1)(start_preds_has_answer, start_labels_has_answer)
        end_loss = nn.CrossEntropyLoss(ignore_index=-1)(end_preds_has_answer, end_labels_has_answer)
        class_loss = nn.CrossEntropyLoss()(class_preds_has_answer, class_labels_has_answer)
        loss_has_answer = start_loss + end_loss + class_loss
    # no answer
    if len(start_preds_no_answer) > 0:
        start_loss = nn.CrossEntropyLoss(ignore_index=-1)(start_preds_no_answer, start_labels_no_answer)
        end_loss = nn.CrossEntropyLoss(ignore_index=-1)(end_preds_no_answer, end_labels_no_answer)
        class_loss = nn.CrossEntropyLoss()(class_preds_no_answer, class_labels_no_answer)
        loss_no_answer = start_loss + end_loss + class_loss
    return loss_has_answer * 2 + loss_no_answer
</code></pre>
<h2>What I didn't try</h2>
<ul>
<li>p/table tag annotations</li>
<li>TPU</li>
<li>more post processing</li>
</ul> |
TensorFlow 2.0 Question Answering | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the sponsors and kaggle for hosting such an interesting and challenging competition. Also a big thank you to <a href="/cpmpml">@cpmpml</a> for being my teammate.</p>
<h2>Brief Summay</h2>
<p>As our teamname suggests, we did everything with pytorch. In summary, we used 3 roberta-large models which were ensembled by voting. In general the input features of our models are very close to the bertjoint baseline. We used a learning rate of 1e-5, a batchsize of 16 and a simple Adam optimizer with no schedule. All models were trained for 1 epoch.</p>
<p>Roberta 1:
- initialized with roberta-large weights
- stride 128
- prediction of span & 5 answer types (unknown, yes, no, short , long)</p>
<p>Roberta 2:
- initialized with roberta-large weights, then pretrained on Squad2.0
- stride 192
- prediction of span & 2 answer types (short , long)</p>
<p>Roberta 3:
- initialized with roberta-large weights, then pretrained on Squad2.0
- additional linear layer (768→768 + relu) before predicting start, respectively end token
- stride 192
- prediction of span & 2 answer types (short , long)</p>
<p>We optimized thresholds for each of the models and set predictions below the threshold to blank. Then we used majority voting to ensemble the 3 models. Besides some smaller tricks, we predicted the test set with a stride of 224 to fit the inference of 3 models into the kernel.</p>
<h2>Longer Summary</h2>
<h3>Validation scheme</h3>
<p>As always, I start with setting up a solid validation scheme, which ideally has a high correlation to the leaderboard. It turned out harder than anticipated, since the organisers did not share enough information on the intended metric and had also implemented it incorrectly. This first phase was very frustrating and I spent quite some time reverse engineering their mistake in order to reconcile leaderboard scores. After I figured out the metric and shared it in the forum, the organizers changed the metric. Imagine my face in that moment… and believe it or not, it took me another 6 weeks to figure out the new one. In the end we used the dev set of the original NQ dataset as our validation set and had a very high lb correlation.</p>
<h3>Software</h3>
<p>I reused a lot of preprocessing scripts from bertjoint baseline shared by organisers and did all training with pytorch relying on huggingface for transformer weights and code + pytorch-lightning for writing training pipeline.</p>
<h3>Hardware</h3>
<p>I did all training on my home desktop pc (3 GTX1080Ti) and <a href="/cpmpml">@cpmpml</a> on his pc (2 GTX 1080Ti). Training one epoch took quite a while, hence we did not spent much time on hyper-parameter tuning. The training time for Roberta1 was 35h. Finetuning roberta-large on SQuAD2.0 took 30h and finetuning the resulting model to the data of this competition took about 24h when using a stride of 192.</p>
<h3>Architectures and pretrained models</h3>
<p>I fully agree with <a href="/boliu0">@boliu0</a> that it was frustratingly hard to beat the bertjoint baseline. I did a lot of experiments on different preprocessing as well as using different (in my opinion more suited) targets. But 99% of what I did was worse than the baseline. So in the end we kept the preprocessing and only adjusted the answer type targets slightly. I used distilbert for a lot of those experiments because due to its size it helps to iterate fast while giving a reasonable indication of whether an idea works or not. <br>
In general the bertjoint baseline suggests striding over the full answer with a windowing approach and concatenating those windows with the question in order to find out whether the short answer is contained in the window. One major interesting question is how to aggregate the resulting predictions. That's where we spent some time because we saw a lot of room for improvement. So what we did is map the start and end token predictions of each window back to the original answer and create an answer length x answer length heatmap. We then apply some restrictions, like e.g. short span length should be less than 30 tokens, and get the following result.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1424766%2F15f5fac1de79c9e7d6d67adb3b45b1de%2FScreenshot%20from%202019-12-21%2009-26-43.png?generation=1579778810842946&alt=media" alt=""></p>
<p>The argmax of this matrix then gives the start and end token (here 957:973). A nice thing about this approach is that you can easily blend these matrices over different models. So after we figured that out we tried different model architectures, including all the popular ones from the huggingface repo (albert, gpt2, bert, roberta, xlnet) as well as less popular ones like Spanbert. For us roberta-large worked best, with some distance to the second best, which was spanbert. Considering the time of preprocessing we thought that ensembling 2 or 3 versions of the same model type would be better than ensembling different model types, as you need to do preprocessing only once. So we continued training slightly different versions of roberta-large, including pretraining on SQuAD2.0 first, while working on probably the most important part of this competition, namely thresholding of when to set a blank prediction.</p>
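<p>As an illustration of this aggregation, here is a minimal numpy sketch (the data layout, the use of a running maximum to combine overlapping windows, and the function names are my assumptions, not the exact code): each window's start/end logits are mapped back to document token positions, accumulated into an answer-length x answer-length score matrix, and the best span is read off under a max-length constraint. Matrices produced this way by different models can simply be averaged before the argmax.</p>
<pre><code>import numpy as np

def best_span(windows, doc_len, max_span_len=30):
    """windows: list of (doc_positions, start_logits, end_logits) per sliding window,
    where start_logits/end_logits are 1-D arrays aligned with doc_positions."""
    scores = np.full((doc_len, doc_len), -np.inf)
    for doc_pos, start_logits, end_logits in windows:
        pair = start_logits[:, None] + end_logits[None, :]     # outer sum for every candidate pair
        sel = np.ix_(doc_pos, doc_pos)
        scores[sel] = np.maximum(scores[sel], pair)             # combine overlapping windows (assumption: max)
    start_idx, end_idx = np.meshgrid(np.arange(doc_len), np.arange(doc_len), indexing="ij")
    invalid = (end_idx &lt; start_idx) | (end_idx - start_idx >= max_span_len)
    scores[invalid] = -np.inf                                   # e.g. short span length below 30 tokens
    s, e = np.unravel_index(scores.argmax(), scores.shape)
    return s, e, scores
</code></pre>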
<h3>Thresholding:</h3>
<p>Thresholding when using f1 is challenging. It's super important for your overall score but at the same time has high variance, and might not transfer to the test set. We used different schemes and in the end a 4-way thresholding worked best. We built thresholds for the long and short answer types as well as for the logits of the start + end tokens. We determined the thresholds by a simple 4d grid search, which was improved by <a href="/cpmpml">@cpmpml</a> using scipy.optimize.minimize. Instead of using the thresholds found by fitting on the dev set directly, we also experimented with using the corresponding quantiles. Our best submission uses that approach.</p>
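<p>To make the threshold fitting concrete, here is a self-contained toy sketch of refining a 4-way threshold with scipy.optimize.minimize; the synthetic scores/labels and the way thresholds are applied are stand-ins for the real dev-set scoring code, and in practice one would start Nelder-Mead from a coarse grid-search solution because the F1 objective is step-like.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 4))          # toy stand-in: 4 scores per example (type/span, long/short)
labels = (scores.sum(axis=1) + rng.normal(scale=0.5, size=1000)) > 0

def neg_f1(thresholds):
    pred = (scores > thresholds).all(axis=1)  # keep an answer only if every score clears its threshold
    tp = (pred & labels).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(labels.sum(), 1)
    return -(2 * precision * recall / max(precision + recall, 1e-9))

res = minimize(neg_f1, x0=np.zeros(4), method="Nelder-Mead")
print(res.x, -res.fun)
</code></pre>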
<h3>Ensembling:</h3>
<p>We elaborated different ensemble methods and chose 2 different ones for our final sub:</p>
<ol>
<li>Apply postprocessing and threshold to model prediction and majority vote between the results</li>
<li>blend model predictions and apply thresholding</li>
</ol>
<p>While 2. performed better on our val set, 1. performed better on the public and private LB.</p>
<h3>Wrapping things up and putting into kernel:</h3>
<p>We used several things to speed up the final kernel in order to fit the inference of 3 models in.
- use stride of 224 for test data
- convert model to fp16 for predictions
- use multiprocessing for preprocessing and postprocessing</p>
<p>Thanks for reading. </p>
<blockquote>
<p>But I'm a little surprised it took you that long for the new one.</p>
</blockquote>
<p>It took me so long to find out that the dev examples on NQ had multiple annotations and a precise description of how to label an example. Without the NQ dev set (say you take a hold-out of this competition's training data instead), you won't be able to reconcile the LB score. That fact was hidden from me. It could have been made more transparent by the organizers...</p>
<p>I would like to wait for publishing our inference kernel after the Google Quest competition.</p>
<p>you can simply do something like</p>
<p><code>
model = TFQARoberta()
model.half().cuda()
</code></p>
<p>no need for apex</p>
<p>It might as well be that it would have worked. I did not try many different strides, just 128, 192, and 224. I just put fp16 because it's a bit faster, then tried 192 and checked if it was under the 20 min commit time. It was at around 22 min, so I changed the stride to 224 and voila, it was below 20 min.</p>
<p>The stride doesn't skip anything as long as it's less than 512 - (question len). It's rather the overlap of the windows. As for the number 224 exactly, we just increased it from 192 and checked whether our models fit into the kernel time.</p>
<p><a href="https://www.kaggle.com/christofhenkel/inference-v3-224">https://www.kaggle.com/christofhenkel/inference-v3-224</a></p> |
TensorFlow 2.0 Question Answering | 45th solution, my journey and learnings, feeling grateful :) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I woke up at 4am today, as most days in the last weeks of this competition. The last time I checked yesterday I was 54th on the public LB, and I really counted on a medal and progressing to Kaggle Expert with this competition. I did my daily quick workout and finally opened up Kaggle to see that I moved a few places up and stayed in the silver zone. I smiled, relaxed, and decided to write up my learnings immediately. </p>
<p>My solution is very simple:
1. Started with the bert-joint kernel by prvi (thank you @prokaj)
2. Trained a new bert-joint model, starting with bert-large-uncased (I believe the whole-word-masking gave it a 1-2 point boost over the available bert-joint checkpoint), finetuning 1 epoch on SQuAD, then finetuning 1 epoch (lr=3e-5) on the NQ dataset. I tried several other settings, with lr between 1e-5 and 3e-5, 1-3 epochs, but the initial setting worked best.
3. I did extensive validation on the NQ dev set, compared the model outputs with ground truth (using a modified version of NQ browser), and used those insights to set the post-processing thresholds. </p>
<p>Things I wished to do - I spent quite some time trying to put another layer (bi-LSTM) over the output features of bert-joint, to learn the post-processing rules rather than setting them by hand. In the end, this turned out to take more time than I could afford between work and family, so I dropped the idea. I’m looking forward to seeing the winning solutions, to see if they implemented this and learn from them. </p>
<p>Before I share my learnings, some context. I’ve been working in IT for 15 years, doing various roles across project and product management, operations and consulting, but no coding / data science work. I initially got interested in ML 2 years ago with Andrew Ng courses (signed up to Kaggle for the first time then), but haven’t really done anything practical until a few months ago when I discovered fast.ai. I did the part 1 of fast.ai Deep Learning course and came to Kaggle to practice the skills. Thank you @jhoward for the learnings and the motivation!</p>
<p>My learnings:
1. It’s ok to be overwhelmed. I said on many nights to my wife that this thing is too difficult for me… then I woke up in the morning, reviewed each line of code to understand the inputs/outputs, analyzed the errors, and came up with a solution.
2. A little time every day is better than nothing. I have a full time job, wife and 2.5 year old daughter… I started doing Kaggle in the evenings, once my girls went to sleep, but after trying to get my daughter to sleep for 1-2 hours I had no energy left for coding… then I switched to going to sleep early and waking up early, and with 1-2 hours per day I felt like I can learn and make progress.
3. The ML/Kaggle community is amazing! My go-to places for learning are the fast.ai forums, Kaggle discussion and ML Twitter. It’s amazing how open this community is, how much learning and sharing is going on. Thank you!!!</p>
<p>With this, I’d like to express my gratitude to Kaggle and Google for organizing this competition and providing the TPU credits. Thank you to the Kaggle community (especially @prokaj, @kashitsky, @christofhenkel, @yihdarshieh) for sharing your code and insights, it’s amazing to be able to learn from so many talented people. And congratulations to the winners, medalists, and everyone that learned something during this competition!</p>
<p>Last thing - I’ve done only solo competitions so far, but I’m looking forward to find partners for future competitions. If you’d like to team up in the future, please connect with me at darek.kleczek@gmail.com :) </p> |
TensorFlow 2.0 Question Answering | 47th Place Solution Write-Up (Ensembling) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>2 epochs. Sure, I'll update the post for ensembling logic too. If you check out my kernel you'll see how I do ensembling. I used simple weighted average ensembling, here's the code.</p>
<pre><code>nq_logits = bert_nq(nq_inputs, training=False)
base_nq_logits = base_bert_nq(nq_inputs, training=False)

(start_pos_logits, end_pos_logits, answer_type_logits) = nq_logits
(base_start_pos_logits, base_end_pos_logits, baseanswer_type_logits) = base_nq_logits

# weighted average of the two models' logits (0.2 / 0.8)
start_pos_logits = (0.2 * start_pos_logits + 0.8 * base_start_pos_logits)
end_pos_logits = (0.2 * end_pos_logits + 0.8 * base_end_pos_logits)
answer_type_logits = (0.2 * answer_type_logits + 0.8 * baseanswer_type_logits)
</code></pre>
<p>Thanks <a href="/higepon">@higepon</a> . Same for me, it was great a contest and I enjoyed competing with you too :)</p>
<p><a href="/yihdarshieh">@yihdarshieh</a> my BERT-joint-large which got 0.6 is the one I trained on TPU. It got 0.58 without changes in postprocessing, but with the modifications mentioned above it scored 0.6. And as I said, I kept the thresholds constant throughout the experiments after deciding them from the dev set.</p>
<p>Your work was really great, if you had focused on model training and ensembling you would've got a really good score on private set. As I said my solution is mostly based on your kernel :)</p>
<p><a href="/rajraviprajapat">@rajraviprajapat</a> we weren't able to get it working. But as I read in other discussions RoBERTa performed better compared to bert large</p>
<p>Thank you Kaggle and the Kaggle community for this awesome competition. I learned a lot.
We tried a lot of new things with pytorch in the last week and weren't able to get everything working, but it has been really fun.</p>
<h2><strong>Our Solution</strong></h2>
<p>Our current solution is a TF 2.0 solution based on this great kernel <a href="https://www.kaggle.com/yihdarshieh/inference-use-hugging-face-models">https://www.kaggle.com/yihdarshieh/inference-use-hugging-face-models</a> by <a href="/yihdarshieh">@yihdarshieh</a>. Initially I started off with finetuning the official bert joint baseline but it didn't give much improvement. In the next few weeks I completed the TPU setup for the contest and had my pipeline ready for training and validation on the dev set.</p>
<p>All the experiments were done in Google colabs free tier TPUs. This is my first time seriously using TPUs and I have to say, it feels so good. Because they are so fast. Using TPUs dramatically reduced experimentation time in my case.</p>
<p>We trained the following single models; our final submission is a simple ensemble of the first two:</p>
<ol>
<li>BERT-joint-large public lb score 0.6</li>
<li>BERT-joint-base public lb score 0.58</li>
<li>DistillBERT-joint public lb score 0.54</li>
<li>Our final solution is an ensemble of 1 and 2 with some additions to postprocessing which scores 0.64 on public lb (also scored 0.64 on private LB but we didn't choose our best solution for final submission as it scored less on public LB with the new postprocessing)</li>
</ol>
<p>Here's our ensembling code, we use simple weighted average ensembling.</p>
<pre><code>nq_logits = bert_nq(nq_inputs, training=False)
base_nq_logits = base_bert_nq(nq_inputs, training=False)

(start_pos_logits, end_pos_logits, answer_type_logits) = nq_logits
(base_start_pos_logits, base_end_pos_logits, baseanswer_type_logits) = base_nq_logits

# weighted average of the two models' logits (0.2 / 0.8)
start_pos_logits = (0.2 * start_pos_logits + 0.8 * base_start_pos_logits)
end_pos_logits = (0.2 * end_pos_logits + 0.8 * base_end_pos_logits)
answer_type_logits = (0.2 * answer_type_logits + 0.8 * baseanswer_type_logits)
</code></pre>
<h2><strong>Postprocessing</strong></h2>
<p>After experimenting with multiple single models I started focusing on postprocessing to improve on model performance. Initially I used the postprocessing provided by this great kernel <a href="https://www.kaggle.com/prokaj/bert-joint-baseline-notebook">https://www.kaggle.com/prokaj/bert-joint-baseline-notebook</a> by @prvi which helped my single models score in the range 0.56-0.58. To improve further on this I had an in-depth look at the predictions of the model and the ground truths. Here I found that our model was predicting duplicate answer spans, so I added duplicate-removal logic to the postprocessing, which gave a score increase of 0.01 on the public LB and 0.02 on the dev set. I also observed a score improvement if my model doesn't predict any "YES/NO" answer, so essentially my model was only outputting answer spans and null answers in my final solution.</p>
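<p>A minimal sketch of that duplicate removal (illustrative, not the exact code used): keep only the highest-scoring copy of each identical (start, end) span before ranking the candidates.</p>
<pre><code>def dedupe_spans(candidates):
    """candidates: list of dicts with 'start', 'end', 'score' for one question."""
    best = {}
    for c in candidates:
        key = (c["start"], c["end"])
        if key not in best or c["score"] > best[key]["score"]:
            best[key] = c
    # unique spans, highest score first
    return sorted(best.values(), key=lambda c: c["score"], reverse=True)
</code></pre>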
<p>Deciding on thresholds was one of the important things for predicting valid answers, but I didn't play too much with answer thresholds. Initially I ran inference on the validation set with 5 different answer thresholds [1.5, 3.0, 4.5, 6.0, 7.5] and saw the best validation score with a combination of 1.5 for the long answer and 3.0 for the short answer, which I kept the same for the final solution.</p>
<h2><strong>New Addition to postprocessing</strong></h2>
<p>Postprocessing from @prvi's kernel only looks at the current <code>512</code> sequence to choose start and end indexes, but as we are using strides of length <code>128</code> there's an overlap between every 2 consecutive sequences. So I decided to append the top <code>k</code> start and end indexes from every 2 consecutive sequences to choose a pair of start and end indexes. In this case, for overlapping start and end indexes I got 4 possible score values, out of which I kept only the max score value and discarded the remaining 3. This postprocessing didn't give us an improvement on the public LB for our current best model, but it helped by an increment of 0.17 for a weak model.</p>
<p>Our final solution didn't use the new postprocessing, but our best solution on the private set scored 0.64 (a 0.01 increment) with the new postprocessing.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F887695%2Fae6bdb038215e5da57e9b68cc6de5242%2Fscore.png?generation=1579758032774744&alt=media" alt=""></p>
<h2>Last Week of Competition</h2>
<p>In the last week of the competition I got an opportunity to team up with <a href="/abhishek">@abhishek</a> and <a href="/rinnqd">@rinnqd</a>. After working on BERT-large and BERT-base we wanted to try out ALBERT and RoBERTa in the last few days, so we started working on pytorch for these two models. Our GPU training pipeline was completed by <a href="/abhishek">@abhishek</a> in just a few hours. We tried to port it to TPU in the next few days but weren't able to get it working in time. It was a great experience teaming up with these guys; I learned a lot about how to easily prototype a training pipeline, how to start off with a validation pipeline and how important it is, and how to write TPU code for pytorch. </p>
<p>Thanks guys for teaming up. And thanks Kaggle for such a great contest.</p>
<p>Lastly, here's our final solution kernel <a href="https://www.kaggle.com/axel81/inference-use-hugging-face-postprocess">https://www.kaggle.com/axel81/inference-use-hugging-face-postprocess</a>.</p>
<p>Happy Kaggling :)</p> |
TensorFlow 2.0 Question Answering | 4th place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's a bert large model.</p>
<p>Thanks to Kaggle and the hosts for this competition. It was my first time to participate a question answering competition. I'm happy I learned a lot by doing researches in this area.</p>
<p>Here is my solution:</p>
<h1>preprocessing</h1>
<ul>
<li>No preprocessing for Text.</li>
<li>Different Negative sampling rate. Tried 0.02, 0.04 and 0.06.</li>
</ul>
<h1>Data Aug</h1>
<ul>
<li>TTA. not work</li>
<li>Change the answer by replacing it with similar questions' answer. not work</li>
<li>Transform from other question answer datasets like squad and hotpotQA. not work</li>
</ul>
<h1>Models</h1>
<ul>
<li>Tried XLNet, Bert Large Uncased/Cased, SpanBert Cased, Bert Large WWM.</li>
<li>Same loss function and prediction as Bert-joint script.</li>
</ul>
<p>All cased models performed worse than their uncased versions. Maybe there is something wrong in my data preparation script. WWM BERT Large Uncased performed best in my experiments.</p>
<h1>Knowledge Distillation</h1>
<p>I believe knowledge distillation is the key part of my solution.
* Trained a combined Bert-large model by adding the bert-large weights and wwm-bert-large weights as 0.8 * wwm-bert-large + 0.2 * bert-large (a small weight-averaging sketch follows below). 1 step, 3e-5 lr.
* Froze the bert layers and fine-tuned only the classifier weights. 2 step, 1e-5 lr.
* Treated the first model as a teacher model and did knowledge distillation to get a student model.
* Fine-tuned the student model with only the classifier weights. 3 step, 1e-5 lr.</p>
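<p>A hedged sketch of the weight blending in the first step (the original pipeline used the TF 1.0 Estimator framework, so this PyTorch-style sketch with illustrative checkpoint paths is only to show the idea): two checkpoints with identical parameter shapes are combined linearly before fine-tuning.</p>
<pre><code>import torch

def blend_checkpoints(path_wwm, path_base, alpha=0.8):
    """Linearly combine two compatible checkpoints: alpha * wwm + (1 - alpha) * base."""
    wwm = torch.load(path_wwm, map_location="cpu")
    base = torch.load(path_base, map_location="cpu")
    return {name: alpha * wwm[name] + (1 - alpha) * base[name] for name in wwm}

# blended = blend_checkpoints("wwm_bert_large.pt", "bert_large.pt", alpha=0.8)
# model.load_state_dict(blended)
</code></pre>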
<h1>Validation</h1>
<p>The student model achieves 0.7117955439056357 on dev set. and 0.7 on both public lb and private lb.</p>
<h1>Other works</h1>
<p>I failed to implement Adversarial Training using Estimator framework in TF1.0. But it worth trying if you are using pytorch.</p>
<p>Yes, I tried different approaches like averaged bert output, concat, multiply. And weighted average gave me the highest val score.</p>
<p>Thank you for your explanation and I do agree with you.
For knowledge distillation, teacher's score is lower than student model.
For teacher model the local val score is 0.7021821120689655.
After KD, the student model's val score is 0.7137155780231474, and it got 0.69 in private lb.
After finetuning the classifier weights the val score decreased to 0.7117955439056357, but private lb increased to 0.70.</p>
<p>Thank you, you may want to read this paper to understand the idea. <a href="https://arxiv.org/abs/1503.02531">Distilling the Knowledge in a Neural Network</a>.</p> |
TensorFlow 2.0 Question Answering | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for spotting this. I forgot to share the notebook.</p>
<p>The final submission was a single BERT based model. It gave .71 on public data and 0.69 on private leaderboard. Looking at other solutions it was a little bit overcomplicated. </p>
<h2>Preprocessing</h2>
<p>I left out the special tokens introduced in the baseline script (<code>[ContextId=..][Paragraph=0]</code> etc). Instead I kept the simplified html tags (table tags, e.g., contained <code>colspan</code> info, which I removed). I also added <code>&lt;*&gt;</code>, <code>&lt;/*&gt;</code> at the beginning and the end of each segment. I kept 4 % of the negative examples, and also kept the very long answers that were not contained within one segment. I also processed the entire document text, so the <code>max_contexts</code> argument of the original script was ignored. </p>
<h2>Model output</h2>
<p>Similarly to the baseline I used the classification head, and one head for span start and end logits. With masking this was used to get both the long answer and short answer logits.
I also added a ''cross'' head, which is a bilinear function of the pairs of the sequence output of the BERT model. Short span logits are then obtained as the sum of the start and end logits and the corresponding output of the cross head.
Impossible spans were masked out and <code>softmax</code> gave the span probabilities. For the long span a cross entropy criterion was used for both the start and end logits. For the short spans the error was the negative log of the total probability of positive short spans. These error terms were computed only for examples having long/short answers. So the aim here is to learn the position given that there is an answer; the probability of having an answer came from the <code>answer_type</code> output.</p>
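<p>A rough PyTorch sketch of a bilinear ''cross'' head of this kind (the exact parameterization and how the logits are combined are my assumptions): each (start, end) pair gets a bilinear score from its two token representations, added to the usual start + end logits before masking and softmax.</p>
<pre><code>import torch
import torch.nn as nn

class CrossHead(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.bilinear = nn.Parameter(torch.empty(hidden_dim, hidden_dim))
        nn.init.xavier_uniform_(self.bilinear)

    def forward(self, seq_out, start_logits, end_logits):
        # seq_out: (batch, seq_len, hidden); pairwise bilinear scores: (batch, seq_len, seq_len)
        cross = torch.einsum("bid,de,bje->bij", seq_out, self.bilinear, seq_out)
        # span score for pair (i, j) = start_i + end_j + cross_ij
        return start_logits.unsqueeze(2) + end_logits.unsqueeze(1) + cross
</code></pre>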
<h2>Postprocessing</h2>
<p>For each segment the long and short spans with maximal probability were computed. From the answer type head the probabilities of having a short or long answer in the segment were computed, and these probabilities were assigned to the most likely spans within the segment. These votes were maximized over all segments containing the given span. Then the spans with the highest overall scores were considered for the answer. Thresholds were computed using the development data of the NQ dataset. </p>
<h2>Training</h2>
<p>I trained on tpu for 2 epochs using learning rate 2.5e-5 and batch size 64. Before training on nq data, I fine tuned the BERT model on squad 2.0 dataset with the same setting and preprocessing.</p>
<h2>Code</h2>
<p>The final submission was produced with <br>
<a href="https://www.kaggle.com/prokaj/fork-of-baseline-html-tokens-v5">https://www.kaggle.com/prokaj/fork-of-baseline-html-tokens-v5</a></p>
<p>Pre and post processing code <br>
<a href="https://www.kaggle.com/prokaj/bert-baseline-pre-and-post-process">https://www.kaggle.com/prokaj/bert-baseline-pre-and-post-process</a></p>
<p>final model in saved model format <br>
<a href="https://www.kaggle.com/prokaj/tpu-2020-01-22">https://www.kaggle.com/prokaj/tpu-2020-01-22</a></p>
<p>model code (used on tpu) <br>
<a href="https://www.kaggle.com/prokaj/tpu-code">https://www.kaggle.com/prokaj/tpu-code</a></p>
<p>BERT implementation from official tensorflow models (preinstalled on TPU)
<a href="https://github.com/tensorflow/models/tree/master/official">https://github.com/tensorflow/models/tree/master/official</a></p> |
TensorFlow 2.0 Question Answering | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sure. Our attempted implementation is at <a href="https://github.com/boliu61/tf2qa/blob/master/jb_train_tpu.py#L1054">https://github.com/boliu61/tf2qa/blob/master/jb_train_tpu.py#L1054</a></p>
<p><strong>Update</strong>: code is available at <a href="https://github.com/boliu61/tf2qa">https://github.com/boliu61/tf2qa</a></p>
<p>inference notebook and model weights: <a href="https://www.kaggle.com/boliu0/7th-place-submission">https://www.kaggle.com/boliu0/7th-place-submission</a></p>
<p>We implemented it, but it didn't improve the score.</p>
<h1>Framework and hardware</h1>
<p>Initially we set out to try both TF 1.15 and TF 2.0. Unfortunately TF 2.0 pipeline's scores are significantly lower (we probably didn't figure out how to correctly use the TF 2.0 API), so all our top submissions and what is described below were done in TF 1.15. </p>
<p>All the experiments were done in Google cloud TPUs. This is my first time seriously using TPUs and I have to say, it feels so good. Because they are so fast and Google is generous enough to give us 5 TPUs, the experiment cycle is dramatically reduced.</p>
<h1>Validation scheme and experiment setup</h1>
<p>We noticed that the evaluation metric is not very stable on smaller validation set, for example better models on dev00 (1600 examples) may not be better on dev01. So we rely solely on the whole dev set (7830 examples) for validation (i.e. selecting checkpoints, selecting models, tuning thresholds, tuning ensemble weights). Same thing goes for the public LB. It only has 346 examples, even less stable than dev00.</p>
<p>Most of our models are based on the official implementation of <a href="https://github.com/google-research/language/tree/master/language/question_answering/bert_joint">joint-bert</a>, which is surprisingly hard to beat. For each variation to joint-bert, we ran 5 training sessions on TPU simultaneously, with different batch size and learning rate. We save checkpoints every 500 or 1000 steps. Usually bs=64 and lr=4e-5 for 1 epoch gives the best scores.</p>
<h1>Variations to official joint-bert</h1>
<p>We tried the following variations to joint-bert. The ideas for many of them are drawn from this IBM paper: <a href="https://arxiv.org/pdf/1909.05286.pdf">https://arxiv.org/pdf/1909.05286.pdf</a>
1) and 2) are the most important. Adding them always helps, and they contribute to the best single model. 3) to 6) sometimes help, sometimes don't; they depend on each other. Nevertheless, training with these variations did produce many diverse models, which are good for ensembling. </p>
<h2>1) pre-trained weights</h2>
<p>Official joint-bert was trained from "BERT-Large, Uncased", but training from "BERT-Large, Uncased (Whole Word Masking)" will see a big boost.</p>
<p>We also tried fine-tuning joint-bert on Squad 2.0 before fine-tuning it on NQ.</p>
<h2>2) negative sampling</h2>
<p>Official joint-bert samples 2% negative examples for both answerable questions (i.e. the sliding windows that don't contain an answer) and unanswerable questions. As explained in the IBM paper, joint-bert tends to be overconfident for unanswerable questions, so 1% for answerable and 4% for unanswerable seems to be better. We saw about a 1 point increase in F1 from this negative sampling.</p>
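<p>A small sketch of this asymmetric negative sampling (the argument names are illustrative): negative windows from answerable questions are kept with probability 0.01, negative windows from unanswerable questions with probability 0.04, and positive windows are always kept.</p>
<pre><code>import random

def keep_window(has_answer_in_window, question_is_answerable,
                p_answerable=0.01, p_unanswerable=0.04):
    """Decide whether to keep a sliding window as a training example."""
    if has_answer_in_window:
        return True
    p = p_answerable if question_is_answerable else p_unanswerable
    return random.random() &lt; p
</code></pre>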
<h2>3) <code>max_seq_length</code>, <code>doc_stride</code></h2>
<p>The defaults are 512 and 128 respectively. This means that answers not at the beginning of the article appear about 4 times after pre-processing (in training the example is seen 4 times per epoch; in inference it's predicted 4 times, then the one with max logits is selected), which seems like overkill.</p>

<p>So we changed doc_stride to 256 during inference, which doesn't affect the score much but cuts inference time in half. </p>

<p>For training, we used the default 128 as well as 192 and 256. The IBM paper claims 192 gives the best results, but we didn't see much difference.</p>
<h2>4) max_contexts</h2>
<p>The default is 48. There are some very, very long Wikipedia articles, so joint-bert only takes the first 48 paragraphs/tables/lists of each article. We tried different values like 100 and 200. Using a bigger value is a tradeoff between more answer coverage vs. more "empty" windows.</p>
<h2>5) sentence order shuffling</h2>
<p>Also proposed in the IBM paper: shuffling all the sentences in the paragraph containing short answers. This is an augmentation method. </p>
<h2>6) cased</h2>
<p><code>do_lower_case=False</code>
For this, we generated a new vocab file by adding all the NQ special tokens into the cased BERT vocab.</p>
<h2>7) Attention-over-attention</h2>
<p>Mentioned in the IBM paper as the most important change, but it didn't work for us.</p>
<h1>Ensemble</h1>
<p>In 3 hours, we can do inference for 3 models with doc_stride=256. Luckily, n=3 happened to be the size of our best ensemble: adding a fourth model does not help anymore. The ensemble strategy is simply averaging the probability of each candidate span.</p>
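<p>A minimal sketch of that averaging (the data layout is an assumption): collect each model's probability for every candidate span and average them, then pick the best span per example.</p>
<pre><code>from collections import defaultdict

def average_span_probs(model_outputs):
    """model_outputs: one dict per model, {(example_id, start, end): probability}."""
    totals = defaultdict(float)
    for output in model_outputs:
        for span, prob in output.items():
            totals[span] += prob          # spans missing from a model implicitly contribute 0
    n = len(model_outputs)
    return {span: total / n for span, total in totals.items()}
</code></pre>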
<p>Our 2 submissions consist of the following 5 single models:
a. wwm, stride=256, dev 62.4
b. wwm, neg sampling, pre-tuned on squad, <strong>dev 64.7</strong> (long 69.5, short 57.8) - best single model
c. wwm, neg sampling, max_contexts=200, dev 64.5
d. wwm, neg sampling, stride=192, dev 63.8
e. wwm, neg sampling, cased, dev 63.3</p>
<p>sub1: ensemble of a,b,c, dev 66.8 (long 71.6, short 59.8), private LB 0.69
sub2: ensemble of c,d,e, <strong>dev 67.0</strong> (long 71.6, short 59.9), private LB 0.69
(note: these scores are after post-processing)</p>
<h1>Post-process</h1>
<h3>yes/no thresholds</h3>
<p>These are tuned on dev set as well. If the yes/no logits in the <code>answer_type_logit</code> are over the thresholds, predict "YES"/"NO" regardless of the short span predictions. This gives 0.5 boost to dev F1.</p>
<h3>max_contexts</h3>
<p>Increase max_contexts from the default 48 to 100 or 80 can squeeze out another 0.3 F1 points, taking advantage of the leftover inference time within the 3 hour limit. For sub1 we did 100; for sub2 we only did 80 because generating features for the cased model <code>e</code> took a little more time.</p>
<h1>Code</h1>
<p>repo: <a href="https://github.com/boliu61/tf2qa">https://github.com/boliu61/tf2qa</a>
inference notebook and model weights: <a href="https://www.kaggle.com/boliu0/7th-place-submission">https://www.kaggle.com/boliu0/7th-place-submission</a></p>
<p>Thanks and congrats to you too. Looking forward to your solution. </p>
<p>We may have discovered that paper (which claimed SOTA) too early and spent too much time on it 😂</p> |
TensorFlow 2.0 Question Answering | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank the organizers for this interesting competition with a not-so-standard dataset. Question answering is one of the most fun tasks in NLP and it is great to finally see it on Kaggle. Also, thanks for providing GCP credits and TPU quota to the participants. I'm sure it greatly widened the range of models and ideas that were tested in this competition.</p>
<p>My work started as an attempt to reimplement BERT-joint in PyTorch using RoBERTa as the backbone. However, I added quite a few tricks and tweaks along the way and ended up with a model and pipeline significantly different from the original BERT-joint. Here I'm going to describe the main changes.</p>
<h2>Preprocessing</h2>
<p>Instead of sliding a window over the entire Wikipedia article, I processed each top level long answer candidate separately. For each candidate, I either converted it into one training example if its length permitted it or split it into several training examples if the candidate was too long. I also added some of the surrounding context to those candidates that were particularly short.</p>
<p>The above preprocessing resulted in approximately 152k positive and more than 12 million negative (not containing any answer) examples, so I decreased the number of negative examples to 160k by random sampling. I used a kind of hard negative mining strategy by sampling more of those negative examples that have high TF-IDF similarity between the question and the candidate. I also sampled several non-overlapping subsets of negative examples to use for different epochs of training thus increasing the diversity of my training data.</p>
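<p>A sketch of TF-IDF-weighted negative sampling in this spirit (the sklearn usage and the proportional-sampling scheme are my assumptions): score each negative (question, candidate) pair by cosine similarity and sample negatives with probability proportional to that similarity.</p>
<pre><code>import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def sample_hard_negatives(questions, candidates, n_keep, seed=0):
    """questions/candidates: parallel lists of strings for the negative examples."""
    vec = TfidfVectorizer().fit(questions + candidates)
    q, c = vec.transform(questions), vec.transform(candidates)
    # rows are l2-normalized, so the row-wise dot product is the cosine similarity
    sims = np.asarray(q.multiply(c).sum(axis=1)).ravel()
    probs = (sims + 1e-6) / (sims + 1e-6).sum()
    rng = np.random.default_rng(seed)
    return rng.choice(len(candidates), size=n_keep, replace=False, p=probs)
</code></pre>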
<h2>Model</h2>
<p>My model is just RoBERTa-large (I use the implementation from Transformers library) with a new output layer on top of it. In addition to a token-level span predictor for short answers, I use a binary classifier to determine whether a candidate is a long answer or not. The combination of an answerability classifier and a span predictor is a standard approach for SQUAD2.0 (XLNet, RoBERTa, ALBERT all use it). NQ dataset differs from SQuAD2.0 in that a question can be considered answerable even when the correct short answer span is empty (this happens when a question has a long answer, but no short answer).</p>
<p>For span predictor, I use a trick from XLNet: instead of predicting start and end tokens independently, I first predict the start token, then concatenate its representation from the final encoder layer to representations of all the tokens and pass these concatenated representations as input to the end token predictor. This means that the prediction of the end token is conditioned on the start token, which significantly improves the quality of span prediction. </p>
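<p>A rough PyTorch sketch of that conditioned end prediction (layer sizes are assumptions, and at training time the gold start index would typically be used instead of the argmax): predict the start, gather its final-layer representation, concatenate it to every token representation, and predict the end from the result.</p>
<pre><code>import torch
import torch.nn as nn

class ConditionedSpanHead(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.start_head = nn.Linear(hidden_dim, 1)
        self.end_head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, seq_out):
        # seq_out: (batch, seq_len, hidden)
        start_logits = self.start_head(seq_out).squeeze(-1)
        start_idx = start_logits.argmax(dim=1)
        start_repr = seq_out[torch.arange(seq_out.size(0)), start_idx]   # (batch, hidden)
        expanded = start_repr.unsqueeze(1).expand_as(seq_out)
        end_logits = self.end_head(torch.cat([seq_out, expanded], dim=-1)).squeeze(-1)
        return start_logits, end_logits
</code></pre>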
<p>I did not find a way to include YES/NO answers in my predictions without a decrease in the total score so I chose not to predict such answers.</p>
<p>During inference, I first find the long answer candidate that has the highest answerability score. If this score is above a certain threshold, I choose this candidate as my long answer prediction and predict a short span for this candidate. If this span's score is also above a certain threshold, I choose it as my short answer prediction. I used the official NQ dev set to find the best thresholds for both long and short answers.</p>
<h2>Training hyperparameters</h2>
<p>I used AdamW optimizer with weight decay of 0.01 and a linearly decaying learning rate with warmup for all experiments. I had neither time nor computational resources to try a wide range of hyperparameters so the ones I've chosen can be far from optimal. I got the best results on the dev set with a model trained for 5 epochs with a batch size of 48 and a maximum learning rate of 2e-5. I used this model for one of my final submissions. I also had two other models with good results that I later used for ensembling: one was trained for 3 epochs with a batch size of 24 and a maximum learning rate of 3e-5 and the other was trained for 2 epochs with a batch size of 15 and a maximum learning rate of 3e-5.</p>
<p>Training RoBERTa-large for 1 epoch (312k training examples) takes approximately 4 hours on a single V100 GPU using mixed precision.</p>
<h2>Ensembling</h2>
<p>For my second final submission, I ensembled three models by simply summing their output layer logits. This approach led to a significant improvement on the dev set, but it could not fit in the submission time limit. In order to fix it, I decided to limit the number of long answer candidates per question by taking only the first N candidates (most answers are found in the first few paragraphs anyway). However, when my final models and ensembling code were ready, I only had five hours before the deadline and two submissions left, so I did not have a chance to select the maximum value of N that would allow my submission to fit within the time limit. I ended up choosing too small a value for N, which probably harmed the performance of my ensemble. In hindsight, it seems a better approach could be to score all candidates with just one model and then use the other two models only for several candidates that got the highest answerability scores from the first model.</p>
<p>In the end, all three of my main models, as well as the ensemble, got a score of 0.68 on the private test set. Well, at least I got stable results.</p>
<h2>Some ideas that did not quite work</h2>
<ul>
<li><p>While SQuAD2.0 pretraining seemed beneficial in my early experiments, it harmed the performance of my final models so I ended up not using it. I suspect that while changing the output layer architecture I might have introduced some bugs in the SQuAD pretraining code. It explains why many other participants, as well as several papers about the NQ dataset, report improvements from SQuAD pretraining.</p></li>
<li><p>I tried adding one more binary classifier to determine whether a candidate contains a short answer or not, but it did not lead to an improvement on the dev set. Now I can see that my early submission with this additional classifier got a slightly higher score on the private test set than a similar submission without it, so it might have been a useful idea after all. </p></li>
</ul> |
TensorFlow 2.0 Question Answering | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First I would like to thank Kaggle Team and TensorFlow for wonderful competition, TFRC program for TPU credits, Google Cloud for 300$ credits and <a href="/prokaj">@prokaj</a> for sharing his solution.
It was great experience to work on big real-world high quality dataset, use TensorFlow 2.0 and TPUs first time, run inference in couple of seconds, train with batch_size=128 and finally win Gold Medal.</p>
<p>My solution is a single model in TF 2.1 trained on TPU. It is Bert Joint with some tweaks and postprocessing. Here are the main differences from Bert Joint:
1) Pretrained model: Whole-Word-Masking Bert Large
2) Tfrecords generated with include_unknowns=0.2 (10 times more examples without an answer than in the original paper).
3) Trained 1 epoch with batch size 128, lr=5e-5 (4-5 hours on TPU).
4) Use answer type logits:
- If answer_type=1 => yes_no_answer=’NO’
- If answer_type=2 => yes_no_answer=’YES’
- If answer_type=4 => no short answers</p>
<p>5) Get some answers with top_level=False</p>
<p>I did EDA and noticed that if 2 long answer candidates contain the short answer, one candidate is top_level, and the other candidate is not top_level and starts with a "Li" HTML token, then there is about a 70% chance that the correct candidate is the non-top_level one.
So I implemented this idea as postprocessing.</p>
<p>6) Linear regression over 9 logits as an answer verifier.
The 9 logits included the 5 answer type logits, cls_start_logit, cls_end_logit, start_span_logit, and end_span_logit.</p>
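<p>A small sketch of such a verifier (the feature assembly and the stand-in data are illustrative): fit a linear regression on the 9 logits of dev-set predictions against whether the predicted answer was correct, and use the fitted score at inference time.</p>
<pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# X: (n_dev_predictions, 9) = 5 answer-type logits + cls_start + cls_end + span_start + span_end
X = rng.normal(size=(500, 9))                     # stand-in for real dev-set logits
y = (X @ rng.normal(size=9) > 0).astype(float)    # stand-in for answer-correctness labels

verifier = LinearRegression().fit(X, y)
scores = verifier.predict(X)                      # higher score -> more confident the answer is real
</code></pre>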
<p>P.S.
In my local metric I had long_non_null_threshold = 1, short_non_null_threshold = 1, but for some reason it didn't have a big influence on the leaderboard score (compared to long_non_null_threshold = 2, short_non_null_threshold = 2).</p>
<p>Inference kernel: <a href="https://www.kaggle.com/user189546/tfqa-bert-train-tf2">https://www.kaggle.com/user189546/tfqa-bert-train-tf2</a>
Model weights: <a href="https://www.kaggle.com/user189546/unk0201128w">https://www.kaggle.com/user189546/unk0201128w</a>
Train code: <a href="https://www.kaggle.com/user189546/tfqa-train-code">https://www.kaggle.com/user189546/tfqa-train-code</a>
Tfrecords: <a href="https://www.kaggle.com/user189546/train-tfrecords">https://www.kaggle.com/user189546/train-tfrecords</a></p>
<p>P.S. I reused code from these sources:
1. <a href="https://www.kaggle.com/prokaj/bert-joint-baseline-notebook">https://www.kaggle.com/prokaj/bert-joint-baseline-notebook</a>
2. <a href="https://github.com/google-research/language/tree/master/language/question_answering/bert_joint">https://github.com/google-research/language/tree/master/language/question_answering/bert_joint</a>
3. <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/keras_flowers_gputputpupod_tf2.1.ipynb">https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/keras_flowers_gputputpupod_tf2.1.ipynb</a></p>
<p>Thanks everybody!</p> |
TensorFlow 2.0 Question Answering | Brief summary of 13th place solution (hide the pain Harold) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I just averaged the weights. All the models were trained with Adam.</p>
<p>At first, congrats to every team that scored higher than we did - since we finished on 13th place, that means all of you finished in the gold zone.</p>
<p>Our best solution consists of 4 bert-based models: 1 distilbert, 2 albert-large, and 1 bert-large WWM. </p>
<p><a href="/kashnitsky">@kashnitsky</a> has trained our best single model that scored around 0.64 on the local dev set (we used the official NQ dev set for validation). He used the original bert-joint repo and added hacks described in the <a href="https://arxiv.org/abs/1909.05286">paper</a>. He also spent a lot of time trying to make ALBERT models work, but none of them (even the xxlarge version) was better than bert-large. </p>
<p><a href="/yaroshevskiy">@yaroshevskiy</a> implemented his own version of bert-joint in pytorch, including all the pre and post processing stuff. His best model is based on ALBERT large pretrained on squad 2.0. Oleg also came up with a trick that one might call "window smoothing". The trick addressed bad predictions of start/end probabilities on the window edges. The idea is that for those start/end logits that are close to the edge of the window we use a linear combination of the current window logits and logits from the neighboring window. This improved the score by around 0.01-0.02.</p>
<p>I implemented my own pytorch model that is different from bert-joint in two aspects:
- Instead of working on arbitrary chunked texts, I work on top of long answer candidates
- start/end logits are predicted jointly by an attention-like layer, and the unrealistic start/end positions (like padding or question tokens) are filled with -inf</p>
<p>Implementation for the start/end module is the following:</p>
<pre><code>import torch
from torch import nn

class StartEndModule(nn.Module):

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.start = nn.Conv1d(input_dim, hidden_dim, kernel_size=1)
        self.end = nn.Conv1d(input_dim, hidden_dim, kernel_size=1)

    def forward(self, hidden, text_mask):
        # hidden: (batch, input_dim, seq_len); text_mask: (batch, seq_len)
        start = self.start(hidden).unsqueeze(3)
        end = self.end(hidden).unsqueeze(2)
        logits = (start * end).sum(dim=1)                # pairwise scores: (batch, seq_len, seq_len)
        triu_mask = torch.triu(logits, diagonal=1) == 0  # masks pairs with end &lt;= start
        text_mask = ((text_mask.unsqueeze(2) * text_mask.unsqueeze(1)) &lt; 0.5)
        mask = text_mask | triu_mask                     # also mask padding / question positions
        mask[:, 0, 0] = False                            # always leave the (0, 0) pair unmasked
        logits.masked_fill_(mask, float("-inf"))
        return logits, mask.float()
</code></pre>
<p>Thus, logits is a square matrix where each entry is a score for a particular start/end pair. In order to find the best-scoring span, one just needs to compute an argmax over those scores.</p>
<p>I also applied <a href="https://arxiv.org/abs/1803.05407">SWA</a> to both of my models and got a nice boost in score (around 0.015), while Oleg and Yury reported none to minor improvements from SWA.</p>
<p>In order to speed up inference, we used the distilbert model for candidate prescoring. The idea is that for all models other than distilbert we ignore those windows/candidates that received low scores from the distilbert model.</p>
<p>In order to blend our models together, we used a lightgbm boosting tree. For each candidate, we collect the corresponding scores from all the models as well as some meta-features (such as answer length or relative position of this candidate in the document) and the target is to predict if this candidate contains an answer. </p>
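<p>A sketch of such a LightGBM blender (the feature set and the stand-in data are illustrative): each row is one candidate with per-model scores plus meta-features, and the label is whether the candidate contains the answer.</p>
<pre><code>import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
# stand-in features: 4 per-model scores + answer length + relative position in the document
X = rng.normal(size=(2000, 6))
y = (X[:, :4].mean(axis=1) + 0.1 * rng.normal(size=2000) > 0.5).astype(int)

blender = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05, num_leaves=31)
blender.fit(X, y)
candidate_scores = blender.predict_proba(X)[:, 1]   # pick the highest-scoring candidate per question
</code></pre>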
<p>Our best blend achieved around 0.67 on the local dev set and 0.68 on the LB.</p> |
TensorFlow 2.0 Question Answering | Rank 93 Solution: Or how you can still win a medal without any model fine-tuning | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot for your encouragement <a href="/mashlyn">@mashlyn</a> </p>
<p>We didn't have the time, skill or resources to fine-tune our model, but that did not prevent us from securing a top 8% rank - it took a lot of perseverance with 90-odd submissions.</p>
<p>The link to the kernel is <a href="https://www.kaggle.com/rohitagarwal/rank-93-solution-fork-of-bert-joint-14-changes-37">here</a>.</p>
<p>Thanks!</p> |
TensorFlow 2.0 Question Answering | rank68_the_ simplest idea to get the first medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>The whole process of solving this problem in my way:</strong></p>
<p><em><strong>source</strong></em>: fork from bert joint baseline notebook</p>

<p><em><strong>problems</strong></em> (before starting this competition):</p>
<p>1. the first time dealing with QA</p>

<p>2. just a month to go</p>

<p>3. need to understand the source code in the TF 2.0 version (only familiar with PyTorch)</p>

<p>4. GPU (just a single 1080 Ti)</p>
<p><strong>Based on current problems, I came up with some solutions:</strong></p>
<p>1. read the TF 2.0 source code (necessary) #done</p>

<p>2. fine-tune the bert-joint model (<strong>the simplest solution to get started and really understand the whole process</strong>) #done</p>

<p>3. try to convert the TF 2.0 code to a PyTorch version (but no better result until now) #done</p>

<p>4. change the pipeline (some effort)</p>
<p>In this competition, I think the first and second points are the most important for first-time participants, and my main changes are as follows:</p>
<p>1.FLAGS=DummyObject(skip_nested_contexts=True,
max_position=50,
<strong>max_contexts=130</strong>,
max_query_length=64,
max_seq_length=512,
doc_stride=128,
include_unknowns=0.02,
n_best_size=50,
max_answer_length=60)</p>
<p>2.entry["short_answer_score"] < 7.5:</p>
<p>3.entry["long_answer_score"] < 1.5:</p>
<p><strong>I hope to be of some help to those who have just started a new task with limited time and resources</strong></p>
<p>kernel:<a href="https://www.kaggle.com/vanle73/rank68-the-simplest-idea-to-get-medal?scriptVersionId=26631305">rank68_the_ simplest idea to get medal</a></p> |
TensorFlow Speech Recognition Challenge | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, an honor to compete with you all! I really learned a lot from doing this competition and from everyone here, and congrats to Heng CherKeng, Ryan Sun, and See!</p>
<p>I thought I'd share my methodology, with a focus on parts I haven't seen discussed here yet.</p>
<p>I was able to get a 90.9% private LB on a single model, but wasn't able to fully benefit from ensembling (I'm not very experienced in that, but learned a lot in a short time from the forums here). My ensembling technique was to vary a few parameters from the main model and average the square roots of the models' probabilities.</p>
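<p>For reference, a tiny numpy sketch of that square-root averaging (the stacked probability arrays are stand-ins):</p>
<pre><code>import numpy as np

# hypothetical per-model probability arrays, shape (n_clips, n_classes)
model_probs = [np.random.dirichlet(np.ones(12), size=100) for _ in range(3)]

ensembled = np.mean(np.sqrt(np.stack(model_probs)), axis=0)   # average the square roots of the probabilities
pred_labels = ensembled.argmax(axis=1)
</code></pre>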
<p>Model Architecture:</p>
<p>I used 120 log-mel filterbanks for my best model. Given this, I thought it was important to create a model that treated time and frequency differently. Specifically, I didn't do any downsampling in the time domain until the very end. With time as the first dimension and frequency as the second, my model architecture was as follows (a code sketch of the full stack follows the step-by-step notes):</p>
<p>1) Conv2d(64, [7,3] ) </p>
<p>I thought of this as a "denoising" and basic feature extraction step</p>
<p>2) MaxPool( [1,3] )</p>
<p>Getting back down to the standard 40 frequency features</p>
<p>3) Conv2d(128, [1,7] ) </p>
<p>Look for local patterns across frequency bands</p>
<p>4) MaxPool( [1,4] ) </p>
<p>Allow for speaker variation, similar to what worked here: <a href="https://link.springer.com/content/pdf/10.1186%2Fs13636-015-0068-3.pdf">https://link.springer.com/content/pdf/10.1186%2Fs13636-015-0068-3.pdf</a></p>
<p>5) Conv2d(256 [1,10], padding="VALID") </p>
<p>This allows it to treat each remaining freq band very differently, and compress the frequency dimension entirely. I think of this as detecting phoneme-level features</p>
<p>6) Conv2d(512,[7,1])</p>
<p>I think of this as looking for connected components of a short keyword at different points in time</p>
<p>7) GlobalMaxPool in time</p>
<p>Collect all the components</p>
<p>8) Dropout + Fully Connected 256</p>
<p>Because why not, and seemed to work well</p>
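<p>Putting the eight steps together, here is a hedged tf.keras sketch of the stack described above (the activations, the padding of the unspecified layers, the dropout rate, the input length, and the final classifier are my assumptions); the input is a (time, 120 mel bins, 1) log-mel spectrogram.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

def build_model(time_steps=98, n_mels=120, n_classes=12):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(time_steps, n_mels, 1)),
        layers.Conv2D(64, (7, 3), padding="same", activation="relu"),    # 1) denoising / basic features
        layers.MaxPool2D((1, 3)),                                        # 2) 120 -> 40 frequency features
        layers.Conv2D(128, (1, 7), padding="same", activation="relu"),   # 3) local patterns across bands
        layers.MaxPool2D((1, 4)),                                        # 4) allow for speaker variation
        layers.Conv2D(256, (1, 10), padding="valid", activation="relu"), # 5) collapse the frequency axis
        layers.Conv2D(512, (7, 1), padding="same", activation="relu"),   # 6) keyword components in time
        layers.GlobalMaxPooling2D(),                                     # 7) global max pool over time
        layers.Dropout(0.5),                                             # 8) dropout + fully connected 256
        layers.Dense(256, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
</code></pre>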
<p>Data Augmentation/Standardization:</p>
<p>In addition to time stretching (which gave a boost of +1% LB), there were two techniques I applied that I haven't seen mentioned yet here that I think really helped.</p>
<p>1) Standardize Peak (Windowed) Volume</p>
<p>Basically, I took every clip, split it into 20 to 50 chunks, and then standardized the volume of the clips so that every clip had the same max chunk volume. Why this approach? Well, standardizing by average volume would be fine, but since some keywords were longer than others, very short keywords would have a low average volume even though the part of the WAV containing the actual word could be quite loud.</p>
<p>When I implemented this, my LB score went from 82% to 84%</p>
<p>My theory is that by doing this, the convnet didn't have to deal with as many issues in terms of different scales for the same feature, since the volumes of the WAVs spanned orders of magnitude. Obviously using Log Mels helped with this too.</p>
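<p>A numpy sketch of the peak-chunk volume standardization (the chunk count, the use of RMS as the "volume", and the target level are my assumptions):</p>
<pre><code>import numpy as np

def standardize_peak_volume(wav, n_chunks=30, target_peak=0.5, eps=1e-8):
    """Scale a clip so its loudest chunk has a fixed RMS level."""
    wav = wav.astype(np.float32)
    chunks = np.array_split(wav, n_chunks)
    peak = max(np.sqrt(np.mean(c ** 2)) + eps for c in chunks)
    return wav * (target_peak / peak)
</code></pre>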
<p>2) Vocal Tract Length Perturbation</p>
<p>I used VTLP as described in this paper <a href="https://pdfs.semanticscholar.org/3de0/616eb3cd4554fdf9fd65c9c82f2605a17413.pdf">https://pdfs.semanticscholar.org/3de0/616eb3cd4554fdf9fd65c9c82f2605a17413.pdf</a></p>
<p>This perturbation could be applied when creating the weight matrix to convert a spectrogram into log mels, so it was a very fast augmentation method</p>
<p>This increased my LB score by +1%, and I saw the greatest benefit using the same VTLP factor within a batch, along the lines of the reasoning described here:</p>
<p><a href="https://arxiv.org/abs/1707.00722">https://arxiv.org/abs/1707.00722</a></p> |
TensorFlow Speech Recognition Challenge | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>For sure but somehow it seems there is a limit for uploading files. So I could only attach one with predicted probabilities. The one attached has slightly better score than my selected one. (You can find the file in the main post)</p>
<p>Also congrats!</p>
<p>Sure. I just look at the correlation of the submission probabilities for each class, use the difference of the models/inputs as well as the model scores on the public LB to decide a weight, and then submit. If it is better than any of the single model scores, I just use that. I might try 2-3 times and then settle on the weights.</p>
<p>sorry if I was not clear. So say, you have a model that scores 0.89 on LB.</p>
<p>You take the submission of that model, and use it as the labels for the test data. Now this test data with 0.89-accurate labels becomes the new training data for the pretrained model. You train your pretrained model with only test data.</p>
<p>You then use the weights from the pretrained model to initialize an identical model, and now, instead of training on test data, you train it on the training data, and then do prediction on the test data with this fine-tuned model.</p>
<p>Hope it is clearer now.</p>
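<p>A minimal Keras-style sketch of that pretrain-then-finetune procedure; the model builder, data names and epoch counts are placeholders, not the actual code:</p>
<pre><code>def pretrain_then_finetune(build_model, x_test, pseudo_labels, x_train, y_train):
    # 1) "pretrain" only on test data, using an earlier ~0.89 submission as the labels
    pretrained = build_model()                      # any Keras-style model with the same architecture
    pretrained.fit(x_test, pseudo_labels, epochs=10, batch_size=128)
    pretrained.save_weights('pretrained.h5')

    # 2) initialize an identical model with those weights and fine-tune on the real train set;
    #    one epoch was reportedly enough to "correct" the overfitted pretrained model
    finetuned = build_model()
    finetuned.load_weights('pretrained.h5')
    finetuned.fit(x_train, y_train, epochs=1, batch_size=128)
    return finetuned                                # predict the test set with this model
</code></pre>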
<p>I got some time today to clean up my code and write the solution doc. </p>
<p>Pushed them here <a href="https://github.com/xiaozhouwang/tensorflow_speech_recognition_solution">https://github.com/xiaozhouwang/tensorflow_speech_recognition_solution</a></p>
<p>My best single model was only 0.87 on the public LB and an ensemble of them could only barely enter 0.89. The boost came from two semi-supervised learning techniques. You can refer to the code and doc, but I have also conveniently posted the key part below:</p>
<p><strong>Models</strong></p>
<p>10 main models were trained</p>
<p>Resnet with 9 layers on mfcc and mel (input reshaped to 128 * 128)</p>
<p>Senet18 on mfcc and mel (input reshaped to 128 * 128)</p>
<p>Densenet121 on mfcc and mel (input reshaped to 128 * 128)</p>
<p>VGG (with GlobalMaxPooling+GlobalAveragePooling in the end) on mfcc and mel (input reshaped to 128 * 128)</p>
<p>VGG (with GlobalMaxPooling+GlobalAveragePooling in the end) on raw input</p>
<p>VGG (with fully connected layers in the end) on mel where it only convs along the time axis</p>
<p><strong>Semi Supervised Learning</strong></p>
<p>Weighted average of previously trained 10 models was used as the “ground truth” labels for test data. Two semi supervised learning techniques were used:</p>
<p>100% of training data + 20%-35% of test data selected per epoch were used as the new training data. Resnet (mfcc and mel), Senet (mfcc and mel) and VGG (mel and raw) were trained this way.</p>
<p>100% of test data were used to train a “pretrained” model. And then the weights of the “pretrained” model were loaded as initial weights to train the same model with only training data (fine tuning with training data). It turned out that fine tuning with only one epoch was enough to “correct” the overfitted “pretrained” model.</p>
<p>Both techniques were used with the hope that they could learn/approximate the test distribution better (instead of “overfitting” training data distribution). First one does it on the data level and second one does it on the model level. </p>
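<p>A rough numpy sketch of the data-level technique (the selection fraction and variable names are illustrative):</p>
<pre><code>import numpy as np

def epoch_data(x_train, y_train, x_test, pseudo_labels, frac=0.3):
    """One epoch's training set: all real train data plus a random 20%-35% slice of
    pseudo-labelled test data, resampled every epoch (frac=0.3 is just an example)."""
    n = int(frac * len(x_test))
    idx = np.random.choice(len(x_test), size=n, replace=False)
    x = np.concatenate([x_train, x_test[idx]], axis=0)
    y = np.concatenate([y_train, pseudo_labels[idx]], axis=0)
    perm = np.random.permutation(len(x))
    return x[perm], y[perm]
</code></pre>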
<p>Thank you Kaggle and Google Brain team for this great competition and congrats to all top teams! This surely wasn't an easy competition and I had quite a lot of fun doing it!</p>
<p>Update:
uploaded my best private sub with pred scores (slightly better than my selected one, 0.91084). </p> |
TensorFlow Speech Recognition Challenge | An LSTM CTC solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you:-) </p>
<p>Most models in this competition are CNN based and learn to categorise words. </p>
<p>This model is a Convolution Residual, backward LSTM network using Connectionist Temporal Classification (CTC) cost, written in TensorFlow. <a href="https://github.com/huschen/kaggle_speech_recognition">Source code</a>. It is more of a characters(phonemes) learning model.</p>
<h2>Performance (Private Score):</h2>
<p>Single model, single training run: 0.89357</p>
<p>Average over 5 folds (runs): 0.89874</p>
<p>No test data is used in training.</p>
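<p>The original model is written in TensorFlow; purely as an illustration of the objective, here is a tiny PyTorch sketch of the CTC loss such a network optimises. The shapes and the size of the character inventory are assumptions.</p>
<pre><code>import torch
import torch.nn as nn

# toy shapes: T time steps out of the conv/LSTM stack, batch N, C = 27 characters + 1 CTC blank
T, N, C = 98, 4, 28
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)  # network outputs, (T, N, C)
targets = torch.randint(1, C, (N, 6), dtype=torch.long)                  # character indices, 0 = blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 6, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)                       # marginalises over all alignments of characters to frames
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                 # trained like any other loss
</code></pre>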
<h2>Comparison with Word-Learning Models:</h2>
<p>In this particular competition and data set, character learning model has its limitations:</p>
<ul>
<li><p>High false negative rate (i.e. the tendency to mispredict keywords as unknown words), simply because it has to get all the characters right to predict the correct word. An ambiguous 'off' sound might be recognised as 'oP' or 'Uf'. A 'seven' with an unheard accent might be predicted as 'sIven'. Averaging over different fold runs sometimes helps.</p></li>
<li><p>More sensitive to incomplete word recordings (both in training and testing). Learning (maybe memorising) 'LE', 'LEF', 'EFT', etc. as 'LEFT' is harder for c-models than w-models, because c-models have to 'learn' characters that are not there in the sound. It is quite a random guess in recognizing 'O' as 'go', 'off' or 'on' in the competition.</p></li>
<li><p>Need prior knowledge of pronunciations and preprocessing, as there is no one-to-one mapping between characters and phonemes. But it is possible the models can learn the mapping (and memorizing exceptions) with larger training data.</p></li>
<li><p>In the case of this particular Kaggle competition, the word-character (30-28) ratio is too small for c-models being efficient.</p></li>
</ul>
<p>But some advantages as well, possibly more suitable in practical uses:</p>
<ul>
<li><p>Generalisation. The c-model is able to learn unseen words, e.g. recognizing 'night' from learning 'NIne' and 'rIGHT', recognizing 'follow' as 'foow' (missing 'l' sound) from learning 'Four' 'dOWn' and 'nO'.</p></li>
<li><p>Easier to scale up to larger vocabularies by increasing the number of LSTM hidden units, at the cost of increased training time when learning longer words. A bi-directional LSTM will be needed to capture more complex patterns.</p></li>
<li><p>In practice, the property of a high false negative (and low false positive) rate is likely a desired feature for key command detecting applications. It is possible that the models will be customized to end users (while balancing this against staying general) so that accents won't be a problem.</p></li>
</ul>
<p>Let me know what you think:-)</p>
<p>Thank you, but the model is unlikely to achieve your impressive score 0.91:-)
Any insights into the dataset and my comparison of word learning and character learning models? </p> |
TensorFlow Speech Recognition Challenge | LB Score 0.90637 Approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I had no background in speech recognition at all, but thanks to many generous kernels/discussions I learned a lot during this competition. Special thanks to @ttagu99, @Heng, @vinvinvin</p>
<p>At first I wasn't interested in this competition, but 1D conv approach looked really interesting so I just gave it a try. Here's my initial approach:</p>
<pre><code>1. Used a 1D conv net with 10 pooling layers, kernel size 9, filter count 256 for the first layer. Almost the same as ttagu99's, but replaced GAP+GMP with GMP and just used a single FC with no dropout.
2. Split train/val by person id, and train/predict on all 10 folds.
3. Listened to mis-predicted samples from the validation set (around 2000?) and noticed some mislabeled samples and samples without any voice in them. A smarter approach would be to find an algorithm to identify these automatically, but I just listened to them. Eventually I identified 640 silences, 121 mislabels.
4. Concatenated all noise and the identified silences in the training set into a single wav, which is randomly sampled during augmentation.
5. Augmentation:
   a. Time-shift augmentation: many samples are clipped at the start or end, so I thought it's better not to cut these out. So I just randomly padded samples front &amp; back with random noise and increased the PCM sample count to 20k. I didn't expect this augmentation to help much, but somehow it helped somewhat.
   b. Noise augmentation: added up to 0.5x noise and it improved the LB a little bit.
   c. Tried other augmentations like pitch, volume, speed, but they didn't help much or even harmed the performance.
</code></pre>
</code></pre>
<p>With this approach, I got LB score of .87 and couldn't increase the performance anymore with 1D conv. Tried some 2D Conv approach but didn't work well.</p>
<p>Later I formed a team with @Ildoo Kim, who used high resolution mel spectrograms + a VGG-like network and had a similar score to mine. We got an immediate boost after merging my augmentation and Ildoo's model. After some more fiddling with models, we got a little bit of improvement. But we were stuck around .88 with a single model, .893 with 5-model ensembles for a while. </p>
<p>I concluded that the model was large enough, so I worked more on the data and found that adding heavy noise augmentation, while keeping the noise vs signal ratio from exceeding 2 (yes, the noise can be twice as loud as the voice), boosted the score a lot. Just with this augmentation, the same model (a resnet-like net) got a public LB score of .898</p>
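<p>A rough numpy sketch of this kind of augmentation: placing the (possibly clipped) word at a random offset in a 20k-sample window and adding background noise up to twice the signal level. The RMS-based scaling is an illustrative assumption, not the exact implementation.</p>
<pre><code>import numpy as np

def augment(wav, noise_bank, out_len=20000, max_ratio=2.0):
    # noise_bank: one long concatenated background-noise waveform
    # time shift: place the (possibly clipped) word at a random offset inside a longer window
    left = np.random.randint(0, out_len - len(wav) + 1)       # assumes len(wav) <= out_len
    x = np.zeros(out_len, dtype=np.float32)
    x[left:left + len(wav)] = wav

    # heavy additive noise: noise can be up to max_ratio (2x) louder than the voice
    start = np.random.randint(0, len(noise_bank) - out_len)
    noise = noise_bank[start:start + out_len]
    ratio = np.random.uniform(0.0, max_ratio)
    sig_rms = np.sqrt(np.mean(wav.astype(np.float32) ** 2)) + 1e-8
    noise_rms = np.sqrt(np.mean(noise.astype(np.float32) ** 2)) + 1e-8
    return x + noise * ratio * sig_rms / noise_rms
</code></pre>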
<p>Unfortunately we found this 2 days just before deadline and didn't have much time and submissions to experiment more. So we just trained a few more models and ensembled blindly even without checking individual scores.</p>
<p>One interesting thing was that my original 1D model didn't work well after adding heavy noise, so I dropped it altogether. But later I found that the 1D model can also get a better score once I add more capacity to the network.</p>
<p>In my case, I got better result with BatchNormalization instead of Dropout at FC layer.</p> |
TensorFlow Speech Recognition Challenge | Neural attention for speech recognition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. We are just waiting for a response but I think we will make the Git public anyways by the end of the month.</p>
<p>We are just waiting for acceptance note from the Journal. We expect it in under a month.</p>
<p>We got a reviewer assigned so we are more comfortable with posting our code: <a href="https://github.com/douglas125/SpeechCmdRecognition">https://github.com/douglas125/SpeechCmdRecognition</a></p>
<p>Well, our paper went to review so we posted our code: <a href="https://github.com/douglas125/SpeechCmdRecognition">https://github.com/douglas125/SpeechCmdRecognition</a></p>
<p>Hello;</p>
<p>Some friends and I are working on other speech challenges and we used the data from this competition (and V2, released later) to test our models.</p>
<p><a href="https://arxiv.org/abs/1808.08929">https://arxiv.org/abs/1808.08929</a></p>
<p>EDIT: Our code is now available at <a href="https://github.com/douglas125/SpeechCmdRecognition">https://github.com/douglas125/SpeechCmdRecognition</a></p>
<p>Best regards;</p>
<p>Douglas</p> |
TensorFlow Speech Recognition Challenge | Newbie solution. Best private score 0.90908. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Almost forgot: before mixing up two waves, I used data augmentation, which is the same as in the tutorial,
<a href="https://paste.ubuntu.com/26409540/">code here, lines 67-78</a>. </p>
<p>Hi, thank you all. This is my first time to Kaggle. I learned a lot in this competition. Congrats to Heng CherKeng, Ryan Sun, and See!</p>
<p>My best private score is 0.90908. What I chose as final submissions scores 0.90790 :).</p>
<p>Here is my solution. A very simple solution... Code comes from <a href="https://www.tensorflow.org/tutorials/audio_recognition">Google tutorial</a> mostly.</p>
<ul>
<li><p>I used vgg11 with an input of 40 * 98 log mel spectrogram. I got 0.86 private LB. After log mel + mfcc ensemble, I got 0.88 private LB.</p></li>
<li><p>I used mixup, I got 0.89 private LB. After ensemble, I got 0.909 private LB, my best LB score. <a href="https://arxiv.org/abs/1710.09412">paper link</a></p></li>
<li>knowledge distillation to train a smaller model. The small model scores 0.90 private LB, which runs very fast. <a href="https://arxiv.org/abs/1503.02531">paper link</a></li>
</ul>
<p><strong>Code</strong></p>
<p>mixup:</p>
<pre><code>import numpy as np

def mixup_batch(data, labels, alpha=2.0):
    # data: (batch, ...) array of features, labels: (batch, num_classes) one-hot targets
    batch_size = data.shape[0]
    weight = np.random.beta(alpha, alpha, batch_size)
    # reshape so the per-sample weight broadcasts over the remaining feature dimensions
    x_weight = weight.reshape((batch_size,) + (1,) * (data.ndim - 1))
    y_weight = weight.reshape(batch_size, 1)
    index = np.random.permutation(batch_size)
    x1, x2 = data, data[index]
    x = x1 * x_weight + x2 * (1 - x_weight)
    y1, y2 = labels, labels[index]
    y = y1 * y_weight + y2 * (1 - y_weight)
    return x, y
</code></pre>
<p>alpha is set to 2</p>
<p>Knowledge distillation code <a href="https://github.com/chengshengchan/model_compression">here</a>.</p>
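<p>For reference, a minimal numpy sketch of the standard distillation loss from the linked Hinton et al. paper; the temperature and the weighting are illustrative, and the linked repository's implementation may differ.</p>
<pre><code>import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, y_onehot, T=4.0, alpha=0.7):
    """alpha-weighted mix of soft-target cross-entropy (teacher softened at temperature T)
    and the usual hard-label cross-entropy; the T**2 factor keeps gradient scales comparable."""
    soft_teacher = softmax(teacher_logits, T)
    soft_student = softmax(student_logits, T)
    hard_student = softmax(student_logits, 1.0)
    soft_ce = -np.mean(np.sum(soft_teacher * np.log(soft_student + 1e-8), axis=1)) * (T ** 2)
    hard_ce = -np.mean(np.sum(y_onehot * np.log(hard_student + 1e-8), axis=1))
    return alpha * soft_ce + (1 - alpha) * hard_ce
</code></pre>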
<p>Yes.</p>
<p>Another point worth noting: if you want to ensemble a mixup-trained model with a non-mixup model, you should weight the models instead of taking a plain average.
For example, if model A used mixup, model B didn't, and C is the result, then when ensembling use C = 0.5*B + A. You can also try other numbers; 0.5 works well for my models.
The reason is that after mixup the score of each class will be lower, which is not difficult to understand if you know the formula of mixup. If you just take the average, B will dominate the ensemble, which may give you a bad result.</p>
<p>Setting alpha to 2 was inspired by a blog post, and this was my first attempt; I found it works well for me. I tried 2, 4 and 6, and 2 worked best. If you want to experiment more, I think you can try setting alpha lower, such as 0.2 or 0.4, and so on.</p>
<p>log mel feature: <a href="https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/discussion/45362">https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/discussion/45362</a></p>
<p>mixup: <a href="https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/discussion/47134">https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/discussion/47134</a> </p>
<p>knowledge distillation: <a href="https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/discussion/46945">https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/discussion/46945</a></p>
<p>I just mix the methods above up. -.-</p> |
TensorFlow Speech Recognition Challenge | Our approach [4th place] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to all participants and congratulations to the winners!</p>
<p>I’ll describe my approach and our final solution. My colleagues will provide more details about their techniques and tricks as well.</p>
<p>I started to participate in this competition three weeks ago. It was enough for training many different models and building an ensemble.</p>
<p>I split the train dataset into 10 folds by speaker_id and generated 6000 silence files using the background noise files provided by the organizers. For the next 1.5 weeks I was training different neural networks.</p>
<p>Almost all of my models have VGG-like and ResNet-like architectures. I just used different numbers of layers and different numbers of filters in the convolutional layers. Also, I used different types of .wav preprocessing:</p>
<ul>
<li>FFT</li>
<li>MFCC</li>
<li>Mel spectogram</li>
<li>Chroma fft</li>
<li>Tempogram</li>
<li>Raw 16000-dimensional vector</li>
</ul>
<p>My CNNs were trained using an imbalanced dataset. It was okay because I did not want to use these models as is. I also trained several RNNs to increase diversity in the ensemble. For augmentation I used:</p>
<ul>
<li>Pitch shift</li>
<li>Time stretch</li>
<li>Time shift</li>
<li>Random noise</li>
</ul>
<p>Next, for each fold I extracted predictions from each model. I did not apply softmax at the outputs to keep more information about predictions. Then, I trained xgboost on class-balanced data and got #11 place.</p>
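<p>A rough sketch of that stacking step, with out-of-fold L1 outputs as features for an L2 booster (names and hyperparameters are illustrative):</p>
<pre><code>import xgboost as xgb

def fit_l2(oof_preds, y, test_preds):
    """oof_preds: (n_samples, n_models * n_classes) raw pre-softmax outputs of the L1 models,
    each row predicted by models that did not see that sample during training."""
    clf = xgb.XGBClassifier(n_estimators=500, max_depth=6, learning_rate=0.05,
                            subsample=0.8, colsample_bytree=0.8)
    clf.fit(oof_preds, y)                    # y can be class-rebalanced by resampling beforehand
    return clf.predict_proba(test_preds)     # stacked class probabilities for the test set
</code></pre>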
<p>It was time to merge with other participants :) We merged with Aleksey, Giba, Dmytro and feels_g00d_man. The next challenge was to merge our solutions. Averaging best submissions did not work and did not improve our place at leaderboard.</p>
<p>We combined all L1 models and created one big csv file. It was passed to many different L2 models:</p>
<ul>
<li>XGBoost</li>
<li>LGBM</li>
<li>Catboost</li>
<li>Simple neural networks</li>
<li>Random forest</li>
<li>Extra trees</li>
<li>Adaboost</li>
<li>kNN</li>
<li>etc.</li>
</ul>
<p>I tried almost everything :) Special like to LGBM because this implementation of gradient boosting is very-very fast. XGBoost needs ~8 hours for training 10 folds on GPU and ~14 hours on CPU while LGBM needs only 15 minutes on CPU!</p>
<p>Next, I used a weighted geometric mean of the L2 models (thus, it was a very simple L3 model).
To find optimal weights I built a simple neural network with a custom regularizer (all weights >= 0 and the sum of weights equal to 1).</p>
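<p>One way to sketch that weight search is to parameterize the weights with a softmax, so that non-negativity and summing to one hold automatically, and then optimize the blend's log-loss; this is a hypothetical variant, not the exact custom regularizer used.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def blend_geometric(probs_list, theta):
    w = np.exp(theta) / np.exp(theta).sum()              # weights >= 0 and sum to 1 by construction
    log_blend = sum(wi * np.log(p + 1e-15) for wi, p in zip(w, probs_list))
    blend = np.exp(log_blend)
    return blend / blend.sum(axis=1, keepdims=True)      # renormalize the geometric mean

def fit_weights(probs_list, y_true):
    def neg_log_likelihood(theta):
        p = blend_geometric(probs_list, theta)
        return -np.mean(np.log(p[np.arange(len(y_true)), y_true] + 1e-15))
    res = minimize(neg_log_likelihood, x0=np.zeros(len(probs_list)), method='Nelder-Mead')
    return np.exp(res.x) / np.exp(res.x).sum()
</code></pre>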
<p><strong>SILENCE TRICK</strong></p>
<p>It was very interesting to work with silence because we didn’t have train examples for this class. I found a very simple way to increase the LB score. Let’s look at the test predictions and their probabilities and sort them by confidence (maximal probability). Next, let’s select the top K samples with the lowest confidence (I used K=10000) and compute power_level = np.max(librosa.feature.melspectrogram). Then, let’s interpret all selected samples with power_level < L (I used L=1) as silence. This simple trick allowed us to increase the LB score by ~0.005</p>
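<p>A minimal sketch of that heuristic (the file handling around it is illustrative):</p>
<pre><code>import numpy as np
import librosa

def find_silence_candidates(test_paths, probs, k=10000, power_threshold=1.0):
    """Among the k least confident test clips, flag the very quiet ones as silence."""
    confidence = probs.max(axis=1)               # maximal class probability per clip
    low_conf = np.argsort(confidence)[:k]        # the k least confident samples
    silence_idx = []
    for i in low_conf:
        y, sr = librosa.load(test_paths[i], sr=16000)
        power_level = np.max(librosa.feature.melspectrogram(y=y, sr=sr))
        if power_level < power_threshold:
            silence_idx.append(i)
    return silence_idx
</code></pre>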
<p>After some time, we have replaced this trick with a separate silence/no-silence model trained using semi-supervised approach. It slightly increased our score (+0.001 on private LB).</p>
<p><strong>UNKNOWN TRICK</strong></p>
<p>Since test set contains unknown unknowns we needed to find a way to work with such samples. feels_g00d_man found a very interesting method to do it. His approach described below.</p>
<p><strong>OVERALL</strong></p>
<p>The test dataset was very strange and we could not find a way to validate our models. CV score did not correlate with public LB. Usually, increasing the CV score led to a decreasing LB score, and it was annoying. Also, the hardest problem was how to select 3 final submissions. We expected a big shake-up and I was pleasantly surprised when I saw the private LB. For final submissions we selected the best submit based on public LB, a mode blend of 16 different submissions, and one submit with a large number of unknowns in the predictions. Our best public submit is our best private as well. </p>
<p>Thanks to the organizers, it was a very interesting competition for me because I have never worked with speech data and because of challenge with silence and unknown words :)</p>
<p>I also want to thank all my teammates, it was very cool to work with such great data scientists!</p>
<p>See you in other competitions :)</p>
<p><strong>feels_g00d_man's comment</strong></p>
<p>I teamed up with the guys right before the merger deadline. So my initial solution was to train around 10 models on all 31 classes using only spectrograms on 5 folds with TTA and stack them with L2 xgboost. After teaming up I was focusing on training L1 models in our big ensemble. Also I trained a separate model for unknown unknowns by generating "double words" and we replaced the unknown unknowns from this model in our final ensembles (that gave 0.24 boost on private LB).
I pushed source code, described approach and "double words" on <a href="https://github.com/heyt0ny/TensorFlow-Speech-Recognition-Challenge-Solution">https://github.com/heyt0ny/TensorFlow-Speech-Recognition-Challenge-Solution</a></p>
<p><img src="https://i.imgur.com/yeh7Sh7.png" alt="Double words"></p> |
TensorFlow Speech Recognition Challenge | Top 5% Solution Source Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://github.com/subho406/TF-Speech-Recognition-Challenge-Solution">https://github.com/subho406/TF-Speech-Recognition-Challenge-Solution</a></p>
<p>Congratulations to all the winners! This was a wonderful experience. Also thank you Google for the free credits, without which I cannot imagine taking part in the competition. And thank you Heng for your posts throughout the competition; your tips helped me jump many places in the leaderboard. Although I did not get a good enough rank, I still think my source code can serve as a template for beginners in upcoming Kaggle challenges. </p>
<p>My solution was an ensemble of 13 models ensembled using weighted averaging and stacking. No external dataset was used. The training data was augmented using randomly sampled background noise and time shifting.</p>
<p>The list of the Models used:</p>
<ol>
<li><p>A variant of Convolutional LSTM (<a href="https://arxiv.org/pdf/1610.00277.pdf">https://arxiv.org/pdf/1610.00277.pdf</a>) </p></li>
<li><p>LSTM-L (<a href="https://arxiv.org/pdf/1711.07128.pdf">https://arxiv.org/pdf/1711.07128.pdf</a>) </p></li>
<li><p>C-RNN (<a href="https://arxiv.org/pdf/1711.07128.pdf">https://arxiv.org/pdf/1711.07128.pdf</a>) </p></li>
<li><p>GRU-L (<a href="https://arxiv.org/pdf/1711.07128.pdf">https://arxiv.org/pdf/1711.07128.pdf</a>) </p></li>
<li><p>Resnet</p></li>
</ol>
<p>The features used were MFCC and audio spectrogram. After initial training on the actual dataset, each model was retrained on a combination of the train set and the pseudo-labelled test set (only predictions with 95%+ confidence were used). </p>
<p>In my case weighted averaging worked better than stacking using linear regression on private leaderboard. </p>
<p>The entire project was very well structured, modular to make the training and analysis easier. All models were implemented using Tensorflow 1.4. The entire source is available on GITHub. </p>
<p><a href="https://github.com/subho406/TF-Speech-Recognition-Challenge-Solution">https://github.com/subho406/TF-Speech-Recognition-Challenge-Solution</a></p> |
TensorFlow Speech Recognition Challenge | VGG19-BN scoring 0.89839, others >0.90 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Some people had really good results with mixup: <a href="https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/discussion/47730">https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/discussion/47730</a></p>
<p>So we decided to implement a naive mixup mixing the mel-spectrograms: <a href="https://github.com/tugstugi/pytorch-speech-commands">https://github.com/tugstugi/pytorch-speech-commands</a></p>
<p>With this simple mixup, even VGG could get 0.89839. Other models could reach at least 0.90.</p>
TensorFlow Speech Recognition Challenge | what i have learned and moving forward | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow Speech Recognition Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>for each LB sample, we have p = [p0 p1 ... p11] probability values. We extract label=argmax(p) and confidence c = p[label]. </p>
<p>Now for silence and unknown, they are included in pseudo-label set only if confidence c>0.5</p>
<p>For the known classes, we use confidence c>0.8</p>
<p>These threshold values are varied when training new models, to improve diversity.</p>
<p>Thanks! I am trying out time series solution too!</p>
<h2>HENG SOLUTION</h2>
<p>[ my solution ]</p>
<ul>
<li><p>Most of the details are already posted. We are doing document and code clean-up for the prize submission. Will share with you guys later. In short summary:</p>
<ul><li><p>an ensemble of about 30 models comprising wave, log mel spectrogram and mfcc inputs. </p></li>
<li><p>My models are weak, in the range of 0.86. My teammates' (@Ryan, @See) models are stronger, in the range of 0.88</p></li>
<li><p>mostly convolution networks</p></li>
<li><p>pseudo labeling to train some of the networks (not all)</p></li></ul></li>
</ul>
<p>[ my approach to this competition ]</p>
<ul>
<li><p>Each kaggle competition is different. For some competitions, the winning factor could be:</p>
<ul><li><p>feature engineering (or network design), e.g. the carvana car segmentation</p></li>
<li><p>dealing with noisy label, e.g. amazon satellite image classification</p></li>
<li><p>dealing with large data+category and efficiency, e.g cdiscount e-commerce product image classification</p></li></ul></li>
</ul>
<p>Most of the time, it is combination of the above. For this competition, i think the main challenge is dealing with data domain shift, i.e. "train+validation data" and "LB data" are different. Why?</p>
<ul>
<li><p>the gap between validation score and LB score is large (12% to 8%)</p></li>
<li><p>the trained model is sensitive to the class distribution of the training data, e.g. if you just train with random sampling, the simple cnn_trad_pool2_net gives less than 0.80 on the public LB, but if you use balanced class sampling, it increases to 0.82</p></li>
<li><p>the trained model is also sensitive to the type of silence training samples, the amplitude of the noise level, etc.</p></li>
<li><p>lastly I noticed that the unknowns in the LB data are not the same as those in the train+valid set</p></li>
</ul>
<p>I am not familiar with speech recognition or audio processing, hence I think it would be difficult for me to design a good network. So I focused on the data instead. I used the simplest approach: create train data from the LB set.</p>
<p>Pseudo labeling may "overfit" and can be a dangerous approach. I needed to make sure the pseudo-labelled LB samples had correct labels and the correct distribution. The way to do this is:</p>
<ul>
<li><p>let T = train set, V= validation set, L = LB set, </p></li>
<li><p>M = model trained on T</p></li>
<li><p>P = subset of L, labelled by M with label noise < e.g 10%</p></li>
<li><p>N = model trained on T+P</p></li>
<li><p>Accept N if accuracy of N is better than M on both V and L </p></li>
</ul>
<p>you can relax the last acceptance test. Note if P={empty}, you have the original results. You can modify P (e.g. by different thresholds like 5%, 15%, 20%, etc.) until you pass the acceptance test. We actually use different thresholds for different classes (silence, unknowns, knowns) to ensure that the pseudo-labelled set is more balanced. </p>
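<p>In pseudo-Python, the acceptance loop looks roughly like this; the helper names are placeholders for whatever training and scoring code is at hand:</p>
<pre><code># Hypothetical sketch of the pseudo-labelling acceptance test described above.
def grow_training_set(train_model, score, T, V, L, thresholds):
    # T, V, L: train, validation and LB data; '+' below is just dataset concatenation
    M = train_model(T)                                  # baseline model on the train set only
    base_v, base_lb = score(M, V), score(M, L)          # score on L = (public) LB accuracy
    for thr in thresholds:                              # e.g. per-class confidence cutoffs (5%, 10%, 15% noise)
        P = select_pseudo_labels(M, L, thr)             # subset of LB data labelled by M (placeholder helper)
        N = train_model(T + P)
        if score(N, V) >= base_v and score(N, L) >= base_lb:
            return N                                    # accept: better (or equal) on both V and LB
    return M                                            # otherwise keep the original model
</code></pre>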
<p>[ some surprises of the competition ]</p>
<ul>
<li>We saw ourselves in the top 5 but did not expect to be the winner. We had a bug and mistakenly used a model twice (due to a typo in the code), and this model has a much higher weight than the rest. This buggy ensemble is our winning submission, private LB 0.91060
(public LB 0.90296), which is also attached below.</li>
</ul>
<p>Later, it turned out that our highest private LB, 0.91107 (public LB 0.90241), is another ensemble. It is hard to make the selection given only 2 decimals are revealed during the competition.</p>
<ul>
<li><p>1d wave input actually works! </p></li>
<li><p>sometimes high-scoring models do not improve when ensembled, especially those greater than LB 0.88. I compared the confidence probability scores of weak (LB 0.86) and strong (LB 0.88) models. For strong models, the sample scores are mostly very near to 1 or 0, so I think it is very hard to change the scores of the test samples. </p></li>
</ul>
<p>[ how to go on ]</p>
<ul>
<li><p>I take this learning path: be a master in training (hyperparameter tuning + data augmentation), then be a master in network design, and then be an ensemble expert.</p></li>
<li><p>After the basics above, I think semi/weakly supervised learning is one way to go. From a competition point of view, being able to automatically label the LB dataset and use it for training is very powerful. (My next competition is National Science Bowl 2018, where I hope to use a GAN to generate "LB data" with labels)</p></li>
<li><p>I want to make an LB score predictor</p></li>
<li><p>I want to make a better way to determine the ensemble weights, e.g. formulate the ensemble weights based on the score distribution (which is a rough indication of error, err = 1 - max(P_i)). Maybe I can refer to boosting.</p></li>
<li><p>I want to run some of the kaggle solutions that use CRNN and LSTM. I believe that is the correct way to do speech.</p></li>
</ul>
<hr>
<h2>SEE SOLUTION</h2>
<h1>Overview of my approach</h1>
<p>I started with the provided <a href="https://www.tensorflow.org/versions/master/tutorials/audio_recognition">tutorial</a> and could easily get better results by just adding momentum to the plain SGD solver (82-83% on the leaderboard). I have no prior experience with audio data and mostly used deep learning with images. For this domain you don't use features but feed the raw pixel values. My thinking was that this should work with audio data as well. Throughout the competition I ran experiments using raw waveforms, spectrograms and log mel features as input. I got similar results using log mel and raw waveform (86%-87%) and used the waveform data for most experiments as it was easier to interpret for me.</p>
<p>For the special prize the restrictions were: the network is smaller than 5,000,000 bytes and runs in less than 175ms per sample on a stock Raspberry Pi 3. Regarding the size, this allows you to build networks that have roughly 1,250,000 weight parameters. So by experimenting with these restrictions I came up with an architecture that uses Depthwise1D convolutions on the raw waveform. Using <a href="https://arxiv.org/pdf/1503.02531.pdf">model distillation</a> this network predicts the correct class for 90.8% of the private leaderboard samples and runs in roughly 80ms.</p>
<h1>What didn't work</h1>
<ul>
<li><p>Fancy augmentation methods: I tried flipping (i.e: <code>* -1.0</code>) the samples. You can check that they will sound exactly the same. I also modified <code>input_data.py</code> to change the foreground and background volume independently and created a separate volume range for the silence samples. My validation accuracy improved for some experiments but my leaderboard scores didn't.</p></li>
<li><p>Predicting unknown unknowns: I didn't find a good way to consistently predict these words. Often, similar words were wrongly classified (e.g. one as on).</p></li>
<li><p>Creating new words: I trained some networks with even more classes. I reversed the samples from the known unwanted words, e.g. <code>bird</code>, <code>bed</code>, <code>marvin</code>, and created new classes (<code>bird</code> -> <code>drib</code> ...). The idea was to have more unknowns to prevent the network from wrongly mapping unknowns to the known words. For example the word <code>follow</code> was mostly predicted as <code>off</code>. However, neither my validation score nor my leaderboard score improved.</p></li>
<li><p>Cyclic learning rate schedules: The winning entry of the <a href="http://blog.kaggle.com/2017/12/22/carvana-image-masking-first-place-interview/">Carvana Image Masking Challenge</a> used cyclic learning rates, but for me the results got worse and there were additional hyperparameters. Maybe I just didn't implement it correctly.</p></li>
</ul>
<h1>What worked</h1>
<ul>
<li><p>Mixing tensorflow and Keras: Both frameworks work perfectly together and you can mix them wherever you want. For example: I wrapped the provided data AudioProcessor from <code>input_data.py</code> in a generator and used it with <code>keras.models.Model.fit_generator</code>. This way, I could implement new architectures really fast using Keras and later just extract and freeze the graph from the trained models.</p></li>
<li><p>Pseudo labeling: I used consistent samples from the test set to train new networks. Choosing them was based on a.) my three best models agreeing on the prediction (I used this version at early stages of the competition), or b.) using a probability threshold on the predicted softmax probabilities. Typically, the samples above <code>pseudo_threshold=0.6</code> were the ones that our ensembled model predicted correctly. I also implemented a schedule for pseudo labels, that is: for the first 5 epochs you only use pseudo labels and then gradually mix in data from the training data set. Though I didn't have time to run these experiments, so I kept a fixed ratio of training and pseudo data.</p></li>
<li><p>Test time augmentation: It is a simple way to get some boost. Just augment the samples, feed them multiple times and average the probabilities. I tried the following: time-shifting, increase/decrease the volume and time-stretching using <code>librosa.effects.time_stretch</code>.</p></li>
</ul>
<hr>
<p>I posted our submission results here; the raw probability score is normalised from [0,1] to [0,255]</p>
<p>the final ensemble for submission:</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/279938/8533/overall.png" alt="enter image description here"></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/279938/8532/final.png" alt="enter image description here"></p> |
Text Normalization Challenge - English Language | #21 solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Text Normalization Challenge - English Language
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, I don't know whether Kaggle supports image uploading or not. I just use imgur.com to upload images. </p>
<p>Or maybe you can upload your image to your GitHub repository.</p>
<p>Thanks for sharing, I'm happy to see that my approach to handling 'to' is similar to yours. And may I ask about the class prediction accuracy of your CNN model? </p>
<p>Well, I still needed to find the patterns of some normalization cases myself. But I think some top teams may not have; they may have let the computer do everything without needing to find the patterns themselves. </p>
<p>In other words, I think I am competing with many robots! (positive meaning)</p>
<p>My solution is based on <a href="https://www.kaggle.com/alphasis/bigdata-trick-or-treat-lb-0-9954">BingQing Wei's public kernel</a>, then I use several step to optimized it:</p>
<h3>1. Use xgboost to predict test cases' class:</h3>
<p>The model is similar to <a href="https://www.kaggle.com/alphasis/xgboost-with-context-label-data-acc-99-637">XGboost With Context Label Data (ACC: 99.637%)</a> (the author is also BingQing Wei, big thanks to his work)</p>
<p>In addition, I used an extra xgboost model to predict whether a 4-digit number is 'DATE' or 'CARDINAL'.</p>
<h3>2. For some class, use customized normalize function to deal with it:</h3>
<p>I treated MEASURE, DATE, MONEY, DECIMAL, CARDINAL, and DIGIT this way, because they have specific forms. Each customized normalize function can reach from 98.9% to 99.7% accuracy. (But my customized normalize functions can't handle some rare cases, such as 'Sept. 21th 2017'. I wonder whether the top teams had a smarter way.)</p>
<p>For example, to deal with the 'DECIMAL' class, I use a function to normalize it:</p>
<pre><code>def decimal2word(key):
    #
    # 100% acc if change
    #
    # is_decimal() and digit2word() are helper functions defined elsewhere in my code
    if len(key.split()) == 2:
        # e.g. 0.21 million
        unit_words = ['hundred', 'thousand', 'million', 'billion']
        if not is_decimal(key.split()[0]):
            return key
        else:
            if (key.split()[1]).lower() in unit_words:
                return decimal2word(key.split()[0]) + ' ' + (key.split()[1]).lower()
            else:
                return key
    else:
        if not is_decimal(key):
            return key
        digit_dict = {'0': 'o', '1': 'one', '2': 'two', '3': 'three', '4': 'four',
                      '5': 'five', '6': 'six', '7': 'seven', '8': 'eight', '9': 'nine'}
        out = []
        if key[0] == '.':
            # e.g. .021 to point o two one
            out.append('point')
            for v in key.replace('.', ''):
                out.append(digit_dict[v])
        else:
            n1, n2 = str(int(key.split('.')[0])), key.split('.')[1]
            out.append(digit2word(n1))
            out.append('point')
            if len(n2) == 1 and n2[0] == '0':
                out.append('zero')
            else:
                for v in n2:
                    out.append(digit_dict[v])
        word = ' '.join(out)
        return word
</code></pre>
<h3>3. Use xgboost to deal with binary ambiguous case:</h3>
<p>Binary ambiguous cases are tokens like '-' and ':', which have two possible target norms: the original char and 'to'. With an xgboost model, it's possible to reach ~98% precision and ~99.3% recall.</p>
<h2>Thanks for reading</h2>
<p>P.S. I feel like this picture after finishing this competition. Waiting for more intelligent solutions!
<img src="http://cdn3.collective-evolution.com/assets/uploads/2014/10/Human-Computer.jpg" alt="act as ai" title=""></p> |
Text Normalization Challenge - English Language | 19th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Text Normalization Challenge - English Language
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is my 19th place solution for the english text normalization challenge. I’ve gone into a bit of detail into the thought process that was used.</p>
<h2>Looking at the objective</h2>
<p>The required task for this competition is to correctly specify the spoken form of the tokens which make up sentences. <br>
A straightforward translation of a single input to a single output would not be practical because of the very different transformations required for different cases. The types of transformations required can be broken down into two main categories: replacements and transformations.
In the case of replacements we are replacing the input token with fixed strings. There are three possibilities.</p>
<p>Same replace: The output text is exactly the same as the input.</p>
<p>Single replace: The output text is different from the input, but it is always the same for the same input. For example ‘privatise’ is replaced by ‘privatize’.</p>
<p>Multi replace: There are tokens which have more than one possible string as the output, so the correct one would have to be chosen based on the context in the sentence. For example ‘st’ can have either ‘saint’ or ‘street’ as a replacement.</p>
<p>In order to do these replacements we need to have the replacement strings available in advance, so this is only possible for tokens that occur in the training set. For ‘Single’ and ‘Multi’ replace it is straightforward to create a dictionary of tokens and their replacements using the data available at training time. For ‘Same’ replacements though, there will also be tokens in the test set that are not found in the training data. These tokens would have to be identified by a classifier as being candidates for ‘Same’ replacement.
Aside from tokens that can be replaced with text constants, there are also tokens for which the output is not a replacement but is generated from the input token. In cases like these an input token from the test set that does not occur in the training set needs to undergo the same transformation, since its replacement is not available. Obviously all types of transformations have to be known in advance. An example of a transformation for a number is ‘20’ which is transformed to ‘twenty’. Transformations are also applied to words and other types of text, such as dates.
The majority of the transformations or replacements can be identified and done without any kind of machine learning, but some type of classifier is needed for the remaining few where the type of transformation needed is not so obvious. For example most purely alphabetic tokens end up unchanged but there are some that must be transformed in some way, such as being spelled as letters. These exceptions are the ones that need to be identified based on the context surrounding the token.</p>
<h2>The overall approach</h2>
<p>The approach which seems best for this problem is to first determine what type of replacement or transformation is needed and then apply it. We know that if the correct replacement or transformation is applied the output will always be correct. So the difficult part then is to classify a token and determine the appropriate operation to get the output.
The training set does include a ‘class’ column which categorizes the token into classes for transformation. This information is not available for the test set.</p>
<p>The different classes are:</p>
<p>'PLAIN', 'PUNCT', 'DATE', 'LETTERS', 'CARDINAL', 'VERBATIM', 'DECIMAL', 'MEASURE', 'MONEY', 'ORDINAL', 'TIME', 'ELECTRONIC', 'DIGIT', 'FRACTION', 'TELEPHONE', 'ADDRESS'</p>
<p>Looking at some of the classes in the data I could see that simply using these categories for determining transformations was not going to be adequate. The reason being that for many of the classes there can be more than one possible transformation to be applied. For example the token “XXIII” in the ORDINAL class can be transformed into ‘the twenty third’ as well as ‘twenty third’. Also this same token exists in the CARDINAL class and is transformed into “twenty three”. So there are a total of three possible transformations.</p>
<h2>Reclassing</h2>
<p>For this token then, a better approach would be to have three classes, one for each type of transformation. With this in mind I categorized the dataset into about 40 classes where each class represents one type of transformation. Aside from the one class to one transformation benefit, this method also eliminates the need to include some of the new classes in the classifier. This is because these classes can be determined simply by using regex methods. For example, if we look at the date tokens, the input token '19 July 1946' which transforms to 'the nineteenth of july nineteen forty six' is guaranteed to not be mistaken for any other class. So it can be detected and transformed without ever being considered by the classifier, making its job easier.</p>
<h2>Sub-tokens</h2>
<p>Since all input tokens are categorical, one issue that arises is that it is hard for the classifier to distinguish between tokens of different types. It is hard to distinguish between alphabetic tokens, numeric tokens, punctuation tokens, or tokens that are a mixture of different types.
Another issue is that of generalization. If we have the token ‘12345’ in the train set the classifier could learn that it is a number and should be categorized as one of the number classes. But this does not help it classify the token ‘54321’ if it only shows up in the test set.
To make it easier for the classifier to recognize token types each token is split up into sub-tokens. This is done by splitting the token text into words, numbers and punctuation characters etc. A maximum of 10 sub-tokens are produced for each token. Fig 1 shows an example of this.
<img src="https://i.imgur.com/rbm5tFf.jpg" alt="Fig 1" title=""></p>
<h2>The classifier</h2>
<p>For the classification portion of this task I decided to go with an LSTM based neural net. The net uses an encoder and decoder approach with attention. I used Pytorch because of its flexibility, which made it easier I think to implement the attention code and some other features. The inputs to the neural net are:</p>
<ul>
<li>The token index.</li>
<li>The sub-token type index.</li>
<li>The sub-token length index.</li>
<li>The word2vec embedding for the token.</li>
</ul>
<p>All categorical indexes are fed into embedding layers in the encoder and decoder. This setup can be seen in fig 2.
<img src="https://i.imgur.com/xni1NdO.jpg" alt="Fig 2" title=""></p>
<h2>Limiting output types</h2>
<p>If you take a look at tokens in the dataset you might realize that for most tokens we can only classify them as one or more classes which are a subset of all the classes, meaning that we can exclude many of the classes from consideration. For example the number ‘12345’ should only be classified as some kind of number class and should not ever be classified as an alphabetic type class. Limiting the outputs of the neural net so that only valid classes are allowed means an increase in classification accuracy. I implemented this type of class limiting by adding a large positive constant to the raw outputs (before softmax) of the valid classes, as produced by the neural net. This ensures that the class with the highest probability will be one of the valid classes.</p>
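<p>A small numpy sketch of that masking idea (a sketch, not the exact implementation):</p>
<pre><code>import numpy as np

def restrict_to_valid(logits, valid_mask, boost=1e6):
    """Add a large constant to the raw scores of the classes that are valid for this token,
    so that after softmax/argmax only a valid class can win."""
    return logits + boost * valid_mask           # valid_mask: 1.0 for allowed classes, 0.0 otherwise

# usage: a purely numeric token should never end up in an alphabetic class
logits = np.array([0.2, 1.5, -0.3, 0.9])
valid_mask = np.array([0.0, 0.0, 1.0, 1.0])      # only the last two classes are plausible here
pred = np.argmax(restrict_to_valid(logits, valid_mask))
</code></pre>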
<h2>Ensembling sort of...</h2>
<p>The majority class in this dataset is the one for ‘same replacement’. In order to do some kind of ensembling I trained two models. The first model was trained without any compensation for the class imbalance, so it was good at classifying the majority class at the expense of the less frequently occurring classes. The second model was trained with the same data, but only a percentage of the tokens for the most frequent class (selected at random) had their error used in updating the weights of the network. This focuses the network on learning to classify the less frequent classes at the expense of the more frequent ones.
When generating the final output the predictions from the first model are used for ‘same replacement’, while predictions for all other classes are taken from the second model.</p>
<h2>Training and validation</h2>
<p>For testing locally the train set was split up with 20% of the data used for validation. The optimizers used for the encoder and decoder were Adam optimizers, and the loss function was cross entropy loss. A batch size of 64 was chosen and each model was trained for 10 epochs. I didn’t get a chance to do any fine-tuning which I think would have improved my score.
When training for the actual test set the entire train set was used for training the neural net. It took about 6 hours to train two models on a GTX1060 gpu.
In addition to the data made available for the competition I also made use of the publicly available data on github for creating the replacement dictionaries. This was an improvement over making the dictionaries using only the train set.</p> |
Text Normalization Challenge - English Language | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Text Normalization Challenge - English Language
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Didn't you try to add a 3rd level with a context-aware classifier on top of the output from the 2nd? I had an idea of a recursive classifier on adjacent words - running it iteratively many times on top of the previous output until it converges - but I didn't actually try to implement it.</p>
<p>Yes, each layer runs on top of the previous layer's output. E.g. the second layer works on entries after the first layer has done its job - because the first layer is a little bit context-aware (it takes into account the previous and next word) and is capable of distinguishing some numeric cases better than simple regular expressions. It even managed to handle some year vs 4-digit-number cases which I missed in general.</p>
<p>I didn't check scores for each layer separately because they mainly deal with different rows and none of them individually is capable of processing all entries.</p>
<p>First, congratulations to the top 3 teams!</p>
<p>Finishing 4th probably is not the best thing what could happen. But... not the worst either. And competition itself was interesting, not a typical machine learning competition. Thanks to organizers!</p>
<p>I started to look at this competition only week before the end, so I didn't have much time to build something big and complicated. </p>
<p>My solution is very simple and basically consists of 3 text processing layers - each with different approach and tasks.</p>
<p><strong>1. layer</strong> - statistical approach. I collected statistics from the training data about possible transformations for each word, taking into account also its preceding and following words. And I calculated a "confidence" level for each such transformation (depending on how many other alternatives were possible, what the counts of examples for each alternative were, etc).
If the statistical model was "confident enough" about the needed transformation I used it. This layer mostly handled plain text and common transformations like "dr" to "doctor" etc.</p>
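<p>A rough sketch of building such a statistics table with confidences; the exact context key and confidence rule here are illustrative:</p>
<pre><code>from collections import defaultdict, Counter

def build_stats(sentences):
    """sentences: lists of (before, after) token pairs from the training data."""
    stats = defaultdict(Counter)
    for sent in sentences:
        padded = [("", "")] + sent + [("", "")]
        for i in range(1, len(padded) - 1):
            prev_w, (before, after), next_w = padded[i - 1][0], padded[i], padded[i + 1][0]
            stats[(prev_w, before, next_w)][after] += 1
    return stats

def lookup(stats, key, min_count=3, min_share=0.9):
    counts = stats.get(key)
    if not counts:
        return None
    after, n = counts.most_common(1)[0]
    if n >= min_count and n / sum(counts.values()) >= min_share:   # "confident enough"
        return after
    return None
</code></pre>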
<p><strong>2. layer</strong> - pattern based approach. Or, simply speaking, regular expressions.
This layer handled data with known format - dates, times, numbers, phones, URLs.</p>
<p><strong>3. layer</strong> - ML approach. I used several LightGBM models for deciding on ambiguous cases, where the 1st layer was not confident enough about the solution and the 2nd layer couldn't help either.
This layer mostly made binary decisions between alternatives from the 1st layer, like whether "x" transforms to "x" or "by" and whether "-" is "-" or "to", etc.</p>
<p>Lack of time was the biggest problem for me - now I see that I've missed some trivial cases (which could have been captured if I had spent more time analyzing the data), like the fact that 4 digits could actually be a number, not a year, and so on.</p>
<p>To save time I also used an overly simple local validation and took some decisions based on the public LB (which was only 1% of the data!). But - real programmers test in production and real data scientists validate on test data ;)</p>
Text Normalization Challenge - English Language | Seq2Seq Solution (54th Place) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Text Normalization Challenge - English Language
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our approach is as follows:</p>
<p>Rather than using the provided class variable, we created our own regex-based categorization. The idea being that several classes are quite similar to each other and the same seq2seq model can learn the patterns present in them; for example, ordinal and cardinal are not very different and the underlying patterns can be easily learnt by a single model. Here are the categories and the corresponding normalization strategies and accuracy scores:</p>
<p>Numerical: we captured all the tokens which either matched the following regex "[A-Za-z0-9].[a-z]" or contained digits. So we had most of the dates, ordinal, cardinal, money, time, digit, telephone and fraction tokens in this category. We trained a character-level 3-layer BLSTM encoder-decoder model to map all the before tokens to after tokens in this category. Our local CV accuracy for this category was 99.2%</p>
<p>Letters: all tokens not in numeric, all caps, not containing "AEIOUY" but containing only alphabets or "." or space belong to this category. We trained a character-level 3-layer BLSTM encoder-decoder model to map all the before tokens to after tokens in this category. Our local CV accuracy for this category was 98.42%</p>
<p>Punctuation: All the token with class as "PUNCT" were put in this category except "-" and ":". The normalization strategy was to make after same as before token. The accuracy for this category was 100%</p>
<p>Clean words: I used two text corpora from the nltk library, the Brown corpus and the Stanford treebank, to get a set of 50K clean words. We removed "i", "-", ":" and "no" from this corpus because these tokens have multiple meanings, so we wanted to handle those tokens with a model. All tokens which were part of this set were put in the clean category. The normalization strategy was to simply use the before token itself as the after. The local CV accuracy was 99.94%</p>
<p>Electronic: to make the electronic category we used the following regex: "[A-Za-z].[a-z]+".
We didn't use a model to normalize this category, rather just a rule-based method. The accuracy was 79.23%</p>
<p>Proper nouns: we used the following regex to categorize proper nouns: "[A-Z][a-z]+"
The idea is to let the model decide how to interpret a token like "Abc": as "Abc", "a b c" or "Abcdef".
Our seq2seq model gave 99.41% accuracy in this category</p>
<p>Rest: All tokens which do not fall in any other categories falls under this category. We trained another seq2seq model for this category and got 99.21% accuracy.</p>
<p>All tokens were appended with the previous two tokens and the next two tokens for the seq2seq models. To keep the size manageable I restricted the final character sequence length to 10 + len(before token) + 10. </p>
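<p>A small sketch of that input construction; the separator characters and the way the context is truncated are assumptions:</p>
<pre><code>def build_source(tokens, i, context_chars=10):
    """Character sequence for token i: truncated left context + the token + truncated right context."""
    left = " ".join(tokens[max(0, i - 2):i])[-context_chars:]
    right = " ".join(tokens[i + 1:i + 3])[:context_chars]
    return left + "<" + tokens[i] + ">" + right      # '<' and '>' mark the token to be normalized

# example
tokens = ["born", "on", "21", "January", "2014", "."]
src = build_source(tokens, 2)                        # -> 'born on<21>January 20'
</code></pre>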
<p>We started the competition late and basically devoted only 5-6 days to developing the approach and training the models, but saw that with more epochs the accuracy continued to increase, so we probably should have spent more time on this one.</p>
<p>Our key learning here was that training on similar tokens speeds up the training compared to handling all tokens in one go. </p>
<p>This was my first competition where I used a seq2seq model and I thoroughly enjoyed creating the entire pipeline and the models. Hope this approach will help others get started with this type of challenge.</p>
<p>Regards</p> |
Text Normalization Challenge - Russian Language | 2nd place solution - fully convolutional network | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Text Normalization Challenge - Russian Language
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My solution is a fully convolutional network, made with FAIR Sequence-to-Sequence Toolkit <a href="https://github.com/facebookresearch/fairseq-py">https://github.com/facebookresearch/fairseq-py</a></p>
<p>I used en-fr architecture described in <a href="https://arxiv.org/abs/1705.03122">https://arxiv.org/abs/1705.03122</a></p>
<p>Data preparation was the same as in the reference seq2seq RNN <a href="https://arxiv.org/abs/1611.00068">https://arxiv.org/abs/1611.00068</a></p>
<p>I used 8 files from full Russian data set <a href="https://github.com/rwsproat/text-normalization-data">https://github.com/rwsproat/text-normalization-data</a> including all non-self tokens and 10% of self tokens. Total size of training set was about 5M examples.</p>
<p>The toolkit works on byte-pair encodings (BPE). I got the best results when using a 20K BPE dictionary for source items, taking care not to include any digits in the BPE. For the destination dictionary I included all tokens with frequency >2; the total size of the target dictionary was about 2K BPE tokens.</p>
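<p>As a small illustration of that frequency cut-off for the target dictionary (file name and tokenization below are placeholders, not the author's preprocessing script):</p>
<pre><code>from collections import Counter

counts = Counter()
with open("train.after.txt", encoding="utf-8") as f:  # placeholder: target-side training text
    for line in f:
        counts.update(line.strip().split())

# keep every target token seen more than twice (~2K entries in the author's setup)
target_dict = sorted(tok for tok, c in counts.items() if c &gt; 2)
</code></pre>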
<p>The model was trained for 17 hours (13 epochs) on a single Tesla V100 (AWS p3.2xlarge instance) and did not converge fully as I was running out of time.</p>
<p>No post-processing was used on model output.</p>
<p>Some thoughts</p>
<p>The most surprising part of this competition is that after spending all my time optimizing the model on English data (mostly BPE dictionary experiments) I got a 0.9890 score on the English data set, while the only Russian model got 0.9928.</p>
<p>The majority of errors I saw when doing error checking were due to wrong transliteration, so that could be an easy and fast way to improve accuracy.</p>
<p>Compared with the reference RNN, training set size matters for convolutional models: I got a score increase of 0.9875 -> 0.9888 when retraining the English model from the best checkpoint with more data (3M examples -> 5.5M examples).</p>
Text Normalization Challenge - Russian Language | 3rd place - rule based system | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Text Normalization Challenge - Russian Language
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used statistical approach proposed by @ZFTurbo at the beginning of the competition with the following upgrades:</p>
<ol>
<li>Looking for <strong>5 word sequences</strong> from the test set - 2 words before and 2 after</li>
<li>Skipping <strong>punctuation</strong>, <strong>dashes</strong> and some <strong>short words</strong> with almost unambiguous values (e.g. "по", "англ", "ее")</li>
<li>Parsing <strong>dates</strong> in form "11 января 2017 года" or "02.10.15 г."</li>
<li>Parsing <strong>URLs</strong> for "easy" cases</li>
<li>Parsing <strong>numbers</strong> like "978-5-104935-25-2" and using the <em>num2words</em> package for all other cases (see the sketch after this list)</li>
<li>Using custom dictionary for <strong>endings</strong> like "$" or "км/ч" </li>
<li>Custom <strong>transliteration</strong> was my biggest pain. First I tried to use phonetic transliteration from <em>nltk.corpus.cmudict</em> package and map it directly to Russian (rupabet). But due to limitations of CMU dictionary (only 134,000 words) it did not work out well. Then I tried <em>transliterate</em> package as a base and scored around 15% accuracy on the train set. After adding 40+ rules manually on top of that I achieved 50% accuracy. Had to use some language expertise here :)</li>
</ol>
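<p>For reference, a minimal example of the <em>num2words</em> usage mentioned above; the Russian locale handles the basic cardinal spelling, while case/gender endings and codes like ISBNs still need the custom rules described in the list.</p>
<pre><code>from num2words import num2words

print(num2words(2017, lang="ru"))   # e.g. 'две тысячи семнадцать'
print(num2words(25, lang="ru"))     # e.g. 'двадцать пять'
</code></pre>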
<p>Code is here: <a href="https://github.com/ppleskov/Text-Normalization-Challenge-Russian-Language/blob/master/rus_base.ipynb">https://github.com/ppleskov/Text-Normalization-Challenge-Russian-Language/blob/master/rus_base.ipynb</a></p> |
Text Normalization Challenge - Russian Language | 6th Place Solution: Differentiable Neural Computers | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Text Normalization Challenge - Russian Language
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was my first Kaggle attempt so I wanted to share my solution and get some suggestions.
The approach that my team used was to train a seq2seq model using a memory augmented neural network.</p>
<p>The model had two layers:</p>
<ol>
<li><p>The XGBoost Layer: The only job of this layer was, given a word and 2 context words to the left and right, to classify whether the word needs to be changed or not. If the word is classified as changed, it is passed to the second layer (see the sketch after this list).</p></li>
<li><p>The Seq2seq Layer: The Seq2seq layer used a memory augmented neural network as its core architecture. The model takes as input a sequence of characters, same as in the seq2seq approach of (<a href="https://arxiv.org/ftp/arxiv/papers/1611/1611.00068.pdf">https://arxiv.org/ftp/arxiv/papers/1611/1611.00068.pdf</a>) and outputs the normalized words one by one.</p></li>
</ol>
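<p>A hedged sketch of such a first-layer "needs normalization?" classifier is shown below; the hand-crafted features and toy data are purely illustrative, since the write-up does not describe the exact feature engineering.</p>
<pre><code>import numpy as np
import xgboost as xgb

def featurize(window):
    """window = (prev2, prev1, token, next1, next2) as strings."""
    feats = []
    for w in window:
        feats += [len(w), int(w.isdigit()), int(w.isupper()), hash(w) % 10000]
    return feats

# toy examples: label 1 means before != after (token needs normalization)
windows = [("", "", "2017", "году", ""), ("", "в", "городе", "", ""),
           ("", "", "км/ч", "", ""), ("он", "сказал", "что", "", "")]
labels = [1, 0, 1, 0]

clf = xgb.XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(np.array([featurize(w) for w in windows]), np.array(labels))
needs_seq2seq = clf.predict(np.array([featurize(("", "", "1999", "год", ""))]))
</code></pre>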
<p>The model used Tensorflow and was trained for 4 epochs on a Tesla K80 Google Cloud GPU using the Google free tier. We could not complete the hyperparameter optimization for the seq2seq layer due to unavailability of resources, which is probably the reason we dropped from 3rd position on the public to 6th position on the private leaderboard. We had to use the exact same parameters as specified in the original research paper.</p>
Text Normalization Challenge - Russian Language | N-gram works. Even for non-Russian Kagglers | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Text Normalization Challenge - Russian Language
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. Not really. It does not work as well as it does for Ru. What I learned from the forum is that there are a lot of errors in the dataset, so I believe using n-grams works well for Ru.</p>
<p>:)</p>
<p>Sorry about that. </p>
<p>Congrats everyone and thanks to Kaggle and Google!</p>
<p>Our team members are all non-Russian speakers. We used n-grams and that's all we have:
<a href="https://github.com/Shujian2015/kaggle-text-norm-ru">https://github.com/Shujian2015/kaggle-text-norm-ru</a></p>
<p>Please be aware that this code only applies to Ru competition. </p> |
The 2nd YouTube-8M Video Understanding Challenge | 1st place solution summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 2nd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Zhou,
Unfortunately we did not try quantized inference, so I cannot provide any numbers </p>
<p>Hi Wei,
Which implementation of NetVLAD are you using?
Ones from <a href="https://github.com/miha-skalic/youtube8mchallange">https://github.com/miha-skalic/youtube8mchallange</a> or some other? Models from model_configs.xlsx in that repo should be able to achieve 0.87, even on a single GPU.</p>
<p>In regards to the TF OOM error - we had some unsolved problems ourselves, but not on a single GPU. I'm not sure I can help you with this (maybe try different TF/CUDA/driver versions).</p>
<p>This is from the solution paper. Full paper will be released at a later time.</p>
<ol>
<li><p>LeCun, Y.A., Bengio, Y., Hinton, G.E.: Deep learning. Nature (2015)</p></li>

<li><p>Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (2012)</p></li>

<li><p>Graves, A., Mohamed, A., Hinton, G.: Speech recognition with deep recurrent neural networks. ICASSP (2013)</p></li>

<li><p>Ng, J.Y.H., Hausknecht, M., Vijayanarasimhan, S., Vinyals, O., Monga, R., Toderici, G.: Beyond short snippets: Deep networks for video classification. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2015)</p></li>

<li><p>Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: YouTube-8M: A large-scale video classification benchmark. CoRR abs/1609.08675 (2016)</p></li>

<li><p>Miech, A., Laptev, I., Sivic, J.: Learnable pooling with context gating for video classification. CoRR abs/1706.06905 (2017)</p></li>

<li><p>Wang, H., Zhang, T., Wu, J.: The monkeytyping solution to the YouTube-8M video understanding challenge. CoRR abs/1706.05150 (2017)</p></li>

<li><p>Li, F., Gan, C., Liu, X., Bian, Y., Long, X., Li, Y., Li, Z., Zhou, J., Wen, S.: Temporal modeling approaches for large-scale YouTube-8M video understanding. CoRR abs/1707.04555 (2017)</p></li>

<li><p>Chen, S., Wang, X., Tang, Y., Chen, X., Wu, Z., Jiang, Y.: Aggregating frame-level features for large-scale video classification. CoRR abs/1707.00803 (2017)</p></li>

<li><p>Skalic, M., Pekalski, M., Pan, X.E.: Deep learning methods for efficient large scale video labeling. CoRR abs/1706.04572 (2017)</p></li>

<li><p>Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015). Software available from tensorflow.org</p></li>

<li><p>Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the Inception architecture for computer vision. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2016)</p></li>

<li><p>Hochreiter, S., Schmidhuber, J.: Long Short-Term Memory. Neural Computation (1997)</p></li>

<li><p>Cho, K., van Merrienboer, B., Gülçehre, Ç., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR abs/1406.1078 (2014)</p></li>

<li><p>Laptev, I., Marszałek, M., Schmid, C., Rozenfeld, B.: Learning realistic human actions from movies. In: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR (2008)</p></li>

<li><p>Wang, H., Schmid, C.: Action recognition with improved trajectories. In: Proceedings of the IEEE International Conference on Computer Vision (2013)</p></li>

<li><p>Wang, H., Ullah, M.M., Klaser, A., Laptev, I., Schmid, C.: Evaluation of local spatio-temporal features for action recognition. BMVC 2009 - British Machine Vision Conference (2009)</p></li>

<li><p>Jégou, H., Douze, M., Schmid, C., Pérez, P.: Aggregating local descriptors into a compact image representation. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2010)</p></li>

<li><p>Perronnin, F., Dance, C.: Fisher kernels on visual vocabularies for image categorization. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2007)</p></li>

<li><p>Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., Sivic, J.: NetVLAD: CNN architecture for weakly supervised place recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (2018)</p></li>

<li><p>Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. In: NIPS Deep Learning and Representation Learning Workshop (2015)</p></li>

<li><p>Ruppert, D.: Efficient estimations from a slowly convergent Robbins-Monro process. Technical report, Cornell University Operations Research and Industrial Engineering (2018)</p></li>

<li><p>Han, S., Mao, H., Dally, W.J.: Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR abs/1510.00149 (2015)</p></li>
</ol>
<p>sorry, the link expired. Fixing it now.</p>
<p>edit: fixed</p>
<p>Code is now available:
<a href="https://github.com/miha-skalic/youtube8mchallange">https://github.com/miha-skalic/youtube8mchallange</a></p>
<p>First off: Congratulations to everyone who participated and was burning their GPUs trying to squeeze as much performance into 1GB as possible. Personally I would like to thank David for all the effort, time and resources he put into this competition, carrying out all of our crazy ideas.</p>
<p><strong>TLDR</strong>: Our final model was a combination of 9 submodels belonging to 4 model families - NetVLAD, Deep Bag of Frames (DBoF), Fisher Vector (FV) and Recurrent Neural Networks (RNNs) - with contributions weighted based on test set performance. To make things work efficiently we used (multilayer) distillation, 8-bit partial weight quantization, exponential moving averaging of weights and "smarter" inference-time frame sampling.</p>
<h1>Models and Distillation schema</h1>
<p>We did not use any fancy new architectures for our models. The four model families we used (NetVLAD, FV, RNNs and DBoFs) were all used in the first year's competition. The models were, however, adjusted due to the competition constraints. We found the architectures provided by Miech et al. (<a href="https://arxiv.org/abs/1706.06905">https://arxiv.org/abs/1706.06905</a>) especially useful. The following figure shows model scores and the flow of distillation training:</p>
<p><img src="https://serving.photos.photobox.com/80844715f45b743d28b7d4ad3c3afcafbe48185f72e3fba054c5dbc6dba959e2bb3bf70c.jpg" alt="enter image description here">
*Scores in architecture figure are public leaderboard equivalent scores. Values were obtained through local validation which had a consistent offset to public leaderboard scores. The values are collected from quantized models.</p>
<h1>Tricks</h1>
<h2>1. Distillation</h2>
<p>Distillation was done in a similar way as in the work of Wang et al. (<a href="https://arxiv.org/abs/1706.05150">https://arxiv.org/abs/1706.05150</a>) using soft labels - a mixture of ground truth and teacher model predictions. As a teacher model we used an ensemble of 3 (or, in the case of the 2nd layer model, 6) models. For student networks we used the same family of networks as the teacher networks. This means that for a student NetVLAD network we would use 3 teacher NetVLADs. The idea was that similar networks could learn similar patterns from data - thus giving better results. This might not be entirely true, since in some of our experiments with cross-family training the students performed as well as ones trained on the same family while improving ensemble diversity. In general, distillation allowed us to train lighter versions of models with better performance.
For NetVLAD, we chose 2nd layer distillation because of the positive impact it had on the overall ensemble score (+0.001) even though the individual models produced by it were not better than the 1st level distillation.</p>
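<p>A minimal TensorFlow 1.x sketch of such a soft-label distillation loss; the mixing weight and tensor shapes below are placeholders rather than the values used by the team.</p>
<pre><code>import tensorflow as tf

alpha = 0.5
labels = tf.placeholder(tf.float32, [None, 3862])          # ground-truth labels
teacher_preds = tf.placeholder(tf.float32, [None, 3862])   # averaged teacher ensemble output
student_logits = tf.placeholder(tf.float32, [None, 3862])  # student model output

# target = mixture of ground truth and teacher predictions
soft_labels = alpha * labels + (1.0 - alpha) * teacher_preds
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=soft_labels,
                                            logits=student_logits))
</code></pre>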
<h2>2. Quantization</h2>
<p>To minimize the weight of the models we cast them into an 8-bit encoding (cutting weight size down to 1/4) in addition to storing the centroids. To not lose too much performance we kept the graph nodes in the default float32 format, and at inference time the variables are cast back to float32. We limited quantization to variables with more than 1700 elements. This means that the weights of a typical fully connected layer would get quantized while batch norm factors wouldn't. We used uniform min-max quantization, since other methods (such as quantile based quantization) did not work well.</p>
<h2>3. Inference time sampling</h2>
<p>During training, for models like FV, NetVLAD and DBoF, a collection of 300 frames was selected based on random sampling with replacement. During test time we ensured that each frame would be selected at least once and the remainder of the frames were chosen at random. This can be considered a form of data augmentation.</p>
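<p>A small numpy sketch of that test-time sampling, assuming the video has at most 300 frames (longer videos would additionally need sub-sampling):</p>
<pre><code>import numpy as np

def sample_frames(frames, target_len=300):
    """Every available frame is used at least once; remaining slots are random repeats."""
    n = len(frames)
    idx = np.arange(n)
    if n &lt; target_len:
        extra = np.random.choice(n, target_len - n, replace=True)
        idx = np.concatenate([idx, extra])
    np.random.shuffle(idx)
    return frames[idx[:target_len]]

frames = np.random.randn(120, 1152)   # a 120-frame video of rgb+audio features
batch = sample_frames(frames)         # shape (300, 1152)
</code></pre>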
<h2>4. In model averaging</h2>
<p>To improve the performance of a single model we used in-model averaging of weights over training time. We let the model train to convergence and then started averaging. The performance improvement was significant for our smaller models (DBoF and RNNs, 0.002-0.003), while for bigger models (NetVLAD and FV) we did not see improved predictions. We applied exponential decay averaging as well as equal weight averaging - both of them gave similar boosts, but we did not detect an increase in performance when combining the two averages.</p>
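<p>Both variants boil down to a running average over checkpointed weights; a framework-agnostic sketch (weights are assumed to be lists of numpy arrays or plain floats):</p>
<pre><code>def update_equal_average(avg_weights, new_weights, n_seen):
    """Equal-weight running mean over checkpoints collected after convergence."""
    return [(a * n_seen + w) / (n_seen + 1) for a, w in zip(avg_weights, new_weights)]

def update_ema(avg_weights, new_weights, decay=0.999):
    """Exponential moving average variant."""
    return [decay * a + (1 - decay) * w for a, w in zip(avg_weights, new_weights)]
</code></pre>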
<h1>Train and testing</h1>
<p>For training we used all samples except for those in 800 validation TF records. The 800 randomly selected record files (~5% of all data) were used to monitor training, for model selection and for weighting of the final meta-model. We observed a very consistent 0.00207 offset between local validation and the public leaderboard, so we used local validation for all testing. The test set turned out to be sufficiently big to avoid overfitting. Training on all data with the 800 records included did not improve the final score for us.</p>
<p>Training was performed on single GPUs, taking between 1.5-3.5 days per model. Approximately 60 GPU-days would be required to train all models. We varied batch size, number of clusters, and number of hidden layers between models within a given family.</p>
<p>Random search was performed on a local system to determine the final weights of the ensemble. We observed a range of roughly 0.0050 in GAP score between taking a simple weighted average of models and using the optimized weights found by random search over 500 iterations.</p>
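<p>A sketch of such a random search over blend weights; <code>val_preds</code> is assumed to be an array of shape (n_models, n_samples, n_classes) and <code>gap_score</code> a local implementation of the competition metric (both placeholders).</p>
<pre><code>import numpy as np

def random_search(val_preds, val_labels, gap_score, n_iter=500, seed=0):
    rng = np.random.RandomState(seed)
    best_w, best_gap = None, -1.0
    for _ in range(n_iter):
        w = rng.dirichlet(np.ones(val_preds.shape[0]))   # random weights summing to 1
        blend = np.tensordot(w, val_preds, axes=1)       # weighted average of predictions
        score = gap_score(blend, val_labels)
        if score &gt; best_gap:
            best_w, best_gap = w, score
    return best_w, best_gap
</code></pre>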
<p>A detailed report/paper will be published at a later time in addition to making the code available.</p>
<p>It took us some time to figure out these kinds of things ourselves. In general I find the TF documentation hard to follow and not very descriptive - thus we applied a trial and error approach.</p>
<pre><code># Assumes `in_models` (list of checkpoint prefixes), `weights` (per-model blend
# weights) and `save_folder` are defined by the surrounding script.
import os
import shutil
import logging

import tensorflow as tf

common_input = tf.placeholder(tf.float32, shape=(None, 300, 1152), name="CommonIn")
common_frames = tf.placeholder(tf.int32, shape=(None,))

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    # import every sub-model into its own variable scope and restore its weights
    for i, in_model in enumerate(in_models):
        with tf.variable_scope("model_{}".format(i)), tf.device("/cpu:0"):
            saver = tf.train.import_meta_graph(in_model + ".meta", clear_devices=True)
            logging.info("restoring variables from " + in_model)
            saver.restore(sess, in_model)

    # rewire every sub-model input onto the shared placeholders
    input_tensor = tf.get_collection("input_batch_raw")
    num_frames_tensor = tf.get_collection("num_frames")
    predictions_tensor = tf.get_collection("predictions")
    for g_inpt in input_tensor:
        tf.contrib.graph_editor.connect(common_input, g_inpt)
    for g_inpt in num_frames_tensor:
        tf.contrib.graph_editor.connect(common_frames, g_inpt)

    # weighted sum of the sub-model predictions
    final_out = predictions_tensor[0] * weights[0]
    for sumodel_out, wfactor in zip(predictions_tensor[1:], weights[1:]):
        final_out = final_out + sumodel_out * wfactor

    # expose a single input/output collection for the combined inference graph
    tf.get_default_graph().clear_collection("input_batch_raw")
    tf.add_to_collection("input_batch_raw", common_input)
    tf.get_default_graph().clear_collection("num_frames")
    tf.add_to_collection("num_frames", common_frames)
    tf.get_default_graph().clear_collection("predictions")
    tf.add_to_collection("predictions", final_out)

    # save only the variables that were actually restored
    os.makedirs(save_folder)
    uninit = set(sess.run(tf.report_uninitialized_variables()))
    used_vars = [v for v in tf.global_variables() if v.name.split(':')[0] not in uninit]
    saver = tf.train.Saver(used_vars)
    saver.save(sess, os.path.join(save_folder, "inference_model"))

    ref_falgs = os.path.join(os.path.dirname(in_models[-1]), "model_flags.json")
    shutil.copy(ref_falgs, save_folder)
    logging.info("We are done!")
</code></pre>
<p>We used combination of several GPUs (titanV and 1080Ti-s), simultaneously we used up to 12 GPUs for training. Importantly, we also teamed up early and started executing and adjusting our plans early. </p>
<p>The main goal was to keep as much as possible in float32 format so as not to lose precision. But some variables (such as fully connected "input X output" matrices) were too big, thus we quantized only those.</p>
<p>Code teaser is below :P</p>
<pre><code># Assumes `model` (checkpoint prefix), `output_model`, `transform_type` and
# `min_elements` (e.g. 1700) are defined by the surrounding script.
from functools import reduce

import numpy as np
import tensorflow as tf
from scipy.cluster.vq import vq
from tqdm import tqdm

cast_variables = {}      # variables that will be quantized
load_variables = {}      # variables that will be loaded as they are
name_to_tensor_map = {}

xsaver = tf.train.import_meta_graph(model + ".meta", clear_devices=True)
fetch_vars = tf.global_variables()
for c_var in fetch_vars:
    name_to_tensor_map[c_var.op.name] = c_var
    try:
        n_elements = reduce(lambda x, y: x * y, c_var.shape.as_list())
    except:
        n_elements = 1
    if (n_elements &gt; min_elements) and (c_var.dtype.as_numpy_dtype == np.float32):
        cast_variables[c_var.op.name] = tf.contrib.framework.load_variable(model, c_var.op.name)
    else:
        load_variables[c_var.op.name] = tf.contrib.framework.load_variable(model, c_var.op.name)

ge = tf.contrib.graph_editor
loadings = []
for var_name, c_np in tqdm(cast_variables.items()):
    tf_var = name_to_tensor_map[var_name]
    if transform_type == "quant_uniform":  # other transform branches omitted from this teaser
        # uint8 codes plus a 256-entry float32 codebook per variable
        quant_var = tf.Variable(tf.zeros(c_np.shape, tf.uint8),
                                dtype=tf.uint8,
                                name=var_name + "_quants")
        space_var = tf.Variable(tf.zeros(256, tf.float32),
                                dtype=tf.float32,
                                name=var_name + "_space")
        # dequantize at inference time: look up the float32 value for each uint8 code
        output = tf.gather(space_var, tf.cast(quant_var, tf.int32))
        ge.swap_inputs(tf_var.value(), output)

        mat_shape = c_np.shape
        c_np = c_np.flatten()
        space = np.linspace(c_np.min(), c_np.max(), num=256)  # uniform min-max codebook
        quants = vq(c_np, space)[0].reshape(mat_shape)
        loadings.append([quant_var, quants.astype(np.uint8)])
        loadings.append([space_var, space.astype(np.float32)])

for var_name, c_np in load_variables.items():
    tf_var = name_to_tensor_map[var_name]
    loadings.append([tf_var, c_np])

sess = tf.Session()
for load_dest, np_val in loadings:
    load_dest.load(np_val, session=sess)

saver = tf.train.Saver(var_list=[x[0] for x in loadings])
saver.save(sess, output_model)
</code></pre>
<p>code is coming soon™</p>
<p>Depends on how you look at it. From another perspective the model had 15 student models and 9 models in final ensemble.</p> |
The 2nd YouTube-8M Video Understanding Challenge | 3rd Place Solution Sharing: NeXtVLAD | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 2nd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>When tensorflow is building the graph, I think the batch size is always 1. That is not exactly the batch size used in training.</p>
<p>Thanks for your concerns.</p>
<p>I have tried the binary cross entropy loss, it is worse than the softmax version. I also tried some other parameters(T = 1), it seems the ONE doesn't work very well to improve the ensemble result.</p>
<p>But the experimental results I have are still limited, it is hard to draw any conclusions right now. </p>
<p>That means T=3 is even worse than no distillation. In the default settings, the mixture weight of soft target is T^2, it seems we need to set it to be a much smaller value. </p>
<p>That is an interesting approach. Focus more on samples with larger losses. Can you also share your code with us? </p>
<p>The final submission is trained on train + validation, which provide about 0.003 GAP improvement in the leaderboard. More data always mean better performance : )</p>
<p>I also tried MoE at the beginning of the competition, and it showed some improvement (about 0.001 to 0.002 I suppose). Since I wanted to incorporate a technique, Knowledge Distillation with On-the-fly Ensemble, I needed the logit values for the ensemble. Another reason is that MoE has lots of parameters: an MoE with 4 experts and a 2048-dimensional input will have 9*2048*3862=71M parameters, which is almost the same as that of my final model.</p>
<p>The model performs well with no any data or label augmentation. Apparently, I can't push the model to limit with my current facilities. It will be fun to know whether other tricks will further improve the GAP or not. </p>
<p>Thanks for the reminder. Actually, if you check the papers from the first YouTube-8M competition, you can find that the final author list can differ from the participants; for example, <a href="https://arxiv.org/abs/1706.05150">https://arxiv.org/abs/1706.05150</a>
contains 3 authors but only 2 participants. So I think it should be fine.</p>

<p>Also, although they didn't join the competition, they provided me with the necessary training before the competition and stipends so that I could focus on the work.</p>

<p>The other two authors are my advisors for my Ph.D. They provided the RA position for me during the summer.</p>
<p>I didn't receive any notification via email too. But you can review or modify the paper in cmt3 website.</p>
<p>Firstly, I would like to thank the Kaggle and Google team for hosting this challenge and congratulate every participant for completing such a challenging competition. </p>
<p>During the competition, I devoted myself to designing a new network architecture to aggregate the frame-level features. Inspired by the work on ResNeXt, the final model, NeXtVLAD, turned out to be both effective and parameter efficient. Briefly speaking, the basic idea is to decompose a high-dimensional feature into a group of relatively low-dimensional vectors with attention before applying NetVLAD aggregation over time (a rough sketch of this idea is given below).
A single NeXtVLAD model with fewer than 80M parameters achieves a GAP score of 0.87846 on the private leaderboard. A mixture of 3 NeXtVLAD models results in 0.88722.</p>
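<p>The following is a rough numpy sketch of the decomposition idea only - dimension expansion, group-level attention and per-group soft cluster assignment before the VLAD residual aggregation. Shapes, weight names and the omission of batch norm, dropout and the final classifier are simplifications; the linked repository contains the real TensorFlow implementation.</p>
<pre><code>import numpy as np

def nextvlad_sketch(x, clusters, w_expand, w_att, w_assign, groups=8):
    M, N = x.shape                        # M frames, N-dim frame features
    K, D = clusters.shape                 # K clusters of per-group dimension D
    x_tilde = x.dot(w_expand)             # (M, lambda*N) dimension expansion
    att = 1.0 / (1.0 + np.exp(-x_tilde.dot(w_att)))        # (M, groups) attention
    assign = x_tilde.dot(w_assign).reshape(M, groups, K)    # cluster logits
    assign = np.exp(assign - assign.max(-1, keepdims=True))
    assign /= assign.sum(-1, keepdims=True)                 # softmax over clusters
    x_grouped = x_tilde.reshape(M, groups, D)               # split into low-dim groups
    w = att[:, :, None] * assign                            # (M, groups, K)
    vlad = np.einsum("mgk,mgd-&gt;kd", w, x_grouped) - w.sum((0, 1))[:, None] * clusters
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-6  # intra-normalization
    return vlad.reshape(-1)                                 # (K*D,) descriptor

# toy shapes: 300 frames, 1024-dim features, expansion 2, 8 groups, 64 clusters
M, N, lam, G, K = 300, 1024, 2, 8, 64
D = lam * N // G
rng = np.random.RandomState(0)
desc = nextvlad_sketch(rng.randn(M, N), rng.randn(K, D) * 0.01,
                       rng.randn(N, lam * N) * 0.01, rng.randn(lam * N, G) * 0.01,
                       rng.randn(lam * N, G * K) * 0.01, groups=G)
</code></pre>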
<p>The code is publicly available at: </p>
<p><a href="https://github.com/linrongc/youtube-8m">https://github.com/linrongc/youtube-8m</a></p>
<p>Submitted Paper: </p>
<p><a href="https://github.com/linrongc/youtube-8m/blob/master/eccv2018submission.pdf">https://github.com/linrongc/youtube-8m/blob/master/eccv2018submission.pdf</a></p>
<p>Presentation Slides: </p>
<p><a href="https://github.com/linrongc/youtube-8m/blob/master/ECCV2018_phoenix_lin_presentation.pdf">https://github.com/linrongc/youtube-8m/blob/master/ECCV2018_phoenix_lin_presentation.pdf</a></p>
<p>Time is really tight to write down all the details and clean up the code in just one week, especially for an individual participant. I will keep updating the guideline and paper.
Please let me know if you have any questions!</p> |
The 3rd YouTube-8M Video Understanding Challenge | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the top teams! Thanks Kaggle/Google team for hosting this competition! And thanks to all my teammates <a href="/srinath1993">@srinath1993</a>, <a href="/ahana91">@ahana91</a> and <a href="/jesucristo">@jesucristo</a> for their effort on this competition! I will go through some details of our journey in this competition.</p>
<p><strong>Standard frame-level models</strong><br>
We first tried out the models from the 1st and 2nd competition winning solutions, including different variations of NetVLAD, DBoF and LSTM. With standard training on the regular training set, the best result was similar to the frame-level DBoF baseline model on the leaderboard. (I think we joined the competition mid-July and it took us one month to achieve this. Before that we were stuck on a silly problem: we always used Excel to open the large predictions.csv file, but Excel would silently modify large cells and make the submission fail. That made us think we should keep segment_max_pred, the total number of segment outputs per entity, very low. Anyway, we figured it out after a long time.)</p>
<p><strong>Fine-tuning on validation set with segment labels</strong><br>
The next thing we tried that worked was fine-tuning on the labelled segments of the validation dataset. It generally gave us a ~0.04 improvement. Our training goes in two phases: in phase 1, we trained the model on the 1.4 TB regular training set; in phase 2, we fine-tuned the model on the validation set with segment labels. There are usually around 60K training steps in phase 1 and we only did around 2000~3000 steps of fine-tuning.
We found that more steps of fine-tuning led to worse results, probably because more fine-tuning steps make the model overfit the validation dataset, which is much smaller than the regular training dataset.</p>
<p><strong>Attention/Multi-attention model</strong><br>
In the regular training set, we have frames across the whole video. In the validation set and testing set, we have 5-frame segments. To bridge the gap between the regular training set and the validation set, we decided to formulate the problem as a multi-instance learning problem and use attention weights to select important frames in the video. In our model, each frame goes through an attention network to obtain the attention weight of that frame. We then pooled the frames based on the attention weights and used an MoE/logistic model to obtain the final prediction. We also used a gating mechanism in the attention network as shown in the paper (<a href="https://arxiv.org/pdf/1802.04712.pdf">https://arxiv.org/pdf/1802.04712.pdf</a>); a short sketch of this pooling is given below.
As our problem is a multi-class classification problem, we also considered that different high-level topics may require different attention networks to learn the weights. We therefore trained multiple (8 or 16 in the final submission) attention networks to emphasize different important frames in different topics of videos. Finally, we pooled the outputs from each attention network in our model. Our best individual model on the public LB is 0.772.</p>
<p><strong>Loss function and final ensemble</strong><br>
As the segments in the final testing set only come from 1000 classes, we gave those 1000 classes more weight in the loss function. Our final ensemble consists of 3 multi-attention models, 3 attention models, 2 DBoF models, 2 CNN models, 1 NetVLAD model and 1 LSTM model.</p>
<p><strong>What was tried but didn't work</strong><br>
We put a lot of effort into data augmentation and semi-supervised learning. In the beginning, we tried to use our best model to predict segments in the regular training dataset and chose the top segments as our new training set. We also tried to pseudo-label the testing dataset: we chose the top 200 segments from our best submission and fine-tuned the model based on that. Neither method worked out. One possible reason is that we did not blend them with the standard validation dataset during fine-tuning. How we label these segments and how we use them require more experiments.</p>
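<p>For reference, here is a minimal numpy sketch of the gated attention pooling referenced above (following Ilse et al., 2018); matrix sizes are illustrative, not the team's configuration.</p>
<pre><code>import numpy as np

def gated_attention_pool(H, V, U, w):
    """Pool a (frames, dim) matrix with softmax(w^T (tanh(VH) * sigmoid(UH))) weights."""
    gate = np.tanh(H.dot(V)) * (1.0 / (1.0 + np.exp(-H.dot(U))))  # (frames, d_att)
    scores = gate.dot(w)                                          # (frames,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                          # attention weights
    return alpha.dot(H)                                           # pooled segment feature

rng = np.random.RandomState(0)
H = rng.randn(5, 1152)   # a 5-frame segment of video+audio features
pooled = gated_attention_pool(H, rng.randn(1152, 128), rng.randn(1152, 128), rng.randn(128))
</code></pre>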
<p>The detailed technical write-up and Github repository will come soon. Thanks again all my teammates and I cannot wait to see the ideas and models from all top teams!</p> |
The 3rd YouTube-8M Video Understanding Challenge | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Overview</h1>
<p>I will first give an overview of our approach and explain each component in greater details. To begin with, we divide our models into <strong>video models</strong> and <strong>segment models</strong>. The job of video models is to propose plausible videos that might contain correctly labelled segments. The segment models are used to re-rank the proposed segments by the video models. For both models, we decided to measure mAP (mean average precision), mean positive recall and mean negative recall. Mean positive recall is defined to be the number of positive segments retrieved within the top 100k divided by the total amount of positive segments. Same for mean negative recall. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1221384%2Fdb5caaa8ff057f736c4ed7c3757ba9dc%2FEnsemble%20(1" alt="">.jpg?generation=1571178611116954&alt=media)</p>
<p>As shown in the diagram above, blue rectangles indicate regular models. Red rectangles indicate meta models trained using other models. Models in the purple area are trained with the YouTube-8M v2 dataset only. Models in the yellow area may be trained with both v2 and v3, or with v3 only. Green squares show the public leaderboard mAP. The orange square encloses candidates generated by the video model. The final submission is an average blend of all v3 models after a filtering process.</p>
<h1>Video Models and candidates generation</h1>
<p>Since video models are used to propose potential solutions, the only metric we measure for video models is positive recall. We decided to use last year's 1st place model (Skalic et al. 2018) as our video model due to its high positive recall (99.7%). In order to generate candidates, we predict the top 100k most plausible videos that might contain positive segments, then break down the videos into segments to form the candidate pool.</p>
<h1>Segment Models</h1>
<p>Segment models are used to re-rank the segments proposed by the video models. Therefore, we measure their mAP within the 100k segments proposed by the video model.
Here are the segment models that we used:</p>
<p>| Model | Public LB |
| ------------- |:-------------:|
| segment LSTM | 81.3 |
| group LSTM | 80.8 |
| NetVLAD_LF | 81.2 |
| netFV_LF | 81.0 |
| NetVLAD_RI_LF | 80.6 |
| transformer | 79.1 |
| transformer_AG | 80.9 |
| ConvS1_RI | 81.3 |
| Conv_LF_RI | 80.8 |
| AttConv_RI | 81.2 |
| ConvS1 | 81.4 |
| CCRL XGB | 81.3 |
| CCRL XGB2 | 80.8 |
| CCRL XGB3 | 80.9 |</p>
<p>In the table above, RI means the model is randomly initialized instead of being trained on YouTube-8M v2 data first. LF indicates late fusion. AG indicates that the model is an average of many randomly initialized runs. More information about these models will be given in the workshop paper.</p>
<h1>Post-processing filter</h1>
<p>For our final submission, we first average all the score predictions of the above models. After score averaging, we perform a filtering process to smooth out the predictions. The idea of filtering is to use the neighbouring segments' predictions for the current segment prediction. The filter we used is a size 3 kernel with value 1 in every dimension. We apply the filter over the time dimension and use a non-decreasing function on top. </p>
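<p>A small numpy sketch of that smoothing step; the specific non-decreasing function applied on top is not described in the write-up, so it is only noted as a comment here.</p>
<pre><code>import numpy as np

def smooth_scores(scores):
    """Apply a size-3 all-ones kernel over the time dimension of one video's segment scores."""
    padded = np.pad(scores, 1, mode="edge")
    filtered = padded[:-2] + padded[1:-1] + padded[2:]
    return filtered  # a non-decreasing function would then be applied on top

print(smooth_scores(np.array([0.1, 0.9, 0.2, 0.8])))
</code></pre>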
<p>| Method | Public LB |
| ------------- |:-------------:|
| average predictions | 84.3 |
| average predictions + filtering | 84.4 |</p>
<h1>Discussion</h1>
<p>We propose many high performing segment models and a novel learning framework with class as input, dubbed CCRL. More details of the models will be presented in the workshop paper and the code in Tensorflow 2.0 will be released soon! </p> |
The 3rd YouTube-8M Video Understanding Challenge | 24th place solution - fast simple naive RNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>1. Train RNN on only validation set</h3>
<p>I converted tfrecords into numpy array per <a href="https://www.kaggle.com/tbmoon/how-to-save-tfrecord-into-npy-too-slow">https://www.kaggle.com/tbmoon/how-to-save-tfrecord-into-npy-too-slow</a> to use pytorch.
Then I trained a 3-layers RNN only on validation set, there are 2 seperate GRUs for frame and audio data respectively, then concat them together and pass to a fully connected layer with softmax activation, this gave me 0.71 public lb. Also tried context gate, there is no much difference for this model.</p>
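<p>A minimal PyTorch sketch of this two-branch architecture; the hidden size and layer widths are assumptions, not the author's exact configuration.</p>
<pre><code>import torch
import torch.nn as nn

class TwoStreamGRU(nn.Module):
    def __init__(self, n_classes=1000, hidden=512):
        super().__init__()
        self.rgb_gru = nn.GRU(1024, hidden, num_layers=3, batch_first=True)
        self.audio_gru = nn.GRU(128, hidden, num_layers=3, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, rgb, audio):
        _, h_rgb = self.rgb_gru(rgb)        # h: (num_layers, batch, hidden)
        _, h_audio = self.audio_gru(audio)
        feat = torch.cat([h_rgb[-1], h_audio[-1]], dim=1)
        return self.fc(feat)                # softmax applied inside the loss

model = TwoStreamGRU()
logits = model(torch.randn(2, 5, 1024), torch.randn(2, 5, 128))  # 5-frame segments
</code></pre>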
<h3>2. Train RNN on train set and fine tune on validation set</h3>
<p>For the training set, I selected only samples with 1 label. To accelerate training, for each video I randomly chose 15 consecutive frames. I trained the RNN model on the train set for about 4 hours, then fine-tuned it on the validation set. This fine-tuned single 3-layer RNN model gave me lb 0.746. Another 4-layer RNN gave me lb 0.735. <br>
The total training time for each model is less than 6 hours on a single P100 GPU.
My final submission is the average ensemble of the 2 models, which is 0.753 on the public lb and 0.744 on the private lb.</p>
<h3>3. Submission</h3>
<p>I predicted every 5-second segment of the test set, and used the top 100000 scores of each class for the submission. The prediction takes 30 minutes on a single GPU.</p>
<h3>Transformer / multi-task experiments</h3>
<p>I replaced the RNN model with a 3-layer or 6-layer Transformer, which did not improve the score. Then I experimented with multi-task learning on the training set and validation set as in this paper (<a href="https://arxiv.org/abs/1901.11504">https://arxiv.org/abs/1901.11504</a>), but it was not helpful either.</p>
<p>I was also planning to experiment sequence to sequence learning with validation set, but was not able to complete the code before competition ends.</p> |
The 3rd YouTube-8M Video Understanding Challenge | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am not sure whether you understand the meaning of set A-C well.
There's a "The Peak Muffin Moment" topic in the notebook and you will see that the 'food' label also exists in the video-level labels. Once the 'food' is one of the video-level labels, then 'food' will not be regarded as a negative sample in my algorithm. </p>
<p>Congrats to all the winners and thanks google research for providing such an interesting video understanding challenge.
I would like to share my solution here. Scores shown in the following tables are all private scores, including the private MAPs, which are about 0.01 lower than scores on the public leaderboard.</p>
<h1>Solution Overview</h1>
<p>
We firstly pre-train base models using videos from 2018-data, then those models are fine-tuned on all the segments(2019 data) using a segment-level loss. Finally, a refinement inference strategy takes in both the video-level predictions and the segment-level predictions to obtain the refined segment-level predictions.</p>
<h2>Video-level Pre-train</h2>
<p>We combine the mixture structure (3rd place last year) with different video classification models.
<strong>Frame-level Model</strong>: Mix-[NeXtVLAD, nonlocal-LightNetVLAD, nonlocal-EarlyNetVLAD, GatedDBOF, SoftDBOF, NetFV, GRU].</p>
<p><strong>Video-level Model</strong>: Mix[ResNetLike]</p>
<p>We train models using the 2018 large-scale YouTube-8M video-level annotation data. The trained models were evaluated on last year's video label prediction task. As shown in the table below, most frame-level models with the mixture structure can achieve high scores on the leaderboard, and the simple equal-weight ensemble of these models can reach 0.88932 private GAP, which is a high score (although this does not consider the model size limitation). These single models are regarded as "Base Models" and were evaluated on this year's temporal localization task. Their MAP scores are shown in the second column. The Mixture-SoftDBOF achieves the best MAP, followed by the Mixture-GatedDBOF and the Mixture-NeXtVLAD model.</p>
<h2>Segment-level Fine-tune</h2>
<p><strong>Fine-tune Loss</strong>: For a certain segment, suppose A is the set of 1000 segment-level categories, B is the set of annotated segment categories, and C is the set of annotated video-level categories of the video in which the segment is located. The cross entropy function is denoted CE. We add some weak supervision information into our final loss function shown below. <em>α</em> is simply set to 1.0 in our experiments.
\( Loss = \sum_{i\in B} CE(p(i), L(i)) + \alpha \cdot \frac{\sum_{i\in (A \setminus C)} CE(p(i), 0)}{|A \setminus C|} \)</p>
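<p>A numpy sketch of this fine-tuning loss for a single segment; index sets and the toy prediction vector below are placeholders, and the sigmoid predictions <code>p</code> are assumed to be per-class probabilities.</p>
<pre><code>import numpy as np

def segment_loss(p, annotated, annotated_labels, video_classes, alpha=1.0, eps=1e-6):
    # full cross-entropy on the annotated segment classes (set B)
    ce = -(annotated_labels * np.log(p[annotated] + eps) +
           (1 - annotated_labels) * np.log(1 - p[annotated] + eps)).sum()
    # classes not in the video-level labels (A \ C) are weakly negative
    neg = np.setdiff1d(np.arange(len(p)), video_classes)
    neg_ce = -np.log(1 - p[neg] + eps).mean()
    return ce + alpha * neg_ce

p = np.random.rand(1000)
print(segment_loss(p, annotated=np.array([3]), annotated_labels=np.array([1.0]),
                   video_classes=np.array([3, 17])))
</code></pre>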
<p><strong>All Data</strong>: In order to utilize all the annotation data, we set two groups of experiments for each model. We first use 5/6 segment data as our training set and 1/6 data as the validation set to train models. Through the validation results, a good model step interval can be estimated when using full data as the training set. Finally we use the Stochastic Weight Averaging (SWA) technique to combine those models into a single one. On the one hand, training step interval estimation improves the tolerance of the model selection. On the other hand, SWA operation improves the robustness of the models and can gain higher scores.</p>
<h2>Refinement Inference Strategy</h2>
<h3>Basic Inference Strategy</h3>
<p>The basic version of the inference method creates 1000 minimum heaps for segment-level predictions. The predictions for each segment are pushed into the heap of the respective category. Once a heap size overflows the maximum threshold, the segment with the least predicted probability in the heap is popped. Finally, the segment classification predictions can be converted into the final temporal localization results by sorting the confidences in each heap.</p>
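<p>A small Python sketch of this bounded per-class heap (the threshold and segment id format are placeholders):</p>
<pre><code>import heapq

max_per_class = 100000
heaps = {c: [] for c in range(1000)}   # one min-heap per class

def push_segment(cls, score, segment_id):
    heap = heaps[cls]
    if len(heap) &lt; max_per_class:
        heapq.heappush(heap, (score, segment_id))
    elif score &gt; heap[0][0]:
        heapq.heapreplace(heap, (score, segment_id))   # drop the weakest segment

push_segment(3, 0.92, "vid123:45")
ranked = sorted(heaps[3], reverse=True)   # final ranking for class 3
</code></pre>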
<h3>Refinement Inference Strategy</h3>
<p>The basic inference strategy ignores the powerful instruction of global video information. In this part, we will utilize video-level predictions to improve the segment-level predictions. </p>
<p>Consider the fact that if an entity has very low confidence of appearing in a video, then it is also unlikely to appear in any of the segments of this video. Based on this consistency observation, we build a list of candidate labels for segment classification. The list of candidate labels is obtained from video labels predicted by pretrained models, and is quite effective in removing false positive predictions on video segments. </p>
<p><strong>Top rank k</strong></p>
<p>Our first thought is to select the top k predicting results on the video data. These selected classes consist of candidate categories, and are used to constrain the probable category scope for each segment.</p>
<p><strong>Confidence threshold to constrain</strong></p>
<p>The top k strategy is a good way to generate filters, but it ignores the diversity of categories between videos. For content-rich videos, it contains a large number of categories, while for a single-content video, the number of categories appears will be small. </p>
<p>So the main idea of our second strategy is to consider the confidence. If the prediction score is smaller than the threshold, then the related category won't be considered when predicting the inside segments.</p>
<p><strong>Video number constraint for each category</strong></p>
<p>The third idea is to simply limit the number of predictable videos for each category. We predict each video using video-level model, and then limits the predicted video number by confidence for each category. Only segments in these videos will be considered as the corresponding categories.</p>
<h2>Ensemble</h2>
<p>Finally, ensemble models can achieve great improvement. We finally choose Mix-NeXtVLAD, Mix-GatedDBOF, Mix-SoftDBOF, Mix-EarlyNetVLAD, and three kinds of Mix-ResNetLike Models for the final ensemble. Ratios we set are 3:3:3:1:1:1:1. More models for ensemble and other ensemble ratios may bring some improvement to the final MAP scores.</p>
<h2>Other things we tried but did not work so well</h2>
<ol>
<li>auto-annotation to achieve data augmentation</li>
<li>reranking by combining video-level and segment-level predicted confidence</li>
<li>fix params: fix some parameters to prevent over-fitting</li>
</ol>
<p>Sorry for my unclear illustration.</p>
<p>A\C means A setminus C (or A - C). We regard the classes in A\C as weakly negative, so the CE(p(i), 0) item appears in our loss calculation.</p>
<p><strong>The reason why we do this</strong>: The labels on the segments are insufficient (we may know there are (or are not) cats in some segments, but it is hard for us to know the existence of the other 999 classes). And we found that just using the segment-level annotations might bring a higher average predicted confidence for each segment. So we consider utilizing the video-level annotations of the videos that the segments are located in. </p>
<p><strong>Our basic assumption is: If there are no dogs in video1, segments of the video1 will not be.</strong></p>
<p>So for a certain segment, most categories in A\C can be weakly annotated as negative categories. We calculate the average cross entropy val among classes in A\C and make it a part of our final loss function. </p>
<p>Rough experiments we did: train Mix-NeXtVLAD and Mix-ResNetLike models with the new and old loss functions. The original inference strategy is used. Top k is set to 1000. </p>
The 3rd YouTube-8M Video Understanding Challenge | 3rd place solution sharing: A Deep Mixture Model with Online Distillation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>At first, I would like to thanks google research for providing another interesting video understanding challenge. This competition really provides fun to many of my weekends in the last 4 months. </p>
<p>Overall, my solution follows the widely-used system design: candidate generation and ranking. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F132211%2F38572e71d7518a50136e28e4fca386b5%2Fsystem.PNG?generation=1571208082982881&alt=media" alt=""></p>
<p>A quick offline analysis suggests that the top 20 topics (among 1000 topics) cover over 97% of the positive labels. The segment-level classifier is directly fine-tuned from the video-level classifier (with the same structure).</p>
<p>I found larger models generally perform better on the video dataset but quickly overfit the smaller segment dataset. In this competition, I tried another approach to increase model capacity by training multiple models. Our final model is a 2-layer mixture model with online distillation. Each MixNeXtVLAD model is a mixture of 3 NeXtVLAD models. So in total, we trained 12 NeXtVLAD models in parallel using 4 Nvidia 1080 Ti GPUs. The online distillation part can effectively prevent the whole model from overfitting the smaller dataset. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F132211%2F6d516e4eb35e009469a87a8176afca78%2Fmix_mix.PNG?generation=1571208477789758&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F132211%2F0c96b3aff80ffdaa1bbbb8a14a41fe5a%2Fmix.PNG?generation=1571209130114152&alt=media" alt=""></p>
<p>More details about the model will be included in the research paper and shared in this post once I finish the writing : )</p>
<p>If you are interested in the performance of models I have tried, following are the results:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F132211%2Fa082666359eb8ed9f0314a71f0d87a21%2Fresults.PNG?generation=1571705893788412&alt=media" alt=""></p>
<p>I used all the available data for training, including the validation set, because the performance on the local validation dataset is highly aligned with the public LB. </p>
The 3rd YouTube-8M Video Understanding Challenge | 5th place - Partial solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First off, I would like to thank the organizers for setting up another interesting competition. I hope that the Kaggle team will also do the right thing and clear the leaderboard of all the cheaters and competitors that broke the rules. </p>
<p>It has been a great pleasure working on this challenge with both Mikel and David. Unfortunately, we lacked time to go into details and explore "exotic" solutions.</p>
<p>Here I will describe part of the solution and Anokas will describe his part. We ended doing ranked sum ensemble for our final submission.</p>
<p>There are 3 parts to the solution presented here:
1. Video level network
2. 5 Frame network
3. Localization network</p>
<h2>Video level network - P_V</h2>
<p>This was basically based on last year's 1st place solution - <a href="https://github.com/miha-skalic/youtube8mchallenge">link</a>. Essentially, for each fragment we would factor in the prediction based on the whole video sequence.</p>
<h2>5 frame network - P_5</h2>
<p>We trained three models: 1x DBoF, 2x VLAD on sequences from 2nd year data, sampling 5 frames. Then we fine-tuned the model on annotated fragments from this year.</p>
<h2>Localization network – P_L</h2>
<p>This network took in a sequence of frames and a target label. The target label would be passed to an embedding layer and then concatenated with the sequence of frames. The concatenated sequence would then be processed by an LSTM to predict, for each frame, whether the target label applies. Non-annotated frames would be masked out.
The downside of this approach was that we needed to run inference 1000 times, once for each target label.</p>
<h2>Combining the 3 models</h2>
<p>Multiplying the probabilities (a weighted geometric mean) gave the best results.
Essentially, the score for each fragment-class combination, p(fc), was computed in log space as:<br>
p(fc) = log(P_V) + 2/3 log(P_5-VLAD1) + 2/3 log(P_5-VLAD2) + 2/3 log(P_5-DBoF) + log(P_L(*|c))<br>
From here on we would just sort fragments based on p(fc) and, for each class c, report the top fragments f.</p>
<p>Detailed solution will be provided as workshop submission.</p> |
The 3rd YouTube-8M Video Understanding Challenge | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks organizers for hosting this competition! Congratulations for all the top teams! I hope that you have enjoyed this challenge! 🎉🎉 🎉 With <a href="https://www.kaggle.com/daokouer">@daokouer </a> we had a great time exploring the interesting topic.</p>
<p>Here we will present briefly our solution for this challenge. For this temporal localization problem , we regarded it as video segment classification problem. We trained our models based on video segment feature with or without context information and predictions were made for each segments.</p>
<h2>Models</h2>
<p>We trained two types of the models: sequence model and frame level model. These two types of models made their decisions based on different parts of input.</p>
<h3>Sequence Modeling Model</h3>
<p>We used Transformer and BiGRU as our sequence models. The whole video feature was taken as input and predictions were made for each five frames. For sequence models we believed that they focused on long-term temporal dependency. </p>
<h3>Frame Level Model</h3>
<p>NeXtVLAD was used as the frame-level model. The frame-level model took exactly five frames as input and output one prediction. We believed that with a limited receptive field, the NeXtVLAD focused more on the static features of the segment.</p>
<h2>Model Pre-train</h2>
<p>The models above had a great number of parameters to learn, but we only had a few segment-level labels. We thus wanted to take advantage of the huge training set with video-level labels. We used an EM-like process to make use of the training set. We initialized our model <code>f</code> using the same method as the training process in the official baseline code. During the E-step, we estimated the segment labels of the training set using model <code>f</code>. During the M-step, we trained a new model <code>f</code> using the generated labels and fine-tuned it on the segment labels. We performed two EM iterations in our experiments. <br>
We also tried several multi-instance learning methods to make use of the video labels, e.g. we performed max pooling over the segment predictions and took it as the video prediction to calculate the loss, etc. But we did not find a MIL method that outperformed the EM-like method. </p>
<p>Please refer to workshop submission for detailed solution. </p> |
The 3rd YouTube-8M Video Understanding Challenge | 7th Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The code is now available <a href="https://github.com/ceshine/yt8m-2019/">on Github</a>. I'll probably do some code cleanup, but at the current state it should already be able to reproduce my results. Please let me know if you have run into any problems.</p>
<p>Thank you!</p>
<p>Thank you for your kind words!</p>
<p>Thank you!</p>
<p>The two video-level models took about 28 hours to train, as stated above. The context-agnostic segment classifiers each took about 30 minutes to train on my local GTX 1070. The context-aware segment classifiers each took about 90 minutes to train on a Tesla T4. (The exact time depends on the hyper-parameters used.) My best submission has 6 context-aware and 6 context-agnostic classifiers, so they took about 12 hours to train. </p>
<p>The time inference took is harder to estimate. I ensembled all the models on the fly at first, but at the very end decided to dump the predictions of each model to a <code>numpy.memmap</code> object and ensemble them later. The bottleneck is the data loading, so the time does not grow linearly. I can probably make and dump the predictions of the 12 models in under 6 hours (just a rough estimate).</p>
<p>The final ensembling takes around 25 minutes. The number of models in the ensemble has very limited effect on this final step.</p>
<p>Thanks!</p>
<p>Thank you!</p>
<h2>20191031 Update</h2>
<p>The cleaned codebase has been published at <a href="https://github.com/ceshine/yt8m-2019/">ceshine/yt8m-2019</a>. The models were reproduced with minor variation (still able to obtain 7th place).</p>
<h1>The Original Content</h1>
<p>This is going to be my first gold medal after more than 6 years of Kaggling. I’m very grateful for this great community on Kaggle, and all the friendship and support from some of the best people I’ve met online. Thank you to the sponsor and organizers for this very interesting competition. And thanks for the $300 GCP credit. It would have been impossible for me to reach this position without access to T4 and P100 GPUs.</p>
<p>I did not realize I had a chance at landing in the top 10 until exactly one week ago. It was very lucky that the context-aware segment classifier idea I chose to implement worked well enough. I spent the last week tuning hyper-parameters and model architectures. The public score improved from 0.79450 to 0.81247 just from that. There is still some low-hanging fruit left because of the time constraints, but I don’t think it would make a big difference with regard to ranking. </p>
<p>My approach is fairly simple. First, train a video-level model on the frame-level dataset. Then fine-tune the model on the segments dataset to create segment classifiers. There are two types of segment classifiers — context-aware and context-agnostic. The final predictions come from simply averaging the probabilities (I did not try other averaging methods) generated by these classifiers (with roughly a 1:1 mix of the two types). </p>
<h2>Video-level model</h2>
<p>Only some subsets of the frame-level training and validation datasets were used to train the base video-level model. They add up to just under 1TB. I’ll provide a list of the tfrecord files used in the code repository.</p>
<p>Only videos tagged with the 1000 relevant labels were used in training. This was mainly to speed up training. I did not try training with the full set of labels. </p>
<p>A random sample of 150 frames is drawn from a video when its length is larger than that.</p>
<p>I had two video-level models that were good enough at the end of the competition:</p>
<ul>
<li>A NeXtVLAD model with <code>n_clusters = 64</code> and <code>n_groups = 16</code>. Trained 200k steps with batch size 48 and slanted triangular learning rates (<code>max_lr = 3e-4</code>; see the schedule sketch after this list). Training took almost 15 hours with a single P100 GPU.</li>
<li>A context-gated dbof model with <code>frame_dim = 4096</code> and <code>video_dim = 2048</code>. Trained 100k steps with batch size 128 and slanted triangular learning rates (<code>max_lr = 3e-4</code>). Training took 13 hours with a single T4 GPU (this is a rough estimate because the instance was preempted several times). </li>
</ul>
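<p>For reference, a minimal sketch of the slanted triangular learning rate schedule mentioned in the list above, using the formulation from the ULMFiT paper; the <code>cut_frac</code> and <code>ratio</code> values are the paper's defaults and are only illustrative, not the exact values used for these models.</p>
<pre><code>def slanted_triangular_lr(step, total_steps, max_lr=3e-4, cut_frac=0.1, ratio=32):
    cut = int(total_steps * cut_frac)
    if step < cut:
        p = step / cut                                        # linear warm-up
    else:
        p = 1 - (step - cut) / (cut * (1 / cut_frac - 1))     # linear decay
    return max_lr * (1 + p * (ratio - 1)) / ratio

lr_at_10k = slanted_triangular_lr(10_000, 200_000)            # ~1.5e-4 during warm-up
</code></pre>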
<h2>Segment classification model</h2>
<p>The segments dataset was split into 8 folds (by tfrecord files). </p>
<p>In addition to the 5 seconds/frames labeled by humans, the previous and following <em>n</em> frames are also included in the training example. So for a labeled segment starting at time t, the actual segment fed to the model starts at <em>t-n</em> and ends at <em>t+4+n</em>. I found <em>n=3</em> worked best in early experiments and stuck with it. This particular choice probably caused some overfitting to the leaderboard. </p>
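<p>A minimal sketch of that window expansion, assuming <code>frames</code> is a <code>(video_length, feature_dim)</code> array of frame features:</p>
<pre><code>def expand_segment(frames, t, n=3):
    # labelled frames are t .. t+4; widen by n frames on each side,
    # clamped to the video boundaries
    start = max(t - n, 0)
    end = min(t + 5 + n, len(frames))
    return frames[start:end]
</code></pre>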
<p>The context-agnostic classifiers are just the video-level models fine-tuned on the segments datasets. </p>
<p>A context-aware classifier consists of three parts: a video encoder, a segment encoder, and a fully connected classifier. A video encoder is created by removing the final logistic (mixture-of-experts) layer of a video-level model. I tried different ways of creating segment encoders, and found that using the same method as the video encoder seemed to create the fewest complications. The encoded vectors are concatenated and fed to the fully connected classifier (one linear layer plus ReLU activation on top of the mixture-of-experts classifier).</p>
<p>The first 6 segments and the last 2 segments in the video features are dropped before being fed to the video encoder to reduce noise (I probably should have also done this when training the video-level models). As in the training of the video-level models, 100 ~ 200 random samples are drawn when the video is longer than that.</p>
<p>The weights of the video encoder are frozen when training (that is, they are not being updated). I tried fine-tuning the video encoder but did not find significant benefit from doing so. </p>
<p>The learning rate of the segment encoder is set to be half the learning rate of the fully connected classifier (a.k.a. discriminative learning rates).</p>
<p>I also did some negative label mining to speed up the convergence.</p>
<h2>Inference</h2>
<p>Just like in the training of the segment classifiers, the first 6 segments and the last 2 segments in a video are skipped, and a segment nominally starting at time <em>t</em> is expanded to start at <em>t-n</em> and end at <em>t+4+n</em>. </p>
<p>The probabilities were discretized from the 0.0–1.0 range into 10,000 buckets (0–9999). Regular pruning is performed to reduce memory usage. The most probable 100,000 segments for each class were put into the final submission.</p>
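<p>A sketch of the discretization and pruning step; <code>probs</code> and <code>scores</code> are assumptions about the data layout, not the actual inference code.</p>
<pre><code>import numpy as np

def discretize(probs, n_buckets=10_000):
    # map 0.0 .. 1.0 into integer buckets 0 .. 9999
    return np.minimum((probs * n_buckets).astype(np.int32), n_buckets - 1)

def top_k_segments(scores, k=100_000):
    # keep the k most probable segments for one class (assumes len(scores) > k)
    idx = np.argpartition(-scores, k)[:k]
    return idx[np.argsort(-scores[idx])]
</code></pre>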
<h2>Implementation</h2>
<p>I use PyTorch in the whole pipeline. The data is directly loaded from tfrecord files. This means that the data loading might have been sub-optimal. I did find a very small number of examples with video features of length 0. Those examples are discarded in both training and inference. I considered converting the tfrecord files to another format to speed up the loading and avoid grouping segments from the same video in the same batch, but decided not to for better flexibility.</p>
<p>The NeXtVLAD, context-gated DBoF, and mixture-of-experts models are re-implemented in PyTorch with some small modifications.</p>
<p>Finally, I want to give some credit to the fast.ai library. Although I did not use the fast.ai library directly, many parts of my code were inspired by or simply copied from fast.ai. It’s really phenomenal.</p>
<p>Thank you for remembering my work from two years ago, and your kind words. I'm flattered. 😃 </p>
<p>Thank you!</p>
<p>Thank you <a href="/jesucristo">@jesucristo</a>. I'm also looking forward to the code/paper from you and your team!</p>
The 3rd YouTube-8M Video Understanding Challenge | Solution for 21st place and a PyTorch kernel | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution is pretty straightforward. We didn't use the train dataset at all. We trained a simple model (pooling + MLP) with a softmax classifier for 1000 classes. A single fold of this model resulted in 0.750 on the public LB, ten folds resulted in 0.760, and a blend of 4 similar models resulted in 0.761, which is our final result.</p>
<p>The best model looks like this (exact values were found with hyperopt):</p>
<pre><code>nn.AdaptiveAvgPool1d(1)   # average over the time dimension

nn.Linear(1152, 2765)
nn.BatchNorm1d(2765)      # width of the preceding Linear layer
SwishActivation()

nn.Linear(2765, 1662)
nn.BatchNorm1d(1662)
SwishActivation()

nn.Linear(1662, 1000)
</code></pre>
<p>I created a kernel with this model, and it achieves a public LB score of 0.753 in less than one hour, including submission generation: <a href="https://www.kaggle.com/artyomp/stronger-baseline/">https://www.kaggle.com/artyomp/stronger-baseline/</a>. This became possible because my teammate <a href="/tenich">@tenich</a>:</p>
<ol>
<li>created a TensorFlow dataloader for PyTorch;</li>
<li>optimized submission generation from 5 hours to 5 minutes (check out <code>generate_submission</code>).</li>
</ol>
<p>I used a slightly different approach regarding data loaders: I converted all the data to np.arrays. This makes training a lot faster, since 90% of the validation data is not labeled; the segment-labeled dataset is actually only about 2 GB, and one epoch takes less than a minute.</p>
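<p>A minimal sketch of that np.array-based loading, assuming the segment features and labels have already been extracted to disk (the file names here are hypothetical):</p>
<pre><code>import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

features = np.load("segment_features.npy")   # (n_segments, feature_dim)
labels = np.load("segment_labels.npy")       # (n_segments,) class ids

dataset = TensorDataset(torch.from_numpy(features).float(),
                        torch.from_numpy(labels).long())
loader = DataLoader(dataset, batch_size=256, shuffle=True, num_workers=4)
</code></pre>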
<p>For the sake of speed, this kernel uses a combined approach: it only converts the validation data to np.arrays and uses the TFDataset data loader for the final inference.</p>
<p>I hope this sheds some light on how challenges of this kind could be solved!</p> |
The 3rd YouTube-8M Video Understanding Challenge | first silver medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I want to thank Google and Kaggle for making the competition available. I'm so happy to get this silver medal!!! Last week I failed miserably in IEEE fraud detection, and I decided to do another competition.</p>
<p>Most of this silver medal came from running the DBoF baseline provided by the host. I did a little bit of hyper-parameter tuning to reduce overfitting. </p>
<p>I guess the most difficult part was daring to enter the competition one week before the deadline and walking through / troubleshooting all the Google Cloud Platform set-up. When I was finally able to generate a solution, it was satisfactory.</p>
<p>A little bit of feedback to the host: I guess if the guidance for generating a baseline were easier to follow, and if you could provide a more intuitive illustration of how the baseline models are constructed, this could attract more people to participate (although the entry barriers definitely helped me win the medal this time...).</p>
<p>It was a lucky win. I'll probably spend some time understanding better the models and features used by other people. After all, that's where you learn most.</p>
<p>Thanks!</p> |
The Allen AI Science Challenge | ~0.39 Quick Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The Allen AI Science Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The idea of this approach is to measure the distance between words. Say the text is "This Science Challenge is amazing"; the distance between 'Science' and 'amazing' is 3 (for this example, 'is' is not excluded). Then we use this information to score the candidate answers.</p>
<p>1. Build the corpus from all tokens in the questions and answers</p>
<p>2. Index the word positions</p>
<p>3. Measure the distance - for each document in which both words occur, take the minimum distance between the given pair of words -> store in a list -> take the 20th percentile -> relate_dis</p>
<p>4. Score = 1 / sqrt(relate_dis)</p>
<p>5. Compute the scores for all question-answer word pairs -> take the median</p>
<p>6. Pick the answer with the maximum score</p>
<p>7. Repeat 3-6 for each question</p>
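<p>A minimal sketch of steps 2-6, assuming <code>corpus</code> is a list of tokenized documents; the real implementation relies on the compiled Cython helpers mentioned below, so this is only an illustration of the scoring logic.</p>
<pre><code>import numpy as np
from collections import defaultdict

positions = defaultdict(lambda: defaultdict(list))    # word -> doc_id -> positions
for doc_id, tokens in enumerate(corpus):
    for pos, tok in enumerate(tokens):
        positions[tok][doc_id].append(pos)

def relate_dis(w1, w2):
    dists = []
    for doc_id in positions[w1].keys() & positions[w2].keys():
        p1, p2 = positions[w1][doc_id], positions[w2][doc_id]
        dists.append(min(abs(a - b) for a in p1 for b in p2))
    return np.percentile(dists, 20) if dists else np.inf

def answer_score(question_tokens, answer_tokens):
    pair_scores = [1.0 / np.sqrt(relate_dis(q, a))
                   for q in question_tokens for a in answer_tokens]
    return np.median(pair_scores)                      # pick the answer with the max score
</code></pre>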
<p>It uses a pretty large amount of memory, about 2-3 GB on my system.
I'm not sure if it is going to perform better with a larger corpus. I did some tests on corpus size vs. score; with a corpus size > 2500 pages, the performance of the model starts to converge at ~0.38-0.40.</p>
<p>Note: func and func2 are Cython files (they need to be compiled before they can work properly); run build.bat to compile (if you are on Windows; for a 64-bit system you need the Windows SDK to compile this)</p>
<p>Edit: Score = 1/relate_dis -> Score = 1/ sqrt(relate_dis)</p>
<p>'Science' and 'amazing' is 4 -> 'Science' and 'amazing' is 3 (For this example, 'is' is not excluded)</p> |
The Big Data Combine Engineered by BattleFin | Share your approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The Big Data Combine Engineered by BattleFin
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No one wants to share? So sad... T T</p>
<p>I am not in the top 10, but I have used a lot from what others have done, especially from Miroslaw Horbal. So I want to share what I have done. Hopefully, those in the top 10 could give us some thoughts. If the winners can share, that would be the best. </p>
<p>I have used two approaches:</p>
<p>1. Use GBM to do feature selection, then use linear regression with L1 loss; you can find Miroslaw's code <a href="http://www.kaggle.com/c/battlefin-s-big-data-combine-forecasting-challenge/forums/t/5582/the-hint-thread">here</a>. This approach is simple, but it costs a lot of time: the feature selection took around 24 hours on a server with 20 threads. Since I have a server with 16 cores / 32 threads, that was fine for me. I have uploaded the result of the feature selection; you can download it directly and put it into the output dir. </p>
<p>2. Use another model, which I'll call an AR model. This also comes from Miroslaw's sharing: <a href="http://www.kaggle.com/c/battlefin-s-big-data-combine-forecasting-challenge/forums/t/5582/the-hint-thread?page=3">here</a>. I defined the method as follows:</p>
<p>p = a0*x0 + (1-a0)*a1*x1 + (1-a0)*(1-a1)*a2*x2 + ... + (1-a0)*(1-a1)*...*(1-a(n-1))*an*xn + b</p>
<p>To minimize this function, we have to define the cost and gradient; you can refer to my code. This approach is quite efficient: it only costs me a few minutes. Judging from the private scores, this model is also better. </p>
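<p>A sketch of that weighting scheme: the effective weight of feature i is a_i times the product of (1 - a_j) for all j before i, plus an intercept b. The use of MAE as the cost here is an assumption; refer to the shared code for the exact objective.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def predict(params, X):
    a, b = params[:-1], params[-1]
    keep = np.concatenate(([1.0], np.cumprod(1.0 - a[:-1])))   # prod_{j<i}(1 - a_j)
    return X.dot(a * keep) + b

def cost(params, X, y):
    return np.mean(np.abs(predict(params, X) - y))

# X: (n_samples, n_features), y: targets
# res = minimize(cost, x0=np.zeros(X.shape[1] + 1), args=(X, y), method="L-BFGS-B")
</code></pre>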
<p>With the first approach, public score: 0.41820 private score: 0.42668</p>
<p>With the second approach, public score: 0.41833 private score: 0.42532</p>
<p>To run the code:</p>
<pre><code>mkdir data output res
python2 dataProcess.py
python2 model linr se
python2 model ar
</code></pre>
<p>Unbelievable, we have seen a great approach. </p>
<p>Thanks a lot, Sergey. : )</p> |
The ICML 2013 Whale Challenge - Right Whale Redux | Really Curious what worked for people | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The ICML 2013 Whale Challenge - Right Whale Redux
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I did pretty well in the competition seeing how I'm fairly new to this but I'm really curious if anyone is willing to share what worked well for them. What features did you find most useful, what learning algorithms did you use, did you combine models? I
noticed a lot of entries in the past few days that did really well and I'm curious what you guys tried -- my progress stalled a few weeks back and my new ideas never really panned out.</p>
<p>Here's my approach:</p>
<p>For features I used a short FFT to make a spectrogram of each clip (37 time samples by 40 frequency bins). For my learning algorithm I used Deep Belief Nets (stacked Restricted Boltzmann Machines w/ 500 logistic units per layer). I did greedy pre-training followed
by dropout training for a long time. I toyed with a bit of model combining (random forests) but never got anything good working in that direction. I attempted improving the resolution of my features and increasing the number of parameters in my models but neither improved
the performance that I got.</p>
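<p>For anyone curious, a rough sketch of that kind of short-FFT spectrogram feature extraction; the file name and window parameters are illustrative, not the exact settings used to get the 37 x 40 shape.</p>
<pre><code>import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("clip.wav")                 # hypothetical path to a 2-second clip
freqs, times, sxx = spectrogram(audio.astype(np.float32), fs=fs,
                                nperseg=78, noverlap=26)
features = np.log1p(sxx).T                           # (time, frequency) matrix fed to the network
</code></pre>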
<p>I know there will be a summary eventually like last time for the winners but I thought it might be interesting to get thoughts directly from the other competitors now that we're done.</p>
<p>What did you try? What worked for you?</p>
<p></p>
<p>--RL</p>
<p>[quote=ryank;25865]</p>
<p>Whole thing was around an hour on a GPU. Didn't tune the feature learning hyperparameters, just tried a few reasonable things.</p>
<p>RightLeft: How much did pre-training help your model?</p>
<p>[/quote]</p>
<p>Well I'm jealous of that GPU performance now. I'm currently working on my run of the mill laptop with just my CPU. It took quite a while to run thru the full pre-training/training regimen so I didn't get a chance to find how each individual part of it affected
my performance. I never tried using the deep net without the pre-training phase but I did try ordinary back propagation vs. dropout back propagation. Ordinary back prop had a tendency to overfit the data very quickly whereas the dropout version was able to
avoid that. From that experience I think I would guess that if I had trained with ordinary back prop to begin with my network wouldn't have done that well. It would be an interesting experiment to try the same architecture with no pre-training and all dropout
back prop. It's my understanding of dropout that it would probably do very well but take longer to get there without the pre-training.</p>
<p>Some extra info for anyone who isn't familiar with Hinton's group's work (pretraining RBMs and dropout):</p>
<p><a href="http://www.cs.toronto.edu/~hinton/absps/ncfast.pdf">http://www.cs.toronto.edu/~hinton/absps/ncfast.pdf</a></p>
<p><a href="http://www.cs.toronto.edu/~hinton/absps/dropout.pdf">http://www.cs.toronto.edu/~hinton/absps/dropout.pdf</a></p>
<p></p>
<p>ryank -- Do you use your own GPU code or do you use a library that's already available like Pylearn2?</p>
<p>--RL</p> |
The ICML 2013 Whale Challenge - Right Whale Redux | The Leakage (and how it was fixed) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The ICML 2013 Whale Challenge - Right Whale Redux
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p dir="ltr">As you know, this contest was restarted because some data leakage was found in the first release of the dataset. Now that the contest is over, I’d like to talk a bit about that leakage; I think the technical details are interesting, and they might
make for an interesting case study.</p>
<p dir="ltr">I found that it was possible to get an 0.9973 AUC using only leakage,
<em>without</em> reading the audio files we were given. For a brief moment, that put my team (“Leaky Larry”) on top of the
<a href="http://www.flickr.com/photos/chef_ele/8835763752/in/set-72157634048640327">
leaderboard</a> and ahead of legitimate teams that created the top algorithms in the first Whale Detection Challenge. </p>
<p>Leakage can, of course, undermine the credibility of the leaderboard scores. In the first week of the contest, I gave Kaggle the details about the leakage I found. Cornell / Marineexplore then promptly revised the dataset, and then the competition was restarted.
The revision fixed most of the issue, and the algorithm that got 0.9973 on the original dataset (using only leakage) could only achieve a 0.59 AUC on the revised dataset.</p>
<p>The leakage came from three sources:</p>
<ol>
<li>The distribution of file lengths (in bytes) </li><li>The timestamp embedded in the audio clip filename (focusing mostly on the millisecond field)
</li><li>The chronological order of the clips (ordered using the full timestamp) </li></ol>
<p dir="ltr"><span style="text-decoration:underline"><em><strong>File Sizes</strong></em></span></p>
<p dir="ltr">After downloading the data and typing a simple “ls -lS”, I noticed that a large number of audio clip files had both the same size and the same label (i.e. whale vs no-whale). Interestingly, files with the same size were not duplicates (their
md5sums differed). This seemed fishy.</p>
<p dir="ltr">To investigate further, I made histograms of the file sizes, broken down by label. Audio files with whale upcalls turned out to have a very specific set of file sizes (see this “comb-like”
<a href="http://www.flickr.com/photos/chef_ele/8835763634/in/set-72157634048640327">
histogram</a> where some files are multiples of 148 bytes plus a constant). In contrast, files without whale upcalls had sizes that looked much more evenly distributed (see this <a href="http://www.flickr.com/photos/chef_ele/8835134511/in/set-72157634048640327">histogram</a>). Thus,
certain file sizes provided a strong indication that a clip had a whale upcall in it. Also, files with whale upcalls were larger, on average. </p>
<p dir="ltr"></p>
<p dir="ltr"><span style="text-decoration:underline"><em><strong>Millisecond timestamps</strong></em></span></p>
<p dir="ltr">Another anomaly was related to the timestamps embedded in the filenames. If a whale upcall was not in a file, the millisecond field in the timestamp was almost always a multiple of 10 ms (see
<a href="http://www.flickr.com/photos/chef_ele/8835763570/in/set-72157634048640327">
this</a>). However, if a whale upcall was in a file, the millisecond field seemed evenly distributed in time - i.e. multiples of 1 ms (see
<a href="http://www.flickr.com/photos/chef_ele/8835763486/in/set-72157634048640327">
this</a>). Thus, a zero in the last digit of the millisecond field was strongly predictive; using a simple test for zero as a binary feature yielded a 0.945 AUC by itself.</p>
<p>Next, additional histograms of the millisecond timestamps showed that the audio clips without whale upcalls were more likely to start in the first half of a given second (see
<a href="http://www.flickr.com/photos/chef_ele/8835134379/in/set-72157634048640327">
this</a>). Clips with whale upcalls were more evenly distributed across time (see
<a href="http://www.flickr.com/photos/chef_ele/8835763426/in/set-72157634048640327">
this</a>).</p>
<p>Putting these observations together, it seems like the audio clips with upcalls were processed in a different way than those without upcalls. As a competitor, though, it’s impossible to tell what the true root causes of these differences were, or if they
could be useful.</p>
<p></p>
<p><span style="text-decoration:underline"><em><strong>Clip order</strong></em></span></p>
<p dir="ltr">In the first Whale Detection challenge, the (chronological) ordering of the clips contained information. The same was true in this contest. A moving average of clip labels in the training set showed a familiar pattern: minutes or hours of high
whale-call activity, followed by equally lengthy lulls (see <a href="http://www.flickr.com/photos/chef_ele/8835763360/in/set-72157634048640327">
this</a>). A moving average could be used to capture some of this serial correlation in the leaked test clip labels, providing a “temporally local” probability of a whale upcall.</p>
<p dir="ltr"></p>
<p dir="ltr"><em><strong><span style="text-decoration:underline">Features</span></strong></em></p>
<p dir="ltr">To take advantage of all these observations, I created the following features:</p>
<ul>
<li>File size in bytes (as an integer) </li><li>File size in bytes (as a categorical variable). </li><li>Timestamp_milliseconds (as an integer) </li><li>0 if timestamp_milliseconds was a multiple of 10, 1 otherwise </li><li>0 if timestamp_milliseconds was a multiple of 10, moving average of the above 0/1 feature otherwise
</li></ul>
<p>A logistic regression using these simple features yielded 0.9973 AUC on the leaderboard. I used the ‘glmnet’ package in R for logistic regression, and python to create the features.</p>
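<p>A sketch (in pandas/scikit-learn, as a stand-in for the Python feature code plus glmnet) of how these features could be built; the filename parsing and the moving-average window are assumptions, and the clips are assumed to be sorted chronologically.</p>
<pre><code>import os
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def leak_features(paths, millis):
    # paths: clip file paths in chronological order; millis: millisecond field of each timestamp
    df = pd.DataFrame({"path": paths, "ms": millis})
    df["size"] = df["path"].map(os.path.getsize)
    df["ms_not_mult10"] = (df["ms"] % 10 != 0).astype(int)
    roll = df["ms_not_mult10"].rolling(50, min_periods=1).mean()
    df["ms_rolling"] = np.where(df["ms_not_mult10"] == 1, roll, 0.0)
    return df[["size", "ms", "ms_not_mult10", "ms_rolling"]]

# clf = LogisticRegression().fit(leak_features(train_paths, train_ms), y_train)
</code></pre>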
<p dir="ltr"></p>
<p dir="ltr"><span style="text-decoration:underline"><em><strong>Fixes</strong></em></span></p>
<p>Cornell corrected these anomalies by:</p>
<ul>
<li>Capping the file size so that most files were exactly 2 seconds long (8088 bytes).
</li><li>Reducing the millisecond field in the filename timestamp from 3 digits down to 1 (though what that 1 digit represents is unclear).
</li></ul>
<p>I think it’s easy to see how these slight anomalies could go unnoticed, though some turned out to be unexpectedly effective predictors. Even if competitors didn’t notice these anomalies, it’s possible that some algorithms might have silently picked
up on them if they weren’t corrected.</p>
<p>Like software bugs, leakage seems hard to avoid entirely. I applaud Kaggle & Cornell for swiftly fixing these issues once they were found.</p>
<p></p> |
The Nature Conservancy Fisheries Monitoring | Solution Sharing and Congratulations | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The Nature Conservancy Fisheries Monitoring
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Awesome write-up Felix! I would up-vote you more than once if that were allowed. You really covered the gamut of state-of-the-art object detection methods and pushed Keras and other platforms to new depths. Very impressive.</p>
<p>Thanks bobutis, great reference. I wonder what the comparisons they did would look like using these fish data. Also wondering if anyone had any success here with R-FCN.</p>
<p>Zsolt, bestfitting, thanks very much, sounds like you have very solid and thoughtful solutions. If you have any time to make extra submissions now, I believe everyone would be interested to hear if your STN and part-based models do well on the private test set. Thinking perhaps the public test set may have been a fair bit closer to the training data than the private one, creating an alluring and strong temptation to climb the public leaderboard and let go of models that are in fact very competitive on the private set. </p>
<p>Nope, just tried to match the seven classes we have. Will add we did use fairly heavy data augmentation with the standard tricks like flipping and rotating. </p>
<p>Well, shucks, our bsm is actually 1.12, better than our ensemble and would have put us up one place on the LB (assuming others stay where they are, which isn't quite fair). With this harsh move from public to private I guess a lot of folks did not pick their best submission. </p>
<p>Big congratulations to our top five winners, especially Paulo Pinto and 24TH for steadily climbing the leaderboard and jumping to #1 and #2 by clear margins. Private scores are way worse and it's a pretty major shakeup--if anyone has the shakeup metric script comparing to other competitions please run it. </p>
<p>Our team Chicken of the Sea (#6 private) used an ensemble of SSD with extra data from ImageNet, Faster R-CNN in Caffe, TensorFlow, and MXNet, and straight image classification on crops with some standard architectures (VGG, ResNet, Xception, and Inception). See below for more details from Dmytro, Xulei, and me. We had a really fun team.</p>
<p>Update 1: Both Paulo's and 24TH's position may be in jeopardy. Paulo has <a href="https://www.kaggle.com/c/the-nature-conservancy-fisheries-monitoring/discussion/31607">openly discussed</a> that his uploaded model does not match his #1 solution, and according to the <a href="https://www.kaggle.com/c/the-nature-conservancy-fisheries-monitoring/leaderboard/private">old-style leaderboard</a> 24TH has not uploaded one at all. Combined with a fair bit of <a href="https://www.kaggle.com/c/the-nature-conservancy-fisheries-monitoring/discussion/31211">frustration</a>, the KDM (Kaggle Drama Metric) for this competition appears to be high. Stay tuned, and in the meantime, Happy Easter!</p>
<p>Update 2: After describing the situation to my wife, she immediately responded: "They need to stick to the rules." Looks like that's exactly what happened. Congratulations to bobutis and team TROLL, Liuftvafas, clsfishbetter/frankman, x_enhanced_AI, and final medal winners. It's been a valuable learning experience for all of us. Please continue to post as you are able--this problem appears to be right on the edge of current CNN capabilities and is a good one to study.</p>
<p>That would be the SSD model by Dmytro--very impressive work from a first-time competitor.</p>
<p>When we formed our team we just took a simple average of our best models on the logit scale and it provided a nice jump on the public leaderboard. As time and submissions permitted, we then tuned the weights a little.</p> |
Toxic Comment Classification Challenge | 12th place single model solution share | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No. Self attention means calculating comment-to-comment attention; it is like the R-Net self-matching layer. </p>
<p>I did some late submissions; as some other guy mentioned, concatenating attention and max pooling helps a bit. With the GloVe crawl-only embedding + n-grams it can get public 9866 / private 9864. Since the dataset is small there is much uncertainty in single-model performance, i.e. the same model trained with validation on FOLD 9 got public 9865 / private 9861. An average of the FOLD 0 and FOLD 9 models can get public 9868 / private 9865.</p>
<p>First, thanks to my teammates: great fun working with you guys. We talk on WeChat; I think Pony Ma should thank me for the promotion of WeChat :) <br>
Especially thanks to @sergei: our captain made our ensemble improve a lot, and our results always had similar CV, public and private scores. <br>
Here I will only show some of my single models. Savva has some other great models, and he and Sergei have also successfully used some other loss functions; they have shared the code on GitHub. <a href="https://github.com/thinline72/toxic">Solution</a> <br>
By a single model here I mean a model trained on the first 90% of the training data, monitoring and decaying the learning rate on the remaining 10%. So training on the full data might give slightly better results. </p>
<p><strong>1. Best single model</strong> <br>
My best single model is also a bi-GRU based model; I borrowed code/ideas mostly from the HKUST implementation of R-Net. Really great code, I learned a lot from it. <a href="https://github.com/HKUST-KnowComp/R-Ne">HKUST R-Net</a> <br>
Basically it is a 3-layer bi-GRU with dropout 0.7, plus an additional label-aware attention bi-GRU layer and final max pooling.
With word + trigram embeddings combined via <strong>semantic fusion</strong> we got public 9863 / private 9863. <br>
Concatenating GloVe and fastText embeddings we got a better CV, public 9870 / private 9862; a special post-processing step can push the private score to 9863. </p>
<p><strong>2. GRU + CNN model</strong> <br>
We tried a 1 bi-GRU layer + 4 conv-only layer model (<a href="https://github.com/facebookresearch/InferSent">Facebook sentence encoding</a>); this model converges fast and can also get good results, but it overfits easily. Anyway, it provides diversity. <br>
Public 9862 / private 9860 at epoch 20, public 9859 / private 9858 at epoch 25. <br>
One interesting thing is that we use word + n-gram embeddings for this model; if we only use word embeddings the model overfits quickly and cannot get above 0.99 validation AUC, while with n-gram embeddings it can get 0.9925. </p>
<p><strong>3. Preprocessing</strong> <br>
We simply try to remove non-English chars and split words that are not in the GloVe dictionary (e.g. f.u.c.k -> fuck, ABCFUCK -> ABC FUCK). <br>
We have both lowercased and cased preprocessing versions; the lowercased-version model performs slightly better than the cased version with additional token info (like is_upper, is_lower) embeddings. </p>
<p><strong>4. Word embedding</strong> <br>
we tried <br>
- glove crawl <br>
- fasttext crawl <br>
- lexvec with context <br>
- glove train on jigsaw wiki comment talk corpus <br>
- fasttext train on jigsaw wiki comment talk corpus <br>
The talk-corpus embeddings provide some diversity, but it seems GloVe and fastText still perform better, as they carry more additional knowledge. <br>
Interestingly, for me GloVe crawl performs best, while the other guys mostly got better results with fastText crawl. <br>
For me the fastText version got public 9865 / private 9855, with the private score much lower than the public one. </p>
<p><strong>5. Char embedding</strong> <br>
- We have a char-only model, using the first 1000 chars of the comment with a 3-layer bi-GRU + max pooling. The model is not strong, but it provides much more diversity than the other GRU models. The char model also helps detect toxic comments like (f u..c ..k) and helps a lot in the ensemble. <br>
We find char models need many more steps to converge; for this model <br>
epoch 70: public 9738 private 9722 <br>
epoch 100: public 9741 private 9725 <br>
so with more epochs it might still be improving. <br>
- Word-level (per-word encoding of word embedding + char embedding) model <br>
This model works, but similarly it needs more epochs to train. <br>
epoch 20: public 9863 private 9852 <br>
epoch 50: public 9867 private 9858 <br>
epoch 70: public 9866 private 9860 </p>
<p><strong>6. Ngram embedding</strong> <br>
Actually we only used trigrams; we also tried bigram + trigram, which did not help much. <br>
A trigram encoding of 'good' looks like |go goo ood od|. <br>
N-gram embeddings help a lot; my best models always use n-gram embeddings. </p>
<p><strong>7. How to combine different embeddings ?</strong> <br>
Add or concat? We tried both, and finally we found <strong>semantic fusion</strong> to be much better. <br>
We follow this paper: <a href="https://arxiv.org/abs/1705.02798">Reinforced Mnemonic Reader</a>. <br>
But in the paper they do not use it for combining different embeddings; maybe we are the first
to try it and find it surprisingly effective (not sure :) <br>
Note that you might need large dropout when combining word and n-gram embeddings; we found 0.5 the best.</p>
<p><strong>8. label aware attention layer</strong> <br>
Just following rnet, we find adding label aware attention useful, it like to add some noise to make model loss decrease more stabel. We use 6 * 100 embedding and follow rnet to calc comment to label embedding attention and concat to get label aware comment encoding. </p>
<p><strong>9. self attention</strong> <br>
Also following rnet, we tried comment to comment attention, this seems not improve score, but provides some diversity. public 9867 private 9859 </p>
<p><strong>10. other useful tricks</strong> <br>
- Do not limit comment length <br>
Mostly I use buckets of length 400 and 1000 with batch sizes 64, 32, 8. <br>
I might try bigger batch sizes later, as many other guys found bigger batch sizes help. <br>
- Per-class learning rate decay <br>
This helps a lot, since some classes like obscene are more likely to overfit than others. <br>
I use a per-class patience of 3 and a decay factor of 0.8. <br>
Using TensorFlow you can easily achieve this with the trick below (see the sketch after this paragraph): <br>
x = 0.1 * x + 0.9 * tf.stop_gradient(x) <br>
- Post-processing ("post deal") <br>
The post-processing helped us improve by about 2 places in the end, maybe +0.00005.
We find the train.csv and test.csv distributions differ mostly on non-English-like comments:
UNK UNK UNK... comments are much more common in test.csv. Our models tend to predict high toxic scores <br>
on them, especially the later-epoch models, which have higher CV but tend to overfit on these comments. <br>
So we just use a simple UNK rule to find comments with lots of UNKs and sort them by the <br>
ensemble prediction on toxic. <br>
With about 30 minutes of checking we got 1500+ such comments as a white list and multiplied the predictions for those comments by 0.001.</p>
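<p>A minimal TF2 sketch of the gradient-scaling trick above: the forward value of x is unchanged (0.1*x + 0.9*x == x), but only the 0.1*x term carries gradient, so the backward signal through that branch is scaled by 0.1; varying the factor per class gives a per-class learning rate decay.</p>
<pre><code>import tensorflow as tf

def scale_gradient(x, scale):
    # identical forward value, gradient multiplied by `scale`
    return scale * x + (1.0 - scale) * tf.stop_gradient(x)

x = tf.Variable([2.0])
with tf.GradientTape() as tape:
    y = scale_gradient(x, 0.1) ** 2
grad = tape.gradient(y, x)       # 0.1 * d(x^2)/dx = 0.1 * 4.0 = 0.4
</code></pre>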
<p>That's all. Thanks to everyone again, and thanks to Jigsaw for holding such a wonderful contest. I am also working on community anti-spam; this contest helped me a lot, and the code written during the contest is used in our system with small modifications :)</p>
Toxic Comment Classification Challenge | 15th Solution Summary: Byte Pair Encoding | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all! Thanks to everyone here who participated and contributed. The community here is just awesome :)</p>
<p>Solution Summary:</p>
<ol>
<li><p><strong>Embeddings</strong>: <a href="https://nlp.stanford.edu/projects/glove/">GLoVe</a>, <a href="https://fasttext.cc/docs/en/english-vectors.html">Fasttext</a>, <a href="https://github.com/commonsense/conceptnet-numberbatch">ConceptNet-NumberBatch</a>, Reddit embeddings from <a href="https://github.com/ravishchawla/word_2_vec">here</a></p></li>
<li><p><strong>Neural network architectures</strong>: Our team mostly used Bi-GRU with max+attention pooling. Other networks in our blend were Hierarchical attention networks, <a href="https://github.com/bfelbo/DeepMoji">DeepMoji</a>, <a href="https://www.kaggle.com/chongjiujjin/capsule-net-with-gru">CapsuleNet</a>, Bi-LSTM, <a href="https://github.com/neptune-ml/kaggle-toxic-starter">DPCNN</a>. My teammates put in lot of effort to tune those!</p></li>
<li><p><strong>Other Models</strong>: Logistic regression on tfidf of words and chars, LightGBM public kernel by Olivier <a href="https://www.kaggle.com/ogrellier/lgbm-with-words-and-chars-n-gram">here</a>, WordBatch FM_FTRL by anttip <a href="https://www.kaggle.com/anttip/wordbatch-1-3-3-fm-ftrl-lb-0-9812">here</a></p></li>
<li><p><strong>Preprocessing</strong> : I would like to highlight this! The test set had a lot of words different from train (~60% for the GloVe embedding), probably one of the reasons setting embedding layers to trainable didn't work well in this competition. We used two techniques to handle this:</p>
<ol><li><p><a href="https://github.com/bheinzerling/bpemb"><strong>BPE (Byte Pair Encoding)</strong></a> - I tried breaking OOV (out-of-vocab) words using BPE and then looking up the broken parts in the embedding again.</p></li>
<li><p><strong>Spell Correction using Embedding</strong>: We used this <a href="https://www.kaggle.com/cpmpml/spell-checker-using-word2vec">excellent script by CPMP</a>. I wanted to change word_rank from being based on the index to being based on the NB probability of belonging to the toxic class, but dropped the ball on that.</p>
<p>Individually, models with BPE and spell correction were only marginally better on validation but score improved from 9923 to 9929 during blending. After preprocessing using language translation, BPE and spell correction, one of the models had only 60 OOV words (train and test combined :D) </p></li></ol></li>
<li><p><strong>Blending</strong>: A combination of a simple average and a geometric average worked best for us (see the sketch after this list). I had no luck with non-linear stacking whatsoever. I also did hill-climbing for linear blending using this excellent kernel by <a href="https://www.kaggle.com/hhstrand/hillclimb-ensembling">Hakon</a></p></li>
</ol>
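<p>As referenced in the blending item above, a minimal sketch of mixing an arithmetic and a geometric average; <code>preds</code> is assumed to be a <code>(n_models, n_samples, n_classes)</code> array of probabilities and the 0.5/0.5 weighting is only illustrative.</p>
<pre><code>import numpy as np

def blend(preds, w_arith=0.5, eps=1e-8):
    arith = preds.mean(axis=0)
    geom = np.exp(np.log(np.clip(preds, eps, 1.0)).mean(axis=0))
    return w_arith * arith + (1.0 - w_arith) * geom
</code></pre>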
<p>DeepMoji was our best model on the public LB (0.9865), while a fastText model preprocessed with BPE and a Bi-GRU had the best CV score of 0.9910.</p>
<p>Finally, our cross-validation score after blending was 0.99294 (which might be a little optimistic as most of the models share the same OOF splits), which is also our best LB score!</p>
<p>Special thanks to Jeremy, Jagan, Vladimir, Bojan, Konrad, Tilli, Olivier, Sterby, Alexander, Roussi, Pavel, SpongeBob, Jakub, to name a few, for great kernels and discussions! </p>
<p>A thumbs up to <a href="https://neptune.ml/">Neptune ML</a> team for free GPU credits and starter code. </p>
<p>A big thanks to all my teammates Apoorv, Rahul and Soham for putting in a lot of effort and for a great learning experience!</p>
<p>I will make my GitHub repository with my portion of the code available in a few days!</p>
<p>An excellent post by Sebastian Ruder on <a href="http://ruder.io/word-embeddings-2017/index.html">recent trends in NLP</a> <a href="http://ruder.io/word-embeddings-2017/index.html">here</a> was really inspirational and helpful!</p>
<p>EDIT: Best model on public LB was CapsuleNet closely followed by DeepMoji</p> |
Toxic Comment Classification Challenge | 1st place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>LGBM - it's documented in the solution :) </p>
<p>We’d like to begin by thanking Jigsaw/Kaggle for challenging us with a novel data science problem. And congratulations to everyone who made it through to the finish line - this was a marathon, not a sprint!</p>
<p><strong>Summary of our approach:</strong></p>
<ol>
<li><p>Diverse pre-trained embeddings (baseline public LB of 0.9877)</p></li>
<li><p>Translations as train/test-time augmentation (TTA) (boosted LB from 0.9877 to 0.9880)</p></li>
<li><p>Rough-bore pseudo-labelling (PL) (boosted LB from 0.9880 to 0.9885)</p></li>
<li><p>Robust CV + stacking framework (boosted LB from 0.9885 to 0.9890)</p></li>
</ol>
<p>We didn’t possess prior NLP-specific domain knowledge so our overall strategy was to test standard ML/DL techniques against the problem in a systematic fashion. We were gratified to see general techniques such as TTA and pseudo-labeling work effectively here. </p>
<p>It would be remiss of us not to mention that our hardware setup facilitated an exhaustive search of the solution space: we had 6 GPUs between the two of us. We had optimized code that allowed us to crunch 8-fold OOF data trained against 1+ million samples (TTA + PL) to convergence in approx. 2 hours per model. </p>
<p><strong>Our approach in detail</strong></p>
<p><strong>1. Diverse pre-trained embeddings</strong> </p>
<p>Given that >90% of a model’s complexity resides in the embedding layer, we decided to focus on the embedding layer rather than the post-embedding layers. For the latter, our work-horse was two BiGru layers feeding into two final Dense layers. For the former, we searched the net for available pre-trained word embeddings and settled primarily on the highest-dimensional FastText and Glove embeddings pre-trained against Common Crawl, Wikipedia, and Twitter.</p>
<p><strong>2. Translations as train/test-time augmentation (TTA)</strong></p>
<p>We leveraged Pavel Ostyakov’s idea of machine translations to augment both train and test data sets using French, German, and Spanish translations translated back to English. Given the possibility of information leaks, we made sure to have translations stay on the same side of a train-val split as the original comment. For the predictions, we simply averaged the predicted probabilities of the 4 comments (EN, DE, FR, ES). </p>
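<p>A minimal sketch of the test-time part, assuming each comment appears once per language (EN/DE/FR/ES) in a predictions frame keyed by the original comment id; the column names are assumptions.</p>
<pre><code>import pandas as pd

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def tta_average(pred_df):
    # pred_df: one row per (id, lang) with predicted probabilities in LABELS columns
    return pred_df.groupby("id")[LABELS].mean().reset_index()
</code></pre>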
<p>This had a dramatic impact on the performance of our models. For example, </p>
<ul>
<li>Vanilla Bi-GRU model: 0.9862LB</li>
<li>“ (w/ train-time augments): 0.9867 LB</li>
<li>“ (w/ test-time augments): 0.9865 LB</li>
<li>“ (w/ both train/test-time augments): 0.9874 LB</li>
</ul>
<p>In other words, a single TTA-ed model was beating the majority of teams’ (presumably) ensembled submissions on the public ladder.</p>
<p>We were curious if this technique worked primarily by “fixing” non-English comments - we tested translating the original comments directly into English (which had the side-effect of translating non-English comments) and that resulted in lower performance than our full-form of augmentation.</p>
<p><strong>3. Rough-bore pseudo-labelling (PL)</strong></p>
<p>We tried a number of PL variants - canonical per-batch updates, altering the loss functions etc.
The variant that performed the best was simply labeling the test samples using our best-performing ensemble, adding them to the train set, and training to convergence. </p>
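<p>A sketch of that rough-bore variant, with soft ensemble probabilities used as the labels for the test comments (array shapes and names are assumptions):</p>
<pre><code>import numpy as np

def pseudo_label(X_train, y_train, X_test, ensemble_test_probs):
    # X_*: padded token-id matrices; y_train: binary labels; ensemble_test_probs: soft labels
    X_all = np.concatenate([X_train, X_test], axis=0)
    y_all = np.concatenate([y_train.astype(np.float32),
                            ensemble_test_probs.astype(np.float32)], axis=0)
    return X_all, y_all   # train the usual model on (X_all, y_all) with log loss, to convergence
</code></pre>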
<p>There’s been a fair amount of hay made on the forum about the difference in train and test distributions. PL helps with that.</p>
<p><strong>4. Robust CV + stacking framework</strong></p>
<p>For stacking, we used a weighted mean of arithmetic averaging and stacking, which worked marginally better (~.0001) than either approach alone. For stacking, we used primarily LightGBM, which both was faster than XGBoost and reached slightly better CV scores with heavy bayesian optimization. </p>
<p>Parameters were selected by choosing the best out of 250 runs with bayesian optimization; key points in the parameters were small trees with low depth and strong l1 regularization. We bagged 6 runs of both DART and GBDT using different seeds to account for potential variance during stacking.</p>
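<p>A sketch of a per-label LightGBM stacker over the level-1 out-of-fold predictions; the hyper-parameters only reflect the description above (small, shallow trees with strong L1 regularization), not the bayesian-optimized values.</p>
<pre><code>import numpy as np
import lightgbm as lgb

def fit_stackers(oof_preds, y, n_labels=6):
    models = []
    for j in range(n_labels):
        clf = lgb.LGBMClassifier(num_leaves=8, max_depth=3, n_estimators=400,
                                 learning_rate=0.05, reg_alpha=5.0)
        clf.fit(oof_preds, y[:, j])
        models.append(clf)
    return models

def stack_predict(models, test_preds):
    return np.column_stack([m.predict_proba(test_preds)[:, 1] for m in models])
</code></pre>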
<p>For CV, we tracked accuracy, log loss and AUC. A model was “strong” if when added to our stack it improved CV-log loss and CV-AUC in addition to improving the public board. We discarded a lot of models from our stack+blend for failing to do any of the three above in fear of overfitting. </p>
<p><strong>5. Miscellaneous takeaways/explorations</strong></p>
<p>During our search of the solution space we tried a number of different approaches with varying success. A number key takeaways we thought would be helpful to share are below:</p>
<ol>
<li><p>Since most of the model complexity lay in the pre-trained embeddings, minor architecture changes made very little impact on score. Additional dense layers, gaussian vs. spatial dropout, additional dropout layers at the dense level, attention instead of max pooling, time distributed dense layers, and more barely changed the overall score of the model.</p></li>
<li><p>Preprocessing was also not particularly impactful, although leaving punctuation in the embeddings for some models (with fasttext, so it could be accommodated) was helpful in stacking.</p></li>
<li><p>Many comments were toxic only in the last sentences -- adding some models trained with the ending 25-50 characters in addition to the starting 200-300 assisted our stack. </p></li>
<li><p>Some approaches struggled to deal with the “ordering” problem of words. The same phrase with two words swapped can mean completely different things. This meant that CNN approaches were difficult to work with, as they rely on max-pooling as a crutch. Our best CNN (a wavenet-like encoder connected to some time distributed dense and dense layers) scored about .0015 lower than our best RNN. </p></li>
<li><p>Overall, other architectures struggled to achieve comparable performance to RNN. The only one that reached RNN levels was Attention Is All You Need, and it took significantly longer to train. </p></li>
<li><p>On that note, despite all the hate, tensorflow is superior to Keras in a lot of ways-- try implementing Attention Is All You Need or scalable models in Keras for a practical demonstration of that (yes, Keras just got tensorflow-served). That said, Keras is also fantastic, especially for rapid iteration, and what we used for our primary RNN models. </p></li>
<li><p>Kagglers often don’t want to mix models with different OOF splits, and it’s hard to understand why. It can make for over-optimistic CV predictions, but you won’t overfit the public board compared to the private with that strategy. As long as you are looking at directionality and not comparing stacks from different OOF splits to stacks from a single OOF split you shouldn’t have any issues. </p></li>
</ol>
<p>Cheers! Expect to see more of the Toxic Crusaders in future competitions!</p>
<p>P.S. I'm on the lookout for interesting Data Science/ML jobs so if you're privy to one please DM me!</p>
<p>I personally only tested/used RNN/CNNs but my team-mate, Ray might have tried alternative models. I'll defer to him on that end. </p>
<p>9880 to 9885</p>
<p>Soft labels.</p>
<p>We tested both and found that the probs worked better.</p>
<blockquote>
<p><strong>Jiwei Liu wrote</strong></p>
<blockquote>
<p>Congrats! I have a question in Rough-bore pseudo-labelling (PL) part. Did you binarize the prob predictions of ensemble to train? Or do you just use the ensemble probs as is and do regression? Thank you so much.</p>
</blockquote>
</blockquote>
<p>Let me flip the question and say - why not try it in every competition?</p>
<p>This seems to be like a standard technique that should be in everyone's competition tool kit. </p>
<blockquote>
<p><strong>Peter Hurford wrote</strong></p>
<blockquote>
<p>To put my question another way, why can't you just pseudo-label every test set in every Kaggle competition? Assuming that won't work, what makes it work well in this particular competition?</p>
</blockquote>
</blockquote>
<p>Soft targeting and all of them.</p>
<p>We tried variants including hard labels, only selecting high confidence predictions - they all performed worse. </p>
<p>PL can be seen as a form of regularization so overfitting isn't really a concern. </p>
<p>And part of the reason why I jumped on this is I used it to great effect in another competition (IEEE Cameras). </p>
<blockquote>
<p><strong>Peter Hurford wrote</strong></p>
<blockquote>
<p>Wow. I didn't try pseudo-labeling as I was too fearful it would overfit. Glad to see that it helped! Any tips on avoiding overfit with this approach or how you went about deciding you could trust it?</p>
</blockquote>
</blockquote>
<p><a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038">https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48038</a></p>
<ol>
<li><p>We trained a separate model for each pre-trained embedding. We tried fusing multiple embeddings in a single model but that performed worse than training separately and ensembling. </p></li>
<li><p>We froze the layer for the most part although I believe Ray built a couple models using fine-tuning.</p></li>
</ol>
<p>Yes except we only used log-loss for the loss function. Performance degraded significantly when we used other loss functions such as pairwise loss. </p>
<p>We did evaluate the AUC of the validation set every epoch to checkpoint the model, and load the best epoch's model weights before making test predictions. </p>
<p>The latter. We tested typical class balancing techniques (e.g., class weights etc.) - didn't seem to help. </p>
<p>Convergence took around 2-6 epochs depending on the model - we used check-pointing to save the weights at the best epoch. </p> |
Toxic Comment Classification Challenge | 25th Place Notes (0.9872 public; 0.9873 private) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Jigsaw and Kaggle for running this interesting competition and dataset, and to all the great sharing that really makes this place what it is. It's going past the point where one person can keep up with everything shared, so triaging what to try is becoming a more important skill.</p>
<p>Congrats to the winners too, some neat tricks already disclosed, I'm kicking myself for forgetting about the machine translated training augmentation. I'm also looking forward to reading all solutions, from anywhere on the leaderboard (edit: or indeed off the leaderboard...)</p>
<p>I started off with Jeremy Howard's NB SVM and LSTM baselines and added other public kernels later, particularly by Tilii and Olivier (logistic regression & LGB, re-run with my KFold seed.) Thanks for sharing guys! Other than that I used mainly CuDNNLSTM and CuDNNGRU models (Bidirectional, ~120 units, max & avg pooling and small dense layer), plus some CNNs: with five fold CV I could train two over lunchtime every day.</p>
<p>Here are some selected tips and tricks, probably implemented by others too...</p>
<h2>Word Vectors</h2>
<ul>
<li>glove.42B.300d.txt</li>
<li>glove.6B.200d.txt</li>
<li>glove.twitter.27B.200d.txt</li>
<li>fasttext-crawl-300d-2m.zip</li>
<li>lexvec.commoncrawl.300d.W.pos.neg3.vectors</li>
<li>(too late to try, but really interesting: <a href="https://github.com/agnusmaximus/Word2Bits">https://github.com/agnusmaximus/Word2Bits</a>)</li>
</ul>
<p>The twitter embeddings use special tokens like "<allcaps>" which must help out a bit in this task. The code for preprocessing is at:</p>
<ul>
<li><a href="https://gist.github.com/tokestermw/cb87a97113da12acb388">https://gist.github.com/tokestermw/cb87a97113da12acb388</a></li>
<li><a href="https://nlp.stanford.edu/projects/glove/preprocess-twitter.rb">https://nlp.stanford.edu/projects/glove/preprocess-twitter.rb</a></li>
</ul>
<h2>Input Transforms</h2>
<p><a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/51426#latest-297028">CPMP's spell checker</a>, and adapted to also split combined words (based on word vector file order, i.e. commonest first).</p>
<p>Reversing the sentence order, a hack that's a bit Kaggle specific. (Kind of think of this as a vaguely Yoda-like model, I do.)</p>
<pre><code>import itertools
import nltk

# standard, default
def word_tokenize(s):
return list(itertools.chain.from_iterable([nltk.word_tokenize(t) for t in nltk.sent_tokenize(s)]))
# last sentence comes first - might learn differently?
def word_tokenize_rev(s):
return list(itertools.chain.from_iterable([nltk.word_tokenize(t) for t in nltk.sent_tokenize(s)[::-1]]))
train_toks = train.comment_text.apply(word_tokenize_rev)
</code></pre>
<p>Word order within sentences is preserved, but new combinations of neighbouring words are created between sentences...</p>
<p>Also, because sequence based models need fixed length input, I trained duplicated models on simple reversed word order, which actually had better CV scores. For short comments the performance should be the same but for long comments these models are certain to diverge from the original models. (This might still miss toxicity in the middle.)</p>
<p>As noted by others, unlike TF-IDF where rare words are omitted, even a single occurrence of a known word should get an entry in the embedding matrix: its embedding may be similar to that of other words. My embedding matrix was often over 200k entries (I did not limit its size). In some models I ditched out-of-corpus words, in some I used a single 'unk' token/vector.</p>
<h2>Explore All Epochs</h2>
<p>Prompted by a CPMP comment on the forum (and vaguely reminiscent of a <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44700#251709">Faron comment</a> from Porto Seguro), I made validation & test set predictions after every epoch, which slows training a bit, but not much.</p>
<p>An example log:</p>
<pre><code> 0 AUC [0.9785 0.9898 0.9914 0.9536 0.9839 0.9673] 0.9774138
1 AUC [0.9806 0.9902 0.9925 0.9609 0.9861 0.9807] 0.9818244
2 AUC [0.9824 0.9911 0.9930 0.9729 0.9872 0.9893] 0.9859826
3 AUC [0.9834 0.9912 0.9932 0.9814 0.9877 0.9895] 0.9877439
4 AUC [0.9835 0.9911 0.9932 0.9855 0.9877 0.9901] 0.9885134
5 AUC [0.9840 0.9913 0.9934 0.9893 0.9883 0.9903] 0.9894455
6 AUC [0.9844 0.9913 0.9936 0.9916 0.9885 0.9910] 0.9900701
7 AUC [0.9843 0.9913 0.9935 0.9920 0.9884 0.9907] 0.9900389
8 AUC [0.9844 0.9912 0.9937 0.9928 0.9883 0.9909] 0.9902207
9 AUC [0.9846 0.9912 0.9938 0.9929 0.9884 0.9905] 0.9902244
10 AUC [0.9845 0.9913 0.9937 0.9928 0.9883 0.9905] 0.9901801
11 AUC [0.9849 0.9914 0.9935 0.9925 0.9884 0.9910] 0.9902788
12 AUC [0.9848 0.9910 0.9937 0.9934 0.9884 0.9910] 0.9903721
13 AUC [0.9844 0.9913 0.9935 0.9935 0.9884 0.9905] 0.9902595
14 AUC [0.9849 0.9913 0.9936 0.9937 0.9879 0.9904] 0.9903008
15 AUC [0.9846 0.9911 0.9935 0.9942 0.9883 0.9905] 0.9903677
16 AUC [0.9840 0.9913 0.9935 0.9933 0.9882 0.9905] 0.9901266
Best epochs per class:
[11 11 9 15 6 6]
Best AUC values per class:
[ 0.984888 0.991392 0.99376 0.994183 0.988477 0.990986]
Mean best AUC: 0.990614250515
</code></pre>
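<p>A minimal sketch of the kind of Keras callback that could collect these per-epoch predictions and AUCs (illustrative only, not the exact code behind the log; <code>tensorflow.keras</code> and the data names are assumptions):</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score
from tensorflow.keras.callbacks import Callback

class EpochPredictions(Callback):
    """Store validation/test predictions and per-class AUC after every epoch."""
    def __init__(self, x_val, y_val, x_test):
        super().__init__()
        self.x_val, self.y_val, self.x_test = x_val, y_val, x_test
        self.val_preds, self.test_preds, self.aucs = [], [], []

    def on_epoch_end(self, epoch, logs=None):
        val_pred = self.model.predict(self.x_val, verbose=0)
        self.val_preds.append(val_pred)
        self.test_preds.append(self.model.predict(self.x_test, verbose=0))
        auc = np.array([roc_auc_score(self.y_val[:, i], val_pred[:, i])
                        for i in range(self.y_val.shape[1])])
        self.aucs.append(auc)
        print(epoch, 'AUC', np.round(auc, 4), round(float(auc.mean()), 7))
</code></pre>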
<h2>Model Targets</h2>
<ul>
<li>Binary 6 output logloss.</li>
<li>Regression: np.log1p(y.sum(1)) (mean centered)</li>
<li>Three class softmax on bucketed label sum: [ 0 ] [ 1 2 ] [ 3 4 5 6 ]</li>
</ul>
<p>The latter two cannot be used for blending but helped nonlinear stacking.</p>
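<p>A minimal sketch of how these auxiliary targets could be built (assuming <code>y</code> is the (n_samples, 6) binary label matrix):</p>
<pre><code>import numpy as np

label_sum = y.sum(1)                  # y is the (n_samples, 6) binary label matrix

# regression target: log1p of the label sum, mean-centered
y_reg = np.log1p(label_sum)
y_reg = y_reg - y_reg.mean()

# three-class softmax target on the bucketed label sum: [0], [1, 2], [3, 4, 5, 6]
y_bucket = np.digitize(label_sum, bins=[1, 3])   # 0 -> 0, 1-2 -> 1, 3-6 -> 2
</code></pre>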
<h2>Stacking</h2>
<p>I used my per-epoch predictions to try different combinations for stacking, e.g. best epoch per class, single best epoch... I settled on simply using the best epoch per class.</p>
<p>For level 2 neural nets it was important to use stretched logit scale predictions:</p>
<pre><code>import numpy as np
from scipy.special import logit   # scipy's logit assumed here

def safe_logit(df, eps=1e-6):
    # clip probabilities away from 0/1 so the logit stays finite
    return logit(np.clip(df, eps, 1 - eps))
</code></pre>
<p>This reached CV AUC of 0.9922. (See below for OOF report.)</p>
<p>Six tuned LightGBM models on the level 1 predictions and some extra features reached 0.9925.</p>
<h3>Extra LGB L2 Features</h3>
<ul>
<li>comment text length</li>
<li>count / ratio of uppercase chars</li>
<li>standard deviation of word lengths in comment</li>
<li>lzw compression ratio of comment text</li>
</ul>
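<p>These extra features could be computed roughly as follows (a sketch; <code>zlib</code> is used here as a stand-in for LZW, and the column names are illustrative):</p>
<pre><code>import zlib
import numpy as np

def extra_l2_features(text):
    words = text.split()
    raw = text.encode('utf-8')
    n_upper = sum(c.isupper() for c in text)
    return {
        'length': len(text),
        'upper_count': n_upper,
        'upper_ratio': n_upper / max(len(text), 1),
        'word_len_std': float(np.std([len(w) for w in words])) if words else 0.0,
        # zlib (DEFLATE) compression ratio as a stand-in for LZW
        'compress_ratio': len(zlib.compress(raw)) / max(len(raw), 1),
    }
</code></pre>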
<p>The rank average of those two scores: <strong>0.9872</strong> public, <strong>0.9873</strong> private...</p>
<h2>Error Analysis</h2>
<p>Some advice particularly for newbies...</p>
<p>The most important part of the iteration process: where are my models going wrong?</p>
<p>You can take OOF predictions and look at the worst mistakes. Sort by the predictions, and for target==0 high values are bad, for target==1 low values are bad.</p>
<p>Here is the code I use, to make a multi-sheet spreadsheet:</p>
<pre><code>import numpy as np
import pandas as pd

train = pd.read_csv('../input/train.csv', index_col='id').fillna(' ')
list_classes = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
cols = list_classes + ['oof', 'comment_text']

# Edit: replace newlines for more compact view
train['comment_text'] = train['comment_text'].str.replace('\n', ' ')

# Load OOF predictions (shape is (159571, 6))
oof = np.load('l2_keras_oof.npy')

# Change fp and fn to dump as many rows as you like.
# Some classes cannot have 3000 false negatives,
# e.g. threat only has 478 positive training examples.
fp = 1000
fn = 3000

with pd.ExcelWriter('oof_mistakes.xlsx') as writer:
    for i, name in enumerate(list_classes):
        train['oof'] = oof[:, i]
        srt = train.sort_values('oof')  # sort ascending
        df = srt.loc[srt[name] == 0].tail(fp)[cols]
        df.to_excel(writer, 'not_%s_ranked_high' % (name), encoding='utf-8')
        df = srt.loc[srt[name] == 1].head(fn)[cols]
        df.to_excel(writer, '%s_ranked_low' % (name), encoding='utf-8')
</code></pre>
<p>Edit: for Python 2.7 this may help:</p>
<pre><code>from unidecode import unidecode
train['comment_text'] = train['comment_text'].str.decode('utf-8').apply(unidecode)
</code></pre>
<p>My predictions are rank transformed and 0/1 min-max scaled, so 0.9 means bottom of the top 10% most toxic.</p>
<p>The columns are always sorted ascending, so in 'not_toxic_ranked_high' the worst errors are at the bottom, and in 'toxic_ranked_low' the worst errors are at the top. (I find it easier this way but it should be easy to alter.)</p>
<p>Attached are my OOF reports from the layer 2 stacking neural network. This combines ALL my input models, so it is a good measure of the whole ensemble.</p>
<p>For example, my last model was still making mistakes ranking 'obscene' comments too low, the worst was 0.727; the comments at that end of the spectrum mostly include the word 'bitch' but obfuscated (e.g. 'b1tsch'). (This seems like an interesting sub-problem in itself, an <a href="https://en.wikipedia.org/wiki/Evolutionary_arms_race">arms race</a> between content filters and people exploiting flaws in them, creating new mutations of words still perceived as offensive but not easily machine recognizable.)</p>
<p>(For curiosity, I made oof_greatest_hits: this shows the other ends of the spectrum, the toxic comments ranked most toxic, and the negatives with the strongest negative scores. I don't use this much... It's funny to see pure distilled toxicity though.)</p>
<p>Also: simpler models will create identical predictions for inputs they 'see' as identical (but the comment text may not be strictly identical.) It was interesting to find some duplicates in the training set this way... but a bit too late to change the CV folds accordingly.</p>
<p>Most of the 'false positives' ("not toxic" but ranked high) look like bad training set labels to me. I trust the model much more. So improvements are left to the other end, false negatives: true targets ranked too low.</p>
<p>Reading through all the mistakes looking for improvements is hard... Who knows what kind of <a href="http://www.medicaldaily.com/brain-science-cursing-how-swear-words-work-inflict-and-relieve-pain-331264">subconscious effects</a> it might have!</p>
<h2>Leftovers</h2>
<p>Some unexplored/abandoned ideas.</p>
<h3>Visual Models</h3>
<p>One idea I thought of but didn't try was to read text the way humans actually see it, i.e. to classify at the perception level of the text itself: e.g. 81tch → is that offensive? With a little standardization and 'rendering' of the text into pixels, surely this would work at some level, at least for the more basic obfuscated text examples. (It seems in some way more granular than a character RNN... but would not be capable of learning large phrase concepts.)</p>
<h3>Clustering Word Vectors</h3>
<p>I tried clustering the word vectors for train & test combined; the results are attached. This is 21265 clusters, aiming for a mean of ten words per cluster, though the clusters are considerably more 'lumpy' than that: many have one word, some have dozens (this was created using <code>sklearn.cluster.MiniBatchKMeans</code>).</p>
<p>Since <a href="https://www.kaggle.com/jhoward/nb-svm-strong-linear-baseline">NB SVM</a> worked so well I thought it might improve generalisation to use assigned word vector cluster IDs instead of words. The CV score for doing this with cluster IDs & bi-grams of cluster IDs was 0.9770, not an improvement. In theory, one cluster per word is identical to TF-IDF, and too few clusters loses too much information: there must be an optimal point in the middle but I ran out of patience to find it. The clusters look fairly good, it is easy to see themes for each, and some are excellent markers for toxicity (e.g. cluster 980). Distance of the word vectors in the comment text to some of these clusters might have made good features for other models. This is a low-complexity alternative to what neural networks are doing with the vectors.</p>
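<p>A sketch of this clustering and token-replacement idea (the word list and embedding matrix are assumed to be built elsewhere):</p>
<pre><code>from sklearn.cluster import MiniBatchKMeans

# words: vocabulary of train + test; emb_matrix: their (n_words, dim) vectors
km = MiniBatchKMeans(n_clusters=21265, batch_size=1024, random_state=0)
cluster_ids = km.fit_predict(emb_matrix)
word2cluster = dict(zip(words, cluster_ids))

# replace each token with its cluster id before feeding TF-IDF / NB-SVM
def to_cluster_tokens(tokens):
    return [str(word2cluster.get(t, -1)) for t in tokens]
</code></pre>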
<p>I may edit this post with more details later...</p> |
Toxic Comment Classification Challenge | 27th place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, I have replied to your questions. You can check it on GitHub.</p>
<p>I intend to do so. As I mentioned in the previous comment, with more filters you should choose a high dropout value, or the result will not improve and may even get worse.</p>
<p>Thanks. It was a pleasure working with you :)</p>
<p>TextBlob transforms sentences into tags rather than vectors. For example:</p>
<p><code>text = "The titular threat of The Blob has always struck me as the ultimate movie monster..."</code></p>
<p><code>blob = TextBlob(text)</code></p>
<p><code>blob.tags # [('The', 'DT'), ('titular', 'JJ'), ('threat', 'NN'), ('of', 'IN'), ...]</code></p>
<p>I assign trainable vectors to each tag via an Embedding layer. Here is the structure of my POS RNN:</p>
<p><img src="https://i.imgur.com/RkQ9ztx.png" alt="posrnn"></p>
<p>The difference is that it takes 2 inputs: one for the raw sentences, the other for the POS sequences, and I concatenate these two embeddings into a single one.</p>
<p>The rest of the structure is exactly the same as my original RNN.</p>
<p>Thanks. I will update the link after getting things done.</p>
<p>I use <code>plot_model</code>.</p>
<p>You can get more details in the <a href="https://keras.io/visualization/">keras document</a>.</p>
<p>I also used tf.nn.top_k() and it is similar to CPMP's implementation.</p>
<p>Firstly, congratulations to all winners and thanks a lot to Jigsaw for such an impressive competition. There were so many interesting approaches proposed during this competition, which is really helpful for beginners like me. I think it's my turn to give feedback.</p>
<p>Our solution is based on the following 3 parts:</p>
<h2>Preprocessing</h2>
<p>We trained our models on 3 datasets with different preprocessing:</p>
<ul>
<li>original dataset with spelling correction: done by comparing Levenshtein distances and applying a lot of regular expressions.</li>
<li>original dataset with POS tagging: We generate the part-of-speech (POS) tags for every comment with TextBlob and concatenate the word embedding and POS embedding into a single one. Since TextBlob drops some tokens and punctuation when generating the POS sequences, this gives our models another view. </li>
<li>Riad's dataset: with very heavy data cleaning, spelling correction and translation</li>
</ul>
<h2>Models</h2>
<p>In our case, simple models always perform better. I tried some complicated structures (RHN with recurrent dropout, DPCNN, VDCNN, HAN, Convolutional Attention Model). Most of them had performed very well locally but got lower AUC on the leaderboard. The models I kept trying during the final stage are the following two:</p>
<p>Pooled RNN (public: 0.9862, private: 0.9858)
<img src="https://i.imgur.com/AQkbPn7.png" alt="pooledRNN"></p>
<p>Kmax text CNN (public: 0.9856 , private: 0.9849)
<img src="https://i.imgur.com/WfbXVh3.png" alt="kmaxCNN"></p>
<p>The structures are very simple and converge quickly, so we were able to experiment with several combinations of hyper-parameters and evaluate the performance of different preprocessing and pre-trained embeddings.</p>
<p>As many competitors pointed out, dropout and batch normalization are the keys to preventing overfitting. Applying dropout directly on the word embedding and after the pooling provides strong regularization on both the train set and the test set. Although a model with many dropout layers takes about 5 more epochs to converge, it boosts our scores significantly. For instance, my RNN improved from 0.9853 (private: 0.9850) to 0.9862 (private: 0.9858) after adding dropout layers. </p>
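<p>A minimal Keras sketch of this kind of pooled RNN, with dropout applied on the embedding and after the pooling (layer sizes and rates are assumptions, not our exact configuration):</p>
<pre><code>from tensorflow.keras import layers, Model

def build_pooled_rnn(max_len, vocab_size, emb_matrix, n_units=128, n_labels=6):
    inp = layers.Input(shape=(max_len,))
    x = layers.Embedding(vocab_size, emb_matrix.shape[1],
                         weights=[emb_matrix], trainable=False)(inp)
    x = layers.SpatialDropout1D(0.3)(x)            # dropout directly on the embedding
    x = layers.Bidirectional(layers.GRU(n_units, return_sequences=True))(x)
    pooled = layers.concatenate([layers.GlobalMaxPooling1D()(x),
                                 layers.GlobalAveragePooling1D()(x)])
    pooled = layers.Dropout(0.3)(pooled)           # dropout after the pooling
    out = layers.Dense(n_labels, activation='sigmoid')(pooled)
    model = Model(inp, out)
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model
</code></pre>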
<p>To maximize the utility of these datasets, besides training on the original labels, we also add a meta-label "bad_comment": if a comment has any of the six labels, it's considered to be a bad comment. The hypotheses learned from these two label sets are slightly different but give almost the same LB score, which leaves us room for the ensemble.</p>
<p>In order to increase diversity and to deal with some toxic typos, we trained the models on both the char level and the word level. The char-level results perform a bit worse (charRNN: 0.983 on LB, 0.982 on PB; charCNN: 0.9808 on LB, 0.9801 on PB) but they have a pretty low correlation with the word-level models. Simply bagging my char-level RNN and word-level RNN was good enough to push me over 0.9869 on the private test set. </p>
<p>By the way, the hyperparameters influence performance hugely in the char-based models. A large batch size (~256) and a very long sequence length (>1000) ordinarily give a considerable result, even though K-fold validation takes much more time. (My char-based models usually converge after 60~70 epochs, which is about 5 times more than my word-based models.)</p>
<p>As for the pre-trained embedding, we used:</p>
<ul>
<li>Fasttext: crawl-300d-2M.vec</li>
<li>GloVe: glove.840B.300d.txt</li>
<li>GloVe: glove.twitter.27B.200d.txt</li>
</ul>
<p>We initialize the OOV embeddings with random noise rather than all zeros, for regularization. We also played some tricks on word embeddings, for instance concatenating two into a single one; however, most of them did not perform better than directly bagging the submissions of different embeddings.</p>
<h2>Ensemble</h2>
<p>Since I teamed up with Riad Souissi at a very late moment, our ensemble was 1-level only and kind of plain.</p>
<p>We simply select some of the best OOFs from each type of model (RNN, CNN, LightGBM, NBSVM) and ensemble them with LightGBM. Most of the meta-features are the same as in <a href="https://www.kaggle.com/ogrellier/wordbatch-fm-ftrl-using-mse-lb-0-9804">olivier's awesome kernel</a>. The only difference is that we add #typos, #oov_in_fasttext, #oov_in_glove and sentiment scores trained on twitter comments to capture the degree of positiveness, which gives a small boost to my local AUC from 0.9920 to 0.9928.</p>
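<p>The stacking step could be sketched like this (assuming <code>X_oof</code> holds the level-1 out-of-fold predictions and <code>meta</code> the handcrafted meta-features; the parameters are illustrative):</p>
<pre><code>import numpy as np
import lightgbm as lgb

X_l2 = np.hstack([X_oof, meta])          # OOF predictions + meta-features
params = dict(objective='binary', learning_rate=0.02, num_leaves=15)

stackers = []
for i in range(6):                       # one stacker per label
    dtrain = lgb.Dataset(X_l2, label=y[:, i])
    stackers.append(lgb.train(params, dtrain, num_boost_round=500))
</code></pre>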
<p>You can find part of our solution in this <a href="https://github.com/zake7749/DeepToxic">github repository</a>. Finally, I have to pay tribute to my teammates, @Riad Souissi, @Wokerker. They propose many inspiring ideas that support our team keeping pushing in the final moment. Many thanks.</p>
<p>The correlation of <code>k=10</code> and <code>k=4</code> is about 0.99 and they have close AUC scores. </p>
<p>I choose 3 or 4 eventually because it usually takes more epochs to converge with a large k.</p>
<p>Sure, but I am sorry that I can not share more detail about the difference between AUCs on the leaderboard because I did not submit all kinds of them.</p>
<p>I roughly replicate experiments in the paper <a href="https://arxiv.org/pdf/1510.03820.pdf">"A Sensitivity Analysis of (and Practitioners’ Guide to) Convolutional
Neural Networks for Sentence Classification"</a> and reach the same conclusion.</p>
<ul>
<li>My number of feature maps ranged from 100~400. With a large number of feature maps, you should also apply a large dropout rate (in an extreme case, I set 0.8), or the result will not improve and may even get worse.</li>
<li>The activation function on Conv1D is 'RELU', which outperforms 'Tanh'.</li>
<li>The initialization method of Conv1D is not the key point in this case.</li>
<li>The window sizes I used are (1, 2, 3, 4), which is slightly better than (1, 2, 3) and (2, 3, 4) but worse than bagging them. Window sizes larger than 4 do not get a better result for the word-level model.</li>
<li>Stacking more convolution and pooling layers would not bring a better result. One layer is good enough in this case.</li>
<li>On the idea of selecting feature maps of different window sizes, I tried avg, sum, and applying attention to them, but none of them outperform max pooling.</li>
<li>The gap between max pooling and k(3)-max pooling is about 7e-4.</li>
</ul> |
Toxic Comment Classification Challenge | 2nd place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Please have a look at this comprehensive "awesome trend" repo on Pretrained Word Embeddings:
<a href="https://github.com/Hironsan/awesome-embedding-models">https://github.com/Hironsan/awesome-embedding-models</a></p>
<p>It sounds good! Maybe I will try it next time</p>
<p>fasttext crawl vectors gave better results.</p>
<p>First of all we would like to thank <a href="https://jigsaw.google.com/">Jigsaw</a> and Kaggle for this fascinating NLP competition, and also congratulations to all the participants from 1st place to 4,551st; I am sure we all had a full learning experience! </p>
<p><strong>Overview</strong>:
For this competition we built an ensemble of RNN, DPCNN and GBM models in order to achieve the appropriate diversity. The most interesting part of our solution was the neural networks, for which we did:</p>
<ul>
<li>train on pre-trained embeddings. (FastText, Glove twitter, BPEmb, Word2Vec, LexVec) </li>
<li>train and test-time augmentation (TTA) using translations to German, French and
Spanish and back to English thanks to Pavel Ostyakov’s open contribution. </li>
<li>train on translations to the above languages and using DE, FR, ES
BPEmb pre-trained embeddings. </li>
</ul>
<p>Train and test time augmentations as well as the non-English embeddings gave us the boost to second place. I also have to mention that for this competition we used 1 GPU and trained with 5-fold CV.</p>
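<p>Test-time augmentation here simply means averaging the model's probabilities over the original comments and their back-translated copies, roughly like so (variable names are placeholders):</p>
<pre><code>import numpy as np

# predict on the original English test set and on the DE/FR/ES back-translations,
# then average the probabilities
preds = [model.predict(x_test_en),
         model.predict(x_test_de_backtrans),
         model.predict(x_test_fr_backtrans),
         model.predict(x_test_es_backtrans)]
tta_pred = np.mean(preds, axis=0)
</code></pre>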
<p><strong>Ensemble</strong>:
Finally we ended up with about 30 different models of which we took the average. </p>
<p>Thank you and Happy Kaggling!</p>
<p>We trained each individual model on one embedding at a time</p>
<p>As an example,</p>
<ol>
<li>GRU with TTA BPEmb: Private Leaderboard 0.9860</li>
<li>DPCNN with TTA BPEmb: Private Leaderboard 0.9843</li>
</ol>
<p>In general GRU performs better with all embeddings.</p>
<p>Averaging the predictions of the translations, the same way we do with images on a test set when, e.g., we flip horizontally and vertically and then average the predictions.</p>
<p>Yes, TTA was based on translations of the test set.</p>
<p>The majority of our models took about 2 hours to train; some others took 4 hours.</p>
<p>We only tried BPEmb pre-trained embeddings with 200000 merge operations </p> |
Toxic Comment Classification Challenge | 33rd Place Solution Using Embedding Imputation (0.9872 private, 0.9876 public) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sure no problem! Let me know if you run into any problems. I'm not sure if any of the architectures are really that novel. I think most of the score comes filling in missing vectors with something better than zeros or random numbers. </p>
<p>Hey thanks authman! </p>
<p>It allowed me to streamline my process and train/tune a lot of neural networks quickly. <br>
I plan to write a kernel on how to use it effectively. </p>
<p><strong>Private LB: 0.9872, 33/4551</strong>; Public LB: 0.9876, 45/4551</p>
<p>Thank you to Jigsaw and Kaggle for hosting a competition with such a respectable goal! I benefited and learned a lot from all the public kernels. Thank you to everyone who posted something. In particular, I would like to thank
<a href="https://www.kaggle.com/hhstrand/oof-stacking-regime">Håkon Hapnes Strand</a>,
<a href="https://www.kaggle.com/jhoward/nb-svm-strong-linear-baseline">Jeremy Howard</a>,
<a href="https://www.kaggle.com/ogrellier/wordbatch-fm-ftrl-using-mse-lb-0-9804">olivier</a>,
<a href="https://www.kaggle.com/mosnoiion/two-rnn-cnn-columns-networks-with-keras">Ion Moșnoi</a>, and
<a href="https://www.kaggle.com/chongjiujjin/capsule-net-with-gru">chongjiujjin</a> for their contributions.</p>
<p>My main insight was in handling the out-of-vocabulary words which I discuss below. I also created a custom <code>NN</code> class that allowed me to streamline my process. Here are some of the key features:</p>
<ul>
<li>monitors AUC directly and implements early stopping</li>
<li>creates logs of model performance</li>
<li>creates out-of-fold predictions</li>
<li>creates test set predictions</li>
</ul>
<p>The <code>NN</code> class made it easy to swap out different neural network architectures, tune them using random search, and make predictions with the optimal hyperparameters. I started the competition late but within one week I had trained over 1000 neural networks using 2 1080Ti's. </p>
<p>The code can be found on my <a href="https://github.com/mattmotoki/toxic-comment-classification">github</a>. </p>
<h3>Models (best private score shown):</h3>
<ul>
<li>CapsuleNet (<em>0.9860 private</em>, 0.9859 public)</li>
<li>RNN Version 1 (<em>0.9858 private</em>, 0.9863 public)</li>
<li>RNN Version 2 (<em>0.9856 private</em>, 0.9861 public)</li>
<li>Two Layer CNN (<em>0.9826 private</em>, 0.9835 public)</li>
<li>NB-SVM (<em>0.9813 private</em>, 0.9813 public)</li>
</ul>
<h3>Ensembling (best private score shown):</h3>
<ul>
<li>Level 1a: Average 10 out-of-fold predictions (as high as <em>0.9860 private</em>, 0.9859 public)</li>
<li>Level 1b: Average models with different embeddings (as high as <em>0.9866 private</em>, 0.9871 public)</li>
<li>Level 2a: LightGBM Stacking (<em>0.9870 private</em>, 0.9874 public)</li>
<li>Level 2b: Average multiple seeds (<em>0.9872 private</em>, 0.9876 public)</li>
</ul>
<h3>Embedding Imputation Details:</h3>
<p>Replacing missing vectors with zeros or random numbers is suboptimal. Using fastText's built-in OOV prediction instead of naive replacement increases the AUC by ~0.002. For GloVe and LexVec embeddings, I replaced the missing embeddings with similar vectors. To do this, I first trained a fastText model on the competition data. </p>
<p><code>
fasttext skipgram -input "${INPUT_FILE}" -output "${OUTPUT_FILE}" -minCount 1
</code></p>
<p>The <code>-minCount 1</code> flag ensures that we get perfect recall; i.e., we get a vector for every word in our vocabulary. We can now use the cosine similarity to find the most similar vector in the intersection of the local vocabulary (from this competition) and the external vocabulary (from pretrained embeddings). </p>
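<p>The imputation step could then look roughly like this (a sketch; <code>pretrained</code> and <code>local_ft</code> are assumed to be dicts mapping words to vectors):</p>
<pre><code>import numpy as np

def impute_missing(pretrained, local_ft, vocab):
    # words present in both the pretrained file and the locally trained fastText model
    shared = [w for w in vocab if w in pretrained and w in local_ft]
    shared_mat = np.stack([local_ft[w] for w in shared])
    shared_mat /= np.linalg.norm(shared_mat, axis=1, keepdims=True) + 1e-8

    filled = dict(pretrained)
    for w in vocab:
        if w not in pretrained and w in local_ft:
            v = local_ft[w] / (np.linalg.norm(local_ft[w]) + 1e-8)
            nearest = shared[int(np.argmax(shared_mat @ v))]   # cosine-nearest shared word
            filled[w] = pretrained[nearest]
    return filled
</code></pre>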
<p>With this technique, GloVe performed just as well if not better than the fastText with OOV predictions; LexVec performed slightly worse but added valuable diversity to ensembles. </p>
<p>Hi Ivan, thanks a lot for catching that! I'll make the change now. </p>
<p>By the way, looking through my submissions, the only model that had the same public and private LB score was NB-SVM. </p>
<p>Hmm, I'm not sure if there are any key points that I can speak of. My relatively stable score might be due to what I didn't do--I didn't tune my models using public leaderboard feedback. </p>
<p>I'm not sure if feeding POS embedddings will help. I didn't have enough time during the competition to try it out, but I think it's a cool idea and it doesn't hurt to try. </p>
<p>Hi thanks! Great question, thanks for asking! </p>
<p>I found those hyperparameters using random search. The <code>NN.predict_kfold</code> function actually facilitates both exploration and exploitation. If we provide tuples in <code>param_lims</code> then the hyperparameters are chosen randomly; if we provide single values then the hyperparameters remain at those values. We can also mix and match tuples with single values to tune just a subset of the hyperparameters. </p>
<h2>Exploration</h2>
<p>In <code>Tune RNN Version 1.pynb</code> we pass tuples</p>
<pre><code>param_lims = {
    'n_filters': (25, 100),
    'n_recurrent': (25, 100),
    'dropout_rate': (0, 0.25),
    'l2_penalty': (-5, -2)}

nn.predict_kfold(param_lims, ...)
</code></pre>
<p>which performs CV with </p>
<ul>
<li><code>n_filters</code> chosen from a discrete uniform distribution over [25, 100]</li>
<li><code>n_recurrent</code> chosen from a discrete uniform distribution over [25, 100]</li>
<li><code>dropout_rate</code> chosen from a continuous uniform distribution over [0, 0.25]</li>
<li><code>l2_penalty</code> chosen uniformly on a log-scale over [0.00001, 0.01]</li>
</ul>
<p>I used this method to try out many different hyperparameter combinations. Most of the hyperparameter combinations were bad but some were good. </p>
<p>If you want to gain an intuition about good hyperparameters, you could take a look at the log files (see attached files above) and try to pick out patterns. The intuition you gain from the log files can then guide you when you refine the search space. </p>
<p>I literally created 100's of those log files. After a while I came up with rules of thumb like "A good range for the number of total trainable parameters is roughly [500K, 1M]". This insight in turn helped me choose reasonable ranges for <code>n_filters</code> and <code>n_recurrent</code>. </p>
<h2>Exploitation</h2>
<p>Once good hyperparameter values are found we can use single values for <code>param_lims</code>. For example, in <code>Refine RNN Version 1.pynb</code>, we have </p>
<pre><code>param_lims = {
    'n_filters': 54,
    'n_recurrent': 68,
    'dropout_rate': 0.052287,
    'l2_penalty': 2.1464e-05}

nn.predict_kfold(param_lims, ...)
</code></pre>
<p>and so no randomization is done. </p>
<h2>Choosing the search scale</h2>
<p>You might be wondering why I searched <code>l2_penalty</code> over a log-scale. I haven't actually taken this course but it looks like it provides a pretty good explanation. </p>
<p><a href="https://www.coursera.org/learn/deep-neural-network/lecture/3rdqN/using-an-appropriate-scale-to-pick-hyperparameters">Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization</a></p>
<p>Thanks YaGana! At a glance, I don't really notice a pattern in my public and private LB scores. It is an interesting point of discussion though. </p>
<p>Hi Eric, yes exactly. The callbacks in the kernels don't actually implement early stopping with respect to the AUC; they just print the AUC. </p> |
Toxic Comment Classification Challenge | 34th, Lots of FE and Poor Understanding of NNs [CODE INCLUDED] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>One of my favorite things to do is write up the solution afterwards, so I'm glad I got a rank good enough that I feel comfortable regaling you with all of our exploits.</p>
<p>This was a very interesting problem, especially hot after finishing <a href="https://www.kaggle.com/c/mercari-price-suggestion-challenge/">the Mercari competition</a> which was also big on text analysis. Prior to this competition and Mercari competition, I had never done textual analysis before.</p>
<p>I originally got excited by this competition and worked on it a bit early, but got turned off when the test set leakage was announced, so I left to focus on the Mercari competition. I was very excited when this competition got extended, because it let me work on this in the final month and apply everything I learned from Mercari. Pretty much all this work was done in the last month of the competition.</p>
<p>Our final model was a weighted average of submodels, one of those <em>blend of blends</em> everyone talks about. Let's dive deeper into that.</p>
<h2>The Final Level 3 Weighted Average</h2>
<p>Final toxic model = 0.713 * Model A + 0.011 * Model B + 0.142 * Model C + 0.134 * Model D</p>
<p>Final severe toxic model = 0.231 * Model A + 0.443 * Model B + 0.279 * Model C + 0.047 * Model D</p>
<p>Final obscene model = 0.146 * Model A + 0.2 * Model B + 0.309 * Model C + 0.345 * Model D</p>
<p>Final insult model = 0.371 * Model A + 0.005 * Model B + 0.250 * Model C + 0.374 * Model D</p>
<p>Final threat model = 0.204 * Model A + 0.524 * Model B + 0.124 * Model C + 0.148 * Model D</p>
<p>Final identity hate model = 0.318 * Model A + 0.412 * Model B + 0.141 * Model C + 0.129 * Model D</p>
<p>For each of the models, the predictions were transformed to ranks prior to being averaged together. This ended up being important for the five-fold CV (boost of ~0.0002), but didn't seem to impact the Public or Private LB scores.</p>
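<p>The rank transform before averaging could be sketched as follows (assuming each element of <code>pred_list</code> is an (n_samples, 6) prediction array and <code>weights</code> are the per-label blend weights above):</p>
<pre><code>import numpy as np
from scipy.stats import rankdata

def rank_average(pred_list, weights):
    # rank-transform each model's predictions per column, then weight-average
    ranked = [np.column_stack([rankdata(p[:, i]) for i in range(p.shape[1])]) / len(p)
              for p in pred_list]
    return sum(w * r for w, r in zip(weights, ranked))
</code></pre>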
<p>The five-fold CV (5CV) was 0.99231, 0.9874 on the public leaderboard, and 0.9872 on the private leaderboard. 5CV for toxic was 0.98877, for severe toxic 0.99242, for obscene 0.99566, for insult 0.99039, for threat 0.99460, and for identity hate 0.99203. Notably, while the 5CV score was 0.0003 higher than any of the submodels, the public and private leaderboard score was identical to Model B, at least down to the four digits that Kaggle reported.</p>
<p>It was cool to see that each one of the four models was the highest weighted model in at least one of the six labels.</p>
<p>The weights for the model were found by adapting <a href="https://www.kaggle.com/tilii7/cross-validation-weighted-linear-blending-errors">Tilli's automatic weighted blending from the Mercari competition</a>. I had talked a lot in this forum about finding weights by hand (trial and error) and had done so with my final weights until the very last day, when I gave the automatic approach a try. Doing so made the model ~0.00003 better, and in a competition where every inch counts, I guess I'm happy I did that.</p>
<h2>The Level 2 Stacker Models</h2>
<p>I was very bullish on doing some Level 2 stacking with tree-based models. While it backfired for me
in the <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/">Porto Seguro competition</a> (my first competition), I thought it would do well here because of the multi-label interactions -- I thought that a model trained for toxic could, for example, help inform not just the toxic prediction but also the predictions for insult, obscene, etc. So thus I made four Level 2 models.</p>
<p>My level 2 models (Models A, B, C, and D) were all trained on the same 757 features -- 334 of those features are 5CV out-of-fold predictions of Level 1 submodels and the other 423 features was handcrafted feature engineering.</p>
<p><strong>Model A was a LightGBM with 20% column sample.</strong> Toxic, threat, insult, and identity hate were trained with 15 leaves and a learning rate of 0.02, whereas severe toxic and obscene were trained with 4 leaves and a learning rate of 0.005. I found this parameter mix via Bayesian hyperparameter tuning. I used early stopping to find the right round limit for each label, but then hardcoded that round limit across all the folds, rather than use early stopping on a per-fold basis, as I found per-fold early stopping to produce some overfitting. Model A had a 5CV of 0.99208, a public leaderboard score of 0.9873 and a private leaderboard score of 0.9871.</p>
<p><strong>Model B was an XGBoost with 10% column sample</strong>, a scale_pos_weight of 5, and 0.01 gamma. Again toxic, threat, insult, and identity hate were best going deep -- I trained each with a learning rate of 0.005 and a max depth of 8! Severe toxic and obscene were best with a max depth of 3 and a learning rate of 0.01. Surprisingly, the correlation between the predictions of the XGB and LGB was just ~0.92, so it looked like they might average well to produce a boost. This boost happened on the 5CV score. Model B had a 5CV of 0.99203 (the total level 3 blend was 0.99231), but the public leaderboard score of 0.9874 and private leaderboard score of 0.9872 matched the results from the total level 3 blend.</p>
<p><strong>Model C was a neural net with bagging</strong>, implemented in Keras, that took the input through a RELU dense layer with 64 units before outputting into the final 6 unit dense layer. Tuning revealed that using dropout, more units, or a more complex architecture resulted in a lower score. The batch size was 32 and it was trained for three epochs. Because it trained so fast (20 seconds per fold), I decided to use bagging and ran the NN 20 times with a random 80% of the rows sampled with replacement for each bagged run. Model C had a 5CV of 0.99140, a public leaderboard score of 0.9868 and a private leaderboard score of 0.9862. While clearly worse than Models A, B, and D, it added well to the average, at least in terms of 5CV score. The use of bagging produced an increase of 0.0001 on the 5CV score of the NN.</p>
<p><strong>Model D was a random forest with 500 trees</strong> using entropy split, balanced class weight, and a 'min_samples_split' of 10 and 'min_samples_leaf' of 5. Toxic, threat, and identity hate had a max_features of 0.1, insult had a max_features of 0.2, and severe_toxic and obscene had a max_features of 0.3. Obscene and threat used max_leaf_nodes of 100, toxic, severe_toxic, and identity_hate had max_leaf_nodes of 200, and insult had max_leaf_nodes of 500. Model D had a 5CV of 0.99120. I trained it last minute on the penultimate day, so I never submitted it to the leaderboard.</p>
<p>Each of these Level 2 models were trained five-fold with the predictions for the fold combined together via rank transformation. This was important for making the fold average 5CV match the overall 5CV, but did not impact the Public or Private LB scores.</p>
<h2>The Feature Engineering</h2>
<p>A lot of Kagglers bemoan feature engineering, but I think it's the one thing I'm really good at and is what makes up for me being bad at neural nets. (If you're good at neural nets, but bad at LightGBM tuning and feature engineering, I think we'd make a great team in a future competition!) I liked feature engineering here because I think these features pick up a lot that TFIDF and neural nets miss. For here, I dumped the entire kitchen sink at the problem and produced 423 unique features:</p>
<ul>
<li><p><strong>Basic feature engineering:</strong> number of words, number of unique words, unique words per word, number of characters, number of capital letters, number of lowercase letters, number of capitals per character, number of lowercases per character, number of stopwords, number of punctuations, punctuation characters per character, number of words that are entirely in uppercase, number of words entirely in lowercase, number of words that are titled (start with capital letter, rest lowercase), number of characters per word, the number of sentences, the average length of a sentence in words, the maximum length of a sentence, the minimum length of a sentence, the standard deviation of sentence length, the number of repeated sentences, and whether or not the sentence contained a foreign word.</p></li>
<li><p><strong>Olivier's features from</strong> <a href="https://www.kaggle.com/ogrellier/lgbm-with-words-and-chars-n-gram"><strong>this kernel</strong></a> <strong>:</strong> whether the description starts with columns, has a timestamp, has a date, has a link, or has an email. I also added has IP address.</p></li>
<li><p><strong>Parts of speech features:</strong> My theory was that every insult, threat, etc. would have a unique part of speech expression, so I looked at the number of nouns, nouns per word, number of conjunctions, number of prepositions, number of adjectives, adjectives per word, number of modal words (e.g., should, will), number of personal pronouns, personal pronouns per word, number of adverbs, number of adverb participles, number of verbs, verbs per word, number of past tense verbs, number of third person singular present tense verbs, whether there is a modal followed by a verb (e.g., "should kill"), whether there is a personal pronoun followed by a singular present-tense verb (this captured things like "you bitch", since "bitch" was erroneously inferred to be a verb), whether there was an adjective followed by a noun, whether there was a noun followed by a preposition, whether there was a verb followed by the word "you" (e.g., "kill you"), and whether there was the word "you" followed by a verb (again, this would capture "you bitch").</p></li>
<li><p><strong>Syllable-level data:</strong> I thought that toxic comments would be much simpler than other comments, so I captured this with syllable-level data. I looked at the total number of syllables, the average syllables per word, the maximum number of syllables in a word, the standard deviation of syllables across words, the number of big words (more than two syllables), the number of simple words (one syllable), and number of big words per word.</p></li>
<li><p><strong>Readability level:</strong> Along the same lines as syllable info, I used some formal readability scores. I used <a href="https://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readability_tests">the two Flesch-Kincaid tests</a>, <a href="https://en.wikipedia.org/wiki/SMOG">SMOG grade</a>, <a href="https://en.wikipedia.org/wiki/Linsear_Write">Linsear-Write grade</a>, the <a href="https://en.wikipedia.org/wiki/Automated_readability_index">Automated Readability Index</a>, and <a href="https://en.wikipedia.org/wiki/Gunning_fog_index">the Gunning fog index</a>.</p></li>
<li><p><strong>Manual bad word list:</strong> I looked through <a href="https://storage.googleapis.com/kaggle-forum-message-attachments/4810/badwords.txt">a bad word list</a> and <a href="https://github.com/t-davidson/hate-speech-and-offensive-language/blob/master/lexicons/hatebase_dict.csv">another one</a> for manual bad word features to highlight. I also took a look through our TFIDF vectorizers to see which bad words scored the worst. My partner then looked at the relative frequencies of every word to see what words occur the most in each label. We then manually binned these words into higher-level buckets (e.g., obscene words, threat words) and created higher-level features (e.g., has_obscene_word). The idea here was to try to help the model understand certain bad words with really low frequency by putting them into a bin.</p></li>
<li><p><strong>AFINN scores:</strong> I looked at <a href="http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=6010">the AFINN sentiment scores</a>, and calculated the total AFINN score across all words in the comment, plus the mean AFINN score on a word-by-word basis, the maximum AFINN score, the minimum AFINN score, the standard deviation of scores, the number of words with a score (instead of 0), the number of words with a positive score, the number of words with a negative score, the number of words with a score per word, the number of positive words per word, the number of negative words per word, and the ratio of positive to negative words.</p></li>
<li><p><strong>Sentiment:</strong> I also added the TextBlob sentiment polarity score.</p></li>
</ul>
<p>My general approach here was to use a little bit of theory to guide what I was doing and pick features that felt sensible, whereas I would let TFIDF pick up whatever it wants. I manually tested the individual feature AUC of every feature to ensure that it had some signal… I dropped things that didn't score at least 0.55.</p>
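<p>The per-feature screening could be done along these lines (a sketch; <code>feature_df</code> and <code>y</code> are placeholders for the feature table and the 6-column label matrix):</p>
<pre><code>from sklearn.metrics import roc_auc_score

def feature_auc(feature, target):
    # AUC of a single raw feature against one binary label; flip if below 0.5
    auc = roc_auc_score(target, feature)
    return max(auc, 1 - auc)

# keep features whose best single-label AUC is at least 0.55
keep = [c for c in feature_df.columns
        if max(feature_auc(feature_df[c].values, y[:, i]) for i in range(y.shape[1])) >= 0.55]
</code></pre>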
<p>The feature engineering was against some tough competition competing with out of fold predictions, but I was pleased to see that when feature impact was run, a few of the custom features did beat out a good chunk of the models, and that deep trees were working for some of the labels and really using a lot of the features. Setting really low column sample rates helped a lot of these features shine and produce more diverse trees.</p>
<h2>Level 1 Models</h2>
<p>For the level 1 models, I did a whirlwind tour of the public kernels. I wouldn't be anywhere near where I was without these kernels, as no one on our team really understands neural nets. We had to use Google to even know what LSTM or GRU models were -- I'd never heard of them prior to the competition! For this competition, we used 53 different models:</p>
<ul>
<li><p><strong>16 different logistic regression models (112 features total).</strong> I varied whether the model was trained with sag, with L1, with L2, or with <a href="https://www.kaggle.com/jhoward/nb-svm-strong-linear-baseline">the naive bayes adjustment</a>. I varied the TFIDF matrix used, whether it was based on words (1-2 ngrams and 200K features), a cleaned word list (using <a href="https://www.kaggle.com/prashantkikani/pooled-gru-with-preprocessing/code">the preprocessing from this kernel</a>) with stopwords removed (1-2 ngrams and 200K features), based on characters (2-6 character grams and 200K features), or based on a combination of characters and words like <a href="https://www.kaggle.com/tunguz/logistic-regression-with-words-and-char-n-grams">in this kernel</a>. The best model was a LR with the sag solver on the combination of characters and words, with a 5CV of 0.98587 on the original six labels. Each logistic regression was set up to produce seven labels, one for each of the original six and then a seventh "non-toxic" label that was 1 if all the labels were 0, or 0 otherwise. (I don't think the seventh label helped the final score that much.)</p></li>
<li><p><strong>3 LightGBMs on TFIDF matrices (18 features total).</strong> I used <a href="https://www.kaggle.com/peterhurford/lightgbm-with-select-k-best-on-tfidf">my own LightGBM with SelectKBest approach</a> that did well for me in the Mercari competition. I trained one LightGBM on the word TFIDF matrix (1-2 ngrams and 200K features), another on the character TFIDF matrix (2-6 ngrams and 200K features) after <a href="https://www.kaggle.com/prashantkikani/pooled-gru-with-preprocessing/code">the preprocessing from this kernel</a>, and a third LightGBM on the union of character and word vectors (10K words and 50K characters, same as in the LR approach). Each feature set was reduced via SelectKBest and a Logistic Regression to about 10000 best features, as seen in my kernel. The part that I didn't include in my kernel that helped a lot was using dart boosting for the LightGBM and a 0.1 feature fraction, which allowed training for hundreds of rounds. The best model here was on the union TFIDF set, with a 5CV of 0.98468.</p></li>
<li><p><strong>A LightGBM on the union TFIDF matrix plus all my feature engineering.</strong> I took the same LightGBM from before and added my 423 additional features to it. This improved the 5CV score to 0.98779.</p></li>
<li><p><strong>A LightGBM on the Twitter embedding, SVD, and all my feature engineering.</strong> Rather than training the LightGBM on the TFIDF matrix and SelectKBest directly, I instead transformed the TFIDF matrix using the Twitter embedding (glove.twitter.27B.200d.txt) and using TruncatedSVD with 200 dimensions. I then added my 423 additional features to it from the custom feature engineering, for 823 features total. This made for my best non-NN single model, with a 5CV of 0.98788.</p></li>
<li><p><strong>A factorization machine from</strong> <a href="https://www.kaggle.com/anttip/wordbatch-1-3-3-fm-ftrl-lb-0-9812"><strong>this kernel</strong></a><strong>.</strong> I used it as-is but without the prediction chaining… I tried to add my own feature engineering to the bins, but it ended up overfitting quite badly. I also removed the prediction chaining because that was also overfitting for me. 5CV of 0.98730, but 0.9815 on the public leaderboard and 0.9813 on the private leaderboard.</p></li>
<li><p><strong>A Ridge classifier</strong>, loosely inspired by <a href="https://www.kaggle.com/rednivrug/5-fold-ridge-oof/code">this kernel</a>, but using the same features (and binning) as the factorization machine. 5CV of 0.98641, but 0.9813 on the public leaderboard and 0.9807 on the private leaderboard.</p></li>
</ul>
<p>At this point, I was able to combine all my base models and feature engineering into a Level 2 LGB and get a 5CV of 0.98995, with 0.9848 on the public leaderboard. I knew the only way to go further would be to use neural nets, but I didn't understand them yet. So it was time to study up and go through the kernels. I used:</p>
<ul>
<li><p><strong>The Pooled GRU from <a href="https://www.kaggle.com/yekenot/pooled-gru-fasttext/code">this kernel</a>.</strong> 5CV 0.98698.</p></li>
<li><p><strong>A modified version of <a href="https://www.kaggle.com/yekenot/pooled-gru-fasttext/code">that Pooled GRU</a></strong> to use 200K max features (up from 30K), 200 max length (up from 100), and 0.3 spatial dropout instead of 0.2. I also used the Twitter-200d-27B embedding instead of Crawl-300d-2M and I used <a href="https://www.kaggle.com/fizzbuzz/toxic-data-preprocessing">Zafar's cleaned text</a>. This was all based on the theory I read in the discussion that higher max features, higher max length, and higher spatial dropout worked better, and the Twitter embedding was better, especially with a trainable embedding. This was my second best single model by 5CV, with a 5CV score of 0.98920, but it didn't do so well on the leaderboard, with a public leaderboard score of 0.9848 and a private leaderboard score of 0.9832.</p></li>
<li><p><strong>Another modified version of <a href="https://www.kaggle.com/yekenot/pooled-gru-fasttext/code">that Pooled GRU</a></strong> with everything the same as in the kernel, but with Twitter-200d-27B embedding instead of Crawl-300d-2M, 0.5 spatial dropout instead of 0.2, and trained for an extra epoch (3 instead of 2). 5CV 0.98721.</p></li>
<li><p><strong>The FastText GRU from <a href="https://www.kaggle.com/atikur/simple-gru-with-fasttext-lb-0-9847">this kernel</a>.</strong> 5CV 0.98880, public leaderboard 0.9852, but private leaderboard 0.9839.</p></li>
<li><p><strong>A modified version of <a href="https://www.kaggle.com/atikur/simple-gru-with-fasttext-lb-0-9847">that FastText GRU</a></strong>, with 200 max length instead of 175, Twitter-200d-27B embedding instead of Crawl-300d-2M, a trainable embedding, and <a href="https://www.kaggle.com/fizzbuzz/toxic-data-preprocessing">Zafar's cleaned text</a>. I had to drop the epochs down to 12 from 19. The 5CV showed improvement at 0.98972, making it my best single model. However, the public leaderboard and private leaderboard were worse at 0.9848 and 0.9832 respectively.</p></li>
</ul>
<p>At this point I was having fun tweaking the kernels slightly and was amazed at how small tweaks could produce a new, equally good model with a correlation of ~0.95. It seemed like each NN I was adding was producing a solid boost for my final ensemble, usually ~0.0002 or so. But I didn't really know what I was doing, so I dumped in more kernels. As long as the correlations were in check, it would give me more model diversity, which seemed like a good thing!</p>
<p>I used <a href="https://www.kaggle.com/antmarakis/bi-lstm-conv-layer">the Bidirectional GRU from this kernel</a> (5CV 0.98623), <a href="https://www.kaggle.com/zhbain/pooled-gru-fasttext-6c07c9/code">the double GRU from this kernel</a> (5CV 0.98646), <a href="https://www.kaggle.com/shujian/textcnn-2d-convolution/code">the 2DConv CNN from this kernel</a> (5CV 0.98807), <a href="https://www.kaggle.com/fizzbuzz/bi-lstm-conv-layer-lb-score-0-9840/">the Bi-LSTM from this kernel</a> (5CV 0.98863, Public Leaderboard 0.9846, Private Leaderboard 0.9836), <a href="https://www.kaggle.com/michaelsnell/conv1d-dpcnn-in-keras/code">the DPCNN from this kernel</a> (5CV 0.98655), <a href="https://www.kaggle.com/mosnoiion/two-rnn-cnn-columns-networks-with-keras/notebook">the RNN+CNN from this kernel</a> (I rewrote it in Python) (5CV 0.98722), <a href="https://www.kaggle.com/fizzbuzz/capsule-net-with-gru-on-preprocessed-data/notebook">the Capsule net from this kernel</a> (5CV 0.98358), and <a href="https://www.kaggle.com/yekenot/pooled-attention-lstm-fasttext/code">the attention layer LSTM from this kernel</a> (5CV 0.98472).</p>
<p>Lastly, I played the modification game again: I took the RNN+CNN and upped max features to 200K from 30K, maxlen to 200 from 100, used Twitter-200d-27B instead of glove.840B.300d, and swapped to a trainable embedding. The score increase was negligible (5CV 0.98722 -> 0.98725), but the result was surprisingly uncorrelated.</p>
<p>Some of the higher scorers noticed that different embeddings would be the key to model diversity. I didn't notice that, but I did realize it would be at least <em>somewhat</em> important, so I ended up with three different embeddings across the models -- Crawl-300d-2M, glove.840B.300d, and Twitter-200d-27B. I wonder if training more NNs on embeddings I didn't use would have upped my score.</p>
<p>At this point, adding more NNs was no longer increasing my ensemble score at all, so I stopped. Well, not quite. I did throw in <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/51836">all the OOF predictions from the 16 models provided by Neptune</a>. This gave me some model diversity, as I think they trained some GPU-only models (whereas I couldn't figure out how to work a GPU) and trained some different embeddings (like fasttext).</p>
<h2>Things I Tried that Didn't Work</h2>
<ul>
<li><p>In <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/46035">the External Data thread</a>, I was surprised to learn that <a href="https://www.perspectiveapi.com/">the PerspectiveAPI was fair game</a>, so I painstakingly queried the ConvAI API for every row. It took 18 hours to run with the rate limit. However, it turned out that all the results were overfit, probably because the model was trained on the same train data as our data, so I couldn't get out of fold predictions.</p></li>
<li><p>I also tried to use the additional toxicity and aggression labels <a href="https://www.kaggle.com/sreeram004/test-lr-with-convai-dataset">as explored in this kernel</a>, but I noticed that the data was the same as in our train set but with different preprocessing, and I had trouble linking the data together. I trained a bunch of Ridge regressors on it, but the results would just overfit.</p></li>
<li><p>I explored a few different Level 2 and Level 3 architectures, where I stacked LightGBMs on top of each other and used the feature engineering more in the submodels, but it turned out that using all the feature engineering at the top level was both simpler and higher scoring.</p></li>
<li><p>I tried putting the Twitter embeddings and 200-dimension SVM on the second level tree models, but it looked like it was overfitting and the overall dataset ballooned too large (2.4GB) to effectively tune.</p></li>
<li><p>I couldn't get any worthwhile SVM model to work.</p></li>
</ul>
<h2>Things I Wish I Had More Time to Explore</h2>
<p>It would have been nice to know more about the various NNs and how to build and tune them. I was mildly successful at making my own NNs with better 5CV scores and good model correlation, but I couldn't improve upon the kernels in the leaderboard and emphatically did not know what I was doing. Seeing <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/52581">the best single model private leaderboard scores</a>, my single models had a lot of room to improve. My best private leaderboard score for a single model was a GRU with private leaderboard 0.9841, which is a far cry from the 0.986X's being reported by some. Unfortunately, I just don't know yet how to make a GRU good, and I'm not exactly sure where to learn.</p>
<p>If I had more time, I would have explored dropping some of the features or submodels to see if that improved the model scores, or maybe even made specific feature sets unique for each label.</p>
<p>This is a bit rough since the competition scores were so close, I feel like exploring any one thing could have been enough to send us flying up a few ranks.</p>
<p>Another idea my teammate had that we're not sure is any good is repetitively resampling and augmenting the data set with the mislabeled / borderline toxic comments. The high AUC seems mostly driven by very easy to identify good comments and we wonder if removing the bulk of them will reduce some of the overfitting on the “good” side (probably not).</p>
<h2>Things I Think About With The Benefit of Hindsight</h2>
<ul>
<li><p>In hindsight, I should have explored training on a wider variety of embeddings. I never thought about byte encoding embeddings or combining embeddings, or just being systematic about training with a bunch of different embeddings. I also wonder how training different embeddings with LightGBM would have helped.</p></li>
<li><p>I initially thought about using the translation augmentation approach, but didn't think it would be that important and didn't have time to explore it. Seeing it being used by the highest ranked people makes me really wish I had more time to explore this.</p></li>
<li><p>I was too nervous about pseudo-labeling the test set and worried that it would overfit, so I didn't do it. I should have at least explored it and seen how it changed the 5CV and leaderboard scores. If it increased both substantially, I could have convinced myself to use it for at least one of my two submissions. ...But I didn't look into it. In hindsight, another mistake. Given how useful pseudo-labeling was, I wonder if it would have been useful to pseudo-label more of the Wikipedia comment dumps that were not in the train or test set.</p></li>
</ul>
<h2>Key Takeaways</h2>
<ul>
<li><p>I don't have to understand NNs to get a good score with them, but I'll probably need to understand them to get a gold medal someday.</p></li>
<li><p>Kernels are great. I checked kernels every day and still ended up missing some stuff!</p></li>
<li><p>It seems like taking almost every single public kernel, training them with 5CV, combining them with a little bit of special secret sauce and a good level 2 stacking scheme, is enough to do well at Kaggle.</p></li>
<li><p>Even slight tweaks to NNs can produce new models with low (~0.95) correlations.</p></li>
<li><p>Variety in embeddings is important for model diversity.</p></li>
<li><p>Varying input data is just as important, if not more important, than varying the model being used.</p></li>
<li><p>Pseudo-labeling and train/test augmentation can work.</p></li>
<li><p>Models that are weak on level 1 can be the best contributors to level 2, and models that are weak on level 2 can still be the best contributors to level 3.</p></li>
<li><p>Feature engineering can be really useful.</p></li>
<li><p>DART mode on LightGBM can be really useful.</p></li>
</ul>
<h2>Code</h2>
<p>The complete code for the solution can be found here: <a href="https://github.com/peterhurford/kaggle-toxic_comment">https://github.com/peterhurford/kaggle-toxic_comment</a></p>
<p>Everything (including the NNs) was trained on an Amazon c5.9xlarge. The total runtime of all the files should be under a day. Nearly all the models execute five folds in under 20 minutes per fold, with the exception of a few of the NNs.</p>
<h2>Thanks</h2>
<p>In conclusion, thanks to everyone who shared their kernels. I was really impressed by how forthcoming everyone was in discussing their solutions. I'd be way down the leaderboard without you. Special shoutouts to Zafar, Olivier, Bojan, Tilli, and Yekenot. I'll endeavor to share more of my approach in kernels in future competitions to pay this forward.</p>
<p>Also thanks to Jigsaw for providing such an interesting competition. I've never seen AUCs in the >0.9 range before, so that was fun to play with. Also thanks for gracefully handling pretty much every possible way the competition could fail.</p>
<p>Ultimately, despite the difference in train and test, the competition ended up being pretty straightforward. Unlike for Porto Seguro, my CV mostly lined up with the Public and Private leaderboards, and I was able to successfully pick my two best private leaderboard submissions as my two submissions.</p>
<p>This competition allows you to upload CSV submission files generated from any platform, including <em>but not limited to</em> Kaggle kernels. I ran everything on an AWS c5.9xlarge and it completes end-to-end in ~20 hours. You can see all the code for generating the submission here: <a href="https://github.com/peterhurford/kaggle-toxic_comment">https://github.com/peterhurford/kaggle-toxic_comment</a></p>
<p>I'm not sure what you mean. You do have access to the private leaderboard. When you download <code>test.csv</code>, the CSV you get contains <em>both</em> the public and private test data... you just don't know which is which. You make predictions for <em>both</em> and re-upload them back to Kaggle as CSVs (either within a kernel or not). Kaggle then scores your submission CSV for both the public and private leaderboard... you just don't get to know your private leaderboard score until after.</p>
<p>I see you competed in the Mercari competition - as did I. My guess is that is why you are confused. That competition was special, in that you had to submit everything through a kernel and only kernels could be used to make submissions. This competition was not like that -- you can submit in a kernel or outside a kernel, using whatever you want. Just make a CSV. Most Kaggle competitions are like this and not like Mercari.</p> |
Toxic Comment Classification Challenge | 3rd Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is a very brief general overview of our model(s). We don’t go too deep into the technical details for several reasons: the most technically interesting model(s) have already been covered in <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/52644">Alex’s post</a>, most others are pretty straightforward variations on the themes seen in the kernels, and we’ll probably write a longer “official” Kaggle post soon, where we will likely go into some more detail about our model(s). </p>
<p><strong>Bojan’s Submodel:</strong></p>
<p>I have done my usual “kitchen sink” approach to this competition - tried out several different feature engineering approaches, algorithms, embeddings, etc. Most of these were very similar to the stuff that has been posted in the kernels - simple word preprocessing, word and character tokenization, use of LogisticRegression, GRUs, LSTMs, etc. Offline I tried to push the talkingdata LR script even further than in my Kaggle kernel. With just stacking, different choices of n-gram and total feature length, metafeature interaction terms, and several stacking layers (I think I managed to get to the 4th layer), I was able to get close to 0.9810 on public LB. I blended this with several GRU and LSTM neural nets, and managed to get a score of 0.9870 on public LB / 0.9865 on private. My models include:</p>
<ul>
<li>Different 1-6 char and 1-4 word tf-idf vectorizations with Logistic Regression (see the sketch after this list)</li>
<li>100-300 word length and 100-300 vector size embeddings using glove and fasttext </li>
<li>64-200 length GRU, LSTM and GRU + CNN network architectures</li>
<li>One of my models included two additional language translations, but, alas, I did not pursue this too far.</li>
</ul>
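<p>As a rough illustration of the first bullet above, here is a minimal word + char tf-idf + LogisticRegression sketch. The n-gram ranges follow the bullet; the toy data, the single-label loop and all other settings are assumptions for illustration only, not the author's actual script.</p>
<pre><code># Hedged sketch of a word + char tf-idf Logistic Regression baseline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack

train_text = ["you are nice", "you are an idiot", "have a good day", "i hate you"]
train_toxic = [0, 1, 0, 1]   # placeholder labels for one of the six target columns

word_vec = TfidfVectorizer(analyzer='word', ngram_range=(1, 4), sublinear_tf=True)
char_vec = TfidfVectorizer(analyzer='char', ngram_range=(1, 6), sublinear_tf=True)
X = hstack([word_vec.fit_transform(train_text), char_vec.fit_transform(train_text)]).tocsr()

clf = LogisticRegression(C=1.0, solver='liblinear')
clf.fit(X, train_toxic)
print(clf.predict_proba(X)[:, 1])   # out-of-fold predictions would feed the stacker
</code></pre>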
<p><strong>Andre’s Submodel:</strong></p>
<p>Andre used essentially three different models: A two-bidirectional-GRU-layer neural net, a two-bidirectional-LSTM-layer neural net, and an XGBoost model that was trained on about 100 “analytical” features - various features that were constructed using the statistics of the words and characters in the text. Andre also used his own set of preprocessing/language normalization rules. Each model was averaged over many seeds, and the final blend of these three models scored 0.9870 at the public LB.</p>
<p>When Andre and I joined forces, we were hoping for a small boost from our blended model. To our surprise, blending our models got us to 0.9876 on the public leaderboard. That was one of my first hints that blending different approaches and solutions can be very effective in this competition. That blend still managed to get 0.9874 on private leaderboard. </p>
<p><strong>Ryan and Alex Submodel</strong> </p>
<p>Ryan and Alex did a very extensive amount of feature engineering, text normalizations, and embeddings, which resulted in that awesome single 0.9872 model that <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/52644">Alex talked about in his post</a>. Ryan also talked a bit more about what they were doing with various models in <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/52692">his post</a>. Their models were the main backbone of our final model. With stacking several of their models they managed to get to 0.9878 on public LB. Their main three models were:</p>
<ul>
<li>Bidirectional two-level model with LSTM as the first layer and GRU as the second layer, with concatenated facebook fasttext and glove twitter vectors</li>
<li>Char model with the same architecture as the word model (different hyperparameters though)</li>
<li>Attention model. This model used the weighted attention layer from the DeepMoji project. It calculates weighted attention on the input word sequence and also on the input sequence after a few convolutions.</li>
</ul>
<p>Other models that they used:</p>
<ul>
<li>Tfidf model</li>
<li>Model based on their own trained fasttext vectors on wikipedia comment corpus.</li>
<li>Vowpal Wabbit model</li>
</ul>
<p>After merging, we worked on several different stacked and blended models, using between 20 and 35 submodels for stacking, and up to 20 total different L1 and L2 models for blending. We used 10-fold CV scheme for stacking and blending. Unfortunately, none of our stacking or blending solutions were able to break through the elusive 0.9880 barrier, but were getting frustratingly close to it. </p>
<p><strong>The Final Model</strong></p>
<p>The final model was a blend of our various solutions. We combined the following three components:</p>
<ul>
<li>A blend of our best-scoring stacking and local blending solutions, all of which scored about 0.9879 on the public leaderboard and between 0.99270 and 0.99308 on local CV. This component was 60% of our final solution.</li>
<li>Bojan’s best individual blend. About 20% of the final blend.</li>
<li>Andre’s best individual blend. About 20% of the final blend.</li>
</ul>
<p>We then set 1240 non-English comments to 0. This improved our blend slightly. The final blend scored 0.9881 on public leaderboard and 0.9880 on private. It was very rewarding to see that this blend was very reasonable and it hardly changed after the shakeup. I guess my blending skills are not that bad after all. :)</p>
<p>So what about “Adversarial Autoencoder”? Why did you guys call yourself that?</p>
<p>Good question. We wanted to rename the team “Generative Adversarial Denoising Autoencoder”, but that name was too long and wouldn’t fit in its entirety on Kaggle’s leaderboard. </p>
<p>So did you actually end up using the said Generative Adversarial Denoising Autoencoder?</p>
<p>No. We just thought it was a cool name. And would throw people off. It was a joke. There are a lot of really pretentious people in Machine Learning, and a simple name like “Blend of Blends of Blends” seems to be too beneath their lofty intellectual mindset. </p>
<p>“Generative Adversarial Denoising Autoencoder” is also a great term to toss around in a job interview. If someone asks you, for instance, how would you deal with imbalanced dataset, just say “I’d use a Generative Adversarial Denoising Autoencoder”. Missing data? Generative Adversarial Denoising Autoencoder. Dimensionality reduction? Yup, you’ve guessed it - Generative Adversarial Denoising Autoencoder! Try it! It really works!</p>
<p>Thank you Mark. Your comment really means a lot to me.</p> |
Toxic Comment Classification Challenge | 5th place Brief Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all congrats to the winners for a solid performance and obviously my teammates for this competitive top 10 finish. There was lots of buzz and sharing in this competition and we have learnt a lot - credit should be given to all people who shared stuff - you are all part of this solution :)</p>
<p>We have implemented most of the kernels in this competition - we found the Neptune examples to perform very well on the leaderboard, so thank you very much for the share!</p>
<p>The best linear model was around 0.9814 at LB and It was an LSVC (linear support vector machine) model.</p>
<p>Best lightgbm was around 0.9830 and had up to 2-gram words, stemming and up to 6-char character n-grams, along with some features generated from word2vec and pre-trained embeddings. </p>
<p>Our best NN was a 2-level bidirectional gru followed by max pooling and 2 fully-connected layers. Core improvements over baseline were exponential learning rate annealing, skip-connections around hidden layers, and <strong>900d embeddings that consisted of concatenated glove, fasttext with subword information and fasttext crawl</strong>. This scored 0.9865 (and 0.9861 private).</p>
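<p>A minimal Keras sketch of the architecture described above (two bidirectional GRU layers, max pooling, two fully-connected layers, frozen 900d embeddings). All layer sizes, the zero embedding matrix, and pooling over both GRU layers as a stand-in for the skip-connections are placeholder assumptions, not the team's exact model.</p>
<pre><code># Hedged sketch of the 2-level BiGRU model (sizes are assumptions)
import numpy as np
from tensorflow.keras.layers import (Input, Embedding, Bidirectional, GRU,
                                     GlobalMaxPooling1D, Dense, concatenate)
from tensorflow.keras.models import Model

max_len, vocab_size, emb_dim = 200, 100000, 900   # 900d = glove + 2 fasttext variants concatenated
embedding_matrix = np.zeros((vocab_size, emb_dim))  # placeholder for the real pre-trained vectors

inp = Input(shape=(max_len,))
emb = Embedding(vocab_size, emb_dim, weights=[embedding_matrix], trainable=False)(inp)
gru1 = Bidirectional(GRU(128, return_sequences=True))(emb)
gru2 = Bidirectional(GRU(128, return_sequences=True))(gru1)
pooled = GlobalMaxPooling1D()(concatenate([gru1, gru2]))  # pool over both layers (skip-connection stand-in)
fc = Dense(256, activation='relu')(pooled)
fc = Dense(128, activation='relu')(fc)
out = Dense(6, activation='sigmoid')(fc)                  # six toxicity labels
model = Model(inp, out)
model.compile(loss='binary_crossentropy', optimizer='adam')
</code></pre>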
<p>Other notable mentions include a char-level DPCNN and RNN trained over wordparts produced by byte-pair encoding. Other strong NNs were based on the implementation shared by Pavel Ostyakov</p>
<p>Also the lstm from Neptune with an additional input for chars (dual input) and stemming had a score near 0.9860.</p>
<p>In meta modelling we had 2 layers of stacking:</p>
<ol>
<li><p>The first layer consisted of more than 120 base models (all x6 columns), mostly deep NNs trained with slightly different architectures and different embeddings. We trained a 2-hidden-layer NN, an ET and a lightgbm (per target-column) model. Then the same 3 models including some additional count features for uppercase words. And finally the same 3 models but with all input data transformed into ranks, as well as a linear combination of some of the input (120) models. So an uplift of about 0.0012 (from 0.9865 to 0.9877) came from this layer of stacking.</p></li>
<li><p>The second layer is essentially a weighted average of all these models after transforming them to ranks (a further +0.0004, to 0.9881; see the sketch below).</p></li>
</ol>
<p>Ps. Third Layer did not add value.</p>
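<p>A minimal sketch of the rank-transform weighted average mentioned in point 2 above. The file names and weights are placeholders, not the team's actual values.</p>
<pre><code># Hedged sketch: weighted average of submissions after a rank transform
import pandas as pd
from scipy.stats import rankdata

label_cols = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
subs = [pd.read_csv(f) for f in ['l2_nn.csv', 'l2_lgb.csv', 'l2_et.csv']]  # hypothetical L2 outputs
weights = [0.4, 0.4, 0.2]                                                  # hand-tuned / LB feedback

blend = subs[0].copy()
for col in label_cols:
    # rankdata maps scores to ranks; dividing by length keeps values in (0, 1]
    blend[col] = sum(w * rankdata(s[col]) / len(s) for w, s in zip(weights, subs))
blend.to_csv('rank_blend.csv', index=False)
</code></pre>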
<p>Also this is a photo of our ensemble:</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/300446/8841/stacking_5th_place.png" alt="5th Place 2-layer Stacking"></p>
<p>For the first part - when I said augmentations, I meant translations to different languages. For the second part, I will ask, as I don't own that code. </p>
<ol>
<li>Yes, the same 5-fold CV I used throughout the competition.</li>
<li>OOF-based folds.</li>
<li>Manually - with a bit of leaderboard feedback too.</li>
</ol> |
Toxic Comment Classification Challenge | Blender | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Since this competition has so many blending kernels, I'd like to share my blender :).</p>
<p>You can call it a blender for a whole directory, or a SUPER-Blender. Enjoy :)</p>
<pre><code>
import os

import numpy as np
import pandas as pd
from sklearn.preprocessing import minmax_scale

WEIGHTED = False

# Blend every submission CSV found in this directory
submissions_path = "../newnn/fusion"
# submissions_path = "../allresults"
all_files = os.listdir(submissions_path)
print(all_files)

outs = [pd.read_csv(os.path.join(submissions_path, f)) for f in all_files]
print(len(outs))

index = 'final'
label_cols = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']

# Rescale each submission to [0, 1] so differently calibrated models blend fairly
for sub in outs:
    sub[label_cols] = minmax_scale(sub[label_cols])

blend = outs[0].copy()
blend[label_cols] = blend[label_cols] * 0.0

weights = [5, 6]  # one weight per file (only used when WEIGHTED is True)
# weights = [1, 1, 1, 1, 1, 0.8, 1.2]
if WEIGHTED:
    avgrate = 1 / np.sum(weights)
else:
    avgrate = 1 / len(outs)
print(len(weights))

# Weighted or simple average of all submissions
if WEIGHTED:
    for sub, weight in zip(outs, weights):
        blend[label_cols] += sub[label_cols] * avgrate * weight
else:
    for sub in outs:
        blend[label_cols] += sub[label_cols] * avgrate

blend.to_csv("../blendtemp/new-" + str(index) + ".csv", index=False)
</code></pre> |
Toxic Comment Classification Challenge | Solution sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>All, thanks for a wonderful and a nail-biting contest yet again. Kagglers constantly keep pushing the boundary of what can be achieved with machine learning, and I enjoy being put into such a challenge. It's phenomenal how I once thought 0.9860 would be a good enough score but ended up getting close to 0.9870 on public lb, only because competitors here keep challenging us to squeeze out more from our work, and show that there is no end to improvements.</p>
<p>I'd love to see more solution sharing from the top 50 finishers. Currently there are just 2 (and a half, may be) solutions posted. Hopefully more will come. It's a NLP contest; pretty sure a wide range of interesting ideas were implemented from which we all can learn!</p> |
Toxic Comment Classification Challenge | code , solution and notes(without blend) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Toxic Comment Classification Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You're welcome~ , happy to hear that!</p>
<p>At the bottom is our team's solution, including a detailed written report. Our team's solution is based on stacking; we only weight-averaged some highly correlated models, and used 7 models in the ensemble.</p>
<p>And our preprocessing is very simple; maybe that is a point that could be improved.</p>
<p>The detailed report includes some discussion about the attention layers mostly used in this competition (on page 5 of the report). The experiments on this were done after the competition, so maybe the results or inferences can be extended to the next competition or other implementations. I am also happy to discuss this topic if anyone has a deeper view of it :)</p>
<p>My report focuses more on the model architecture and feature engineering (NLP); if you do not have much time, you can focus on those parts.</p>
<p><a href="https://github.com/JasonEricZhan/Kaggle-project-list/tree/master/toxic_comment">https://github.com/JasonEricZhan/Kaggle-project-list/tree/master/toxic_comment</a></p>
<p>And really, thanks to the Kaggle community and congrats to all the winners!</p>
TrackML Particle Tracking Challenge | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi,
it's been a very interesting competition and as always these challenges are a very good place to learn.
I tried many different approaches but eventually, as many of you, I used clustering (dbscan) on unrolled helix with z shifting and track extension.</p>
<p>I don't want to explain here the tricks that I used on the clustering part (some of them have already been shared by @yuval and @CPMP) but I do want to explain the feature that allowed me to increase the score from approx. 0.68 (obtained using the clustering algo) to about 0.76, namely a supervised track extension. </p>
<p><strong>Supervised track extension</strong>
The base code is similar to the one shared by @HengCherKeng, where I integrated a gradient boosting tree (LightGBM) to establish whether a given hit belongs to a given track. As input it takes approx. 60 features of the track (constructed using the clustering algo) and the proposed hit, and it outputs the probability that the hit belongs to the track. In this way, I could take into consideration a larger number of potential hits and let the algorithm decide which one is the best candidate.
I trained the decision tree over only 10 events and I used pretty naive features, meaning that it can be improved much more.</p>
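<p>A minimal sketch of the idea: a LightGBM classifier scoring whether a candidate hit belongs to a track. The feature construction below is a toy placeholder, not the ~60 features actually used.</p>
<pre><code># Hedged sketch of a hit-vs-track membership classifier (toy features only)
import numpy as np
import lightgbm as lgb

def pair_features(track_hits, candidate_hit):
    """Toy features: track centroid, candidate position, and their offset."""
    centroid = track_hits.mean(axis=0)
    return np.concatenate([centroid, candidate_hit, candidate_hit - centroid])

# X: one row per (track, candidate hit) pair; y: 1 if the hit truly belongs to the
# track (taken from the truth file of a few training events), else 0.
X = np.random.rand(2000, 9)
y = np.random.randint(0, 2, 2000)

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X, y)
p_belongs = clf.predict_proba(X)[:, 1]   # keep the candidate with the highest probability
</code></pre>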
<p>From a performance point of view, the clustering algorithm takes about 1h per event on a single core, while the extension algorithm takes about 20min (at inference time). </p>
<p>If you are interested, in the next days I will share the code.</p>
<p>Thanks to all of you who shared your ideas during the competition, cheers!</p>
<p>NEWS!</p>
<p>The code and documentation are available on <a href="https://github.com/andri27-ts/GoldTrackML">https://github.com/andri27-ts/GoldTrackML</a> </p>
<p>Yes, sure. I was talking more about the difficulty from a computational time point of view.</p>
<p>Technically at the end, I used gradient boosting tree (LightGBM). If you didn't use Heng's code, how did you choose the potential hits that could belong to a given track?</p>
<p>Yes, I agree that track fitting and track extension are basically the same concept, but I think it can be hard to use supervised methods for track seeding. In that case you have to redefine the problem, because it is no longer a classification task as it is for track extension. Thanks, and congratulations to you too.. and it wasn't my plan to catch you on the last day, but I didn't have time in the previous weeks</p>
<p>The code and the documentation are on Github <a href="https://github.com/andri27-ts/GoldTrackML">https://github.com/andri27-ts/GoldTrackML</a> !!
Have a nice day!</p> |
TrackML Particle Tracking Challenge | 16th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks @YaGana Sheriff-Hussaini.</p>
<p>@CPMP I did explain my approach in the github pages. Here's the link:
<a href="https://robert604.github.io/">https://robert604.github.io/</a>
It's the webpage link to what I had posted originally. If there's something not clear in that explanation then let me know and I'll make some changes to it.</p>
<p>My solution used a dbscan clustering approach, like most others. I used ax,ay and inv_r0 as parameters which takes the helix circle into account. Many nested loops were used to get a lot of iterations for dbscan with different parameters, but I finalized each track after it was created by dbscan and then extended, rather than doing any reassigning of hits later. I've done a more detailed writeup here:</p>
<p><a href="https://github.com/robert604/robert604.github.io/blob/master/index.md">https://github.com/robert604/robert604.github.io/blob/master/index.md</a></p>
<p>The code is also available on github.</p> |
TrackML Particle Tracking Challenge | 1st place solution - with code and official documentation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! The nickname has nothing to do with the IceCube project, it was simply a nickname I took from the speedsolving forum :) . Combining "ice" from my cold home country, and "cuber" from my interest to solve the Rubik's cube quickly. Regarding profession, I'm just a student.</p>
<p>It is really interesting that you found so many of the same ideas! I haven't heard about Bayesian cuts. It would be really interesting to hear about the more detailed ideas you found, which would probably cover some I missed. However, I'm sure it makes more sense to you to keep that to yourself until after the throughput phase.</p>
<p>I'm studying a program called "physics and maths", with specialization in applied mathematics. Next year I will be writing my master's thesis on deep learning.</p>
<p>Thank you! I will share my source code soon, after I've set up a license and written some instruction on how to run it.</p>
<p>only one. I just run multiple events in parallel.</p>
<p>Thank you. I don't think the cell's data was very important, if I remember correctly I added it after I was at 0.9-something and it didn't give too much improvement.
The cells code itself wasn't too hard after I figured out what the data meant. I used regression on the plane of the cell, and then an analytical formula for the angle with the plane based on number of cells intersected. It's about 40 lines of code.</p>
<p>Hello everyone, thank you for a great competition! This was my first serious Kaggle competition, and I must say I'm impressed with how much fun the competition has been to me. I think the organizers have done a great job in making the scope of competition task large enough to be interesting, while not requiring much background knowledge from the field.</p>
<p>Edit: official documentation and code are now available at:
<a href="https://github.com/top-quarks/top-quarks/blob/master/top-quarks_documentation.pdf">https://github.com/top-quarks/top-quarks/blob/master/top-quarks_documentation.pdf</a>
<a href="https://github.com/top-quarks/top-quarks">https://github.com/top-quarks/top-quarks</a></p>
<p>I'm sorry, but I did not use much machine learning (only some logistic regression for candidate pruning), but rather classical mathematical modeling with statistics and 3d geometry. This, combined with the fact that I wrote everything in C++ with no dependencies, made the final code quite fast: about 8 minutes per event per cpu core for my final submission. So I believe my code could be a good starting point for the throughput phase.</p>
<p><strong>Now for my approach</strong></p>
<p>I divided my algorithm into several steps, and created a scoring metric after each step, so that I could easily tell at which step I could earn the most score. I also made load / score function after each step for rapid debugging and tuning.</p>
<p>There were 48 layers in the detector, each either an annulus or cylinder (approximately). I sorted these approximately so that each track would pass the layers in increasing order. I considered multiple hits of one particle on a single detector to be duplicate measurements, and only looked for a single hit per detector per track until step 4.</p>
<p><strong>1. Select promising pairs of hits.</strong></p>
<p>This was done by considering all pairs of hits on 50 pairs of adjacent layers that covered most of the tracks. These candidates were pruned heavily by a logistic regression model of several heuristics. Some of the heuristics were how far the line passing through the two hits passes from the origin, and the angle between the direction between hits and the direction given by the cells data for each of the hits.
This gave about 7 million candidate pairs covering about 99% of the score (meaning for tracks worth 0.99 I had at least one pair on that track).</p>
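<p>The original implementation is in C++ with no dependencies; purely as an illustration, here is a Python sketch of pruning hit pairs with logistic regression on two simplified heuristics (distance of the pair's line from the origin and the pair's separation). The real model used more heuristics, including the cell-direction angles.</p>
<pre><code># Hedged sketch: logistic-regression pruning of candidate hit pairs (toy features)
import numpy as np
from sklearn.linear_model import LogisticRegression

def line_origin_distance(p1, p2):
    """Distance from the origin to the 3D line through hits p1 and p2."""
    d = p2 - p1
    return np.linalg.norm(np.cross(p1, d)) / np.linalg.norm(d)

pairs = np.random.rand(500, 2, 3) * 100           # placeholder hit pairs (x, y, z)
labels = np.random.randint(0, 2, 500)             # 1 if both hits share a particle_id
feats = np.array([[line_origin_distance(a, b), np.linalg.norm(a - b)] for a, b in pairs])

clf = LogisticRegression().fit(feats, labels)
keep = clf.predict_proba(feats)[:, 1] > 0.1       # loose threshold: prune only unlikely pairs
</code></pre>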
<p><strong>2. Extend the pairs to triples</strong></p>
<p>This was done by extending the line passing through a pair, and looking where it hits the next adjacent detector layers using 3d geometry. I set the 10 closest hits to the intersection as triple candidates. Then I did another pass of pruning by logistic regression to get about 12 million candidate triples. In this step we had three points, so we could fit a helix through them, and we even had one degree of freedom left as a feature for the logistic regression. Other features were (the logarithm of) the radius of the helix, and again the deviation from the direction given by the cell data. The triples covered about 97% of the score (meaning for tracks worth 0.99 I had at least one triple on that track). And the remaining tracks were short, crooked (low momentum), and started far from the z axis.</p>
<p><strong>3. Extend triples to tracks</strong></p>
<p>We fitted a helix through the three hits, and extended it to the adjacent layers using 3d geometry. I always used the helix fitted by the 3 nearest hits on the track to the layer in question. Also here I added the closest hit to the intersection. The resulting (still about 12 million) tracks now contained about 60 million hits, and about 95% of the score (meaning if we optimally assigned tracks using the ground truth data, added all duplicate hits to each track, and ignored >50% coverage constraints, we could get score 0.95).</p>
<p><strong>4. Add duplicate hits</strong></p>
<p>For each track we added the hits closest to it on each layer it passed through. I'm not exactly sure how, but now we covered about 96% of the score :) and I'm not complaining.</p>
<p><strong>5. Assign hits to tracks</strong></p>
<p>Until now all tracks had been processed completely separately, so they were massively overlapping. The goal here was to pick the best paths, and resolve any conflicts between them. My algorithm for this step was based on taking the "best" track (I will come back to the metric), removing all hits contained in it from all conflicting paths, and then repeating until there was nothing more to do. This was done efficiently using a data-structure based on a priority queue and dynamic updating of track scores.</p>
<p>The scoring metric to determine the "best" tracks was originally based on a random forest and distance from helixes, but I later found something much better. I didn't manage to model the perturbed helix noise. At least, I didn't feel like I had enough quantitative information to do this properly. This meant modeling the probabilities accurately as needed f.ex in a Kalman filter was infeasible. So instead of modeling the inliers (actual helix track), I modeled the probability of outliers (that we would find this track by chance). This was based on the assumption that we could model outliers by the density of hits on a layer, which I assumed was independent of the angle around the z-axis. This outlier density idea was also used for thresholding in all previous steps, so f.ex. saying "I want 0.1 outlier duplicates on average from each hit" for making the thresholding distance for duplicates.</p>
<p>The full algorithm gave the final score of about 0.92, using about 90% of the hits.</p>
<p><strong>More important considerations</strong></p>
<p>Of course there were several very important implementation details; note that the above explanation is a simplification down to the most important parts. A crucial technique for performance was that I used an acceleration data-structure to quickly access points close to the helix intersection with a layer. This data-structure based on quad-trees was highly efficient, supported elliptic queries, took into consideration the imperfectness of the layers (they are not exactly annuluses and cylinders), and used polar coordinates to make the maths tractable. I also made an O(1) lookup for close-to-analytic outlier probability densities in any elliptic region on a detector. A crude model of the magnetic field strength as a function of z position of the detector ( 1.002 - z'*3e-2 - z'^2*(0.55 - 0.3*(1-z'^2)), where z' = z/2750) gave a 0.003 score boost. On top of that there were a lot of parameters to tune, which were what gave me the last 0.01, and I'm sure there is more to gain if I had the patience.</p>
<p><strong>My takeaways from the competition:</strong></p>
<ul>
<li>Kaggle has some really interesting competitions.</li>
<li>Loading bars are really cool! I used them everywhere :)</li>
<li>It's fun to submit to the leaderboard, even when it isn't strictly strategical considering winning chances.</li>
<li>Computational resources aren't everything. I got access to a supercomputer, but was unable to improve my score by increasing computational load.</li>
</ul>
<p>Edit: I added @ersol to the team, as he had experience with cloud computing services. However, in practice I didn't need that, so he didn't end up helping me.</p> |
TrackML Particle Tracking Challenge | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>find the circle on x-y plane by two hits and (0,0), then find the delta z by arc length.</p>
<p><a href="https://www.kaggle.com/asalzburger/pixel-detector-cells">reference this kernel</a></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/369970/10251/TrackML_Fig2.png" alt="enter image description here"></p>
<p>I assumed that the particle crosses the center of two end point cells.</p>
<p>It is trial-and-error. In the beginning, I did some experiments to make sure it could do some kind of curve fitting with proper precision. Then I started training with 4 layers, and it never overfit even when I added more neurons and one more layer. Every time I extended the model size, the accuracy improved a little. I think there is a way to make the model more efficient, but it needs time to do more experiments.</p>
<p>as the example, all the tracks are:</p>
<ul>
<li>1-2-4</li>
<li>2-1-4</li>
<li>3</li>
<li>4-1-2</li>
<li>5-2</li>
</ul>
<p>then I calculate the similarity of all tracks as merge priority, so the final submission is:</p>
<pre><code>hit_id,track_id
1,1
2,1
3,0
4,1
5,0
</code></pre>
<p>For a event with N hits, I predict NxN probability matrix. For example, a event with 5 hits, the prediction is:</p>
<pre><code> h1 h2 h3 h4 h5
h1 - 0.8 0.2 0.9 0.4
h2 0.8 - 0.5 0.7 0.7
h3 0.2 0.5 - 0.3 0.4
h4 0.9 0.7 0.3 - 0.4
h5 0.4 0.7 0.4 0.4 -
</code></pre>
<p>let us assume the threshold is 0.65, so:</p>
<pre><code> h1 h2 h3 h4 h5
h1 - 0.8 0. 0.9 0.
h2 0.8 - 0. 0.7 0.7
h3 0. 0. - 0. 0.
h4 0.9 0.7 0. - 0.
h5 0. 0.7 0. 0. -
</code></pre>
<p>pick <strong>h1</strong> as seed, then <strong>h4</strong> is the next most likely hit, and p(h1,h4)=0.9>0.65, so let's go on:</p>
<pre><code> h1 h2 h3 h4 h5
h1 - 0.8 0. 0.9 0.
h4 0.9 0.7 0. - 0.
---------------------------
- 1.5 0. - 0.
</code></pre>
<p>0.8 and 0.7 are both larger than the threshold, so the next hit is <strong>h2</strong>, then:</p>
<pre><code> h1 h2 h3 h4 h5
h1 - 0.8 0. 0.9 0.
h4 0.9 0.7 0. - 0.
h2 0.8 - 0. 0.7 0.7
</code></pre>
<p><strong>h3</strong> and <strong>h5</strong> do not qualify (not all of their probabilities are larger than the threshold), so we stop here.</p>
<p>And the track we find is <strong>h1-h2-h4</strong></p>
<p>The next seed is h2 and so on, I reconstruct N tracks by N hits in one event.</p>
<p>In most Kaggle competitions I think I will win, so it is fine. :) I will explain my approach in detail below.</p>
<p>Congratulations and thank you for open the gap so I can sleep well in the last two days. :) You did a great job! </p>
<p>You are welcome.</p>
<ol>
<li>It would be sure</li>
<li>selu. About the input: 5 core features per hit [x, y, z, count(cells), sum(cells.value)], plus two unit vectors derived from the cells to estimate the hit's direction (randomly reversed during training). Now we have (5+6)x2 = 22. The remaining 4 assume the two hits lie on a line or a helix through (0,0,z0) and are the abs(cos()) values with the previously estimated unit vectors, and the last one is z0.</li>
<li>I fixed it, thanks.</li>
</ol>
<p>I have no plan to join the second phase, thanks.</p>
<p>In the beginning, I wanted to build a model which could input all hits and output all tracks. But after a simple calculation, it could not be done. So I split it into the minimum unit: input two hits, output 1 if the two hits are in the same track, 0 otherwise.</p>
<p>The difference with most other DL approaches is that they only do "connect the dots": if some dots are lost, the connection breaks.</p>
<p><strong>I connect all the dots</strong>. <a href="https://www.kaggle.com/outrunner/trackml-2-solution-example">here is an example kernel (update 08/16)</a></p>
<p>In my real case, the difference is (read the kernel in detail; a minimal sketch follows after this list):</p>
<ul>
<li>input size: 27 (x, y, z etc. and use cells to get hit's direction)</li>
<li>model size: 5 hidden layers with 4k-2k-2k-2k-1k neurons</li>
</ul>
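<p>A minimal Keras sketch of such a pair classifier, following the sizes above (selu activations, 4k-2k-2k-2k-1k hidden layers, 27 inputs). The 27 input features here are random placeholders; see the linked kernel for the real feature construction.</p>
<pre><code># Hedged sketch of the hit-pair classifier (features are random placeholders)
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(27,))
x = inp
for units in [4096, 2048, 2048, 2048, 1024]:
    x = Dense(units, activation='selu')(x)
out = Dense(1, activation='sigmoid')(x)           # p(two hits belong to the same track)
model = Model(inp, out)
model.compile(loss='binary_crossentropy', optimizer='adam')

pair_features = np.random.rand(1024, 27).astype('float32')
labels = np.random.randint(0, 2, 1024)            # 1 if the two hits share a particle_id
model.fit(pair_features, labels, epochs=1, batch_size=256, verbose=0)
</code></pre>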
<p>The well-trained model can get 0.8 by only using the predictions to reconstruct tracks, just like the kernel. Adding simple curve fitting (I use scipy.optimize.leastsq to fit a circle in the xy plane) gets to 0.9, and adding a z-axis constraint (dr/dz) improves it by 0.003 in the end. I didn't spend much time on curve fitting since I think CERN does it better, and someone could get much more improvement from better curve fitting.</p>
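<p>A generic sketch of the circle fit mentioned above, using scipy.optimize.leastsq (not the author's exact code; the toy data is for illustration only).</p>
<pre><code># Hedged sketch: least-squares circle fit of a track's hits in the x-y plane
import numpy as np
from scipy.optimize import leastsq

def fit_circle(x, y):
    def residuals(params):
        cx, cy, r = params
        return np.sqrt((x - cx) ** 2 + (y - cy) ** 2) - r
    cx0, cy0 = x.mean(), y.mean()
    r0 = np.sqrt((x - cx0) ** 2 + (y - cy0) ** 2).mean()
    (cx, cy, r), _ = leastsq(residuals, (cx0, cy0, r0))
    return cx, cy, r

# Noisy points on a circle of radius 100 centred at (30, -20)
theta = np.linspace(0, np.pi, 30)
x = 30 + 100 * np.cos(theta) + np.random.normal(0, 0.5, 30)
y = -20 + 100 * np.sin(theta) + np.random.normal(0, 0.5, 30)
print(fit_circle(x, y))
</code></pre>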
<ul>
<li>attached is a prediction of event1001 and you may give it a try.</li>
<li>fig 01 shows the seed (large circle) and its corresponding candidates (the same color)</li>
<li>fig 02 shows the sum of predicted probabilities of hits, in direct ratio to the diameter
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/369970/10058/TrackML_01.png" alt="enter image description here"></li>
</ul>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/369970/10059/TrackML_02.png" alt="enter image description here"></p> |
TrackML Particle Tracking Challenge | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is a link to the code: <a href="https://github.com/sgorbuno/TrackML_CombinatorialTracker">https://github.com/sgorbuno/TrackML_CombinatorialTracker</a>
Here is a description: <a href="https://github.com/sgorbuno/TrackML_CombinatorialTracker/blob/master/doc/TrackML_AlgorithmDescription.pdf">https://github.com/sgorbuno/TrackML_CombinatorialTracker/blob/master/doc/TrackML_AlgorithmDescription.pdf</a></p>
<p>Yes, I fit the z-component of the field only. </p>
<p>Basically, one can forget about the magnetic field and consider all this as some fixed scaling factor for helix radius during the prolongation. It is kind of a parameter in the track model, which is calibrated on the truth data. Every hit has this calibrated parameter. Actually, it has three different parameters - one for forward prolongation, one for backward prolongation, and one for helix construction. I forgot to mention that. </p>
<p>These parameters are calibrated in advance. I think it is very similar to what ML should do. (But as I said, I'm not an expert here. I first need to learn machine learning before making such statements.)</p>
<p>Exactly. I think if the measurement errors were 0, it wouldn't change much. Tracks are measured with infinite precision but then they scatter like hell right after the measurement. </p>
<p>Changing the field direction - yes, I also thought about this. It shouldn't be difficult to implement - one just rotates the track before the prolongation. But, on the other hand, it will slow everything down.. I think maybe one should do it in certain regions.. </p>
<p>Hi, thanks!</p>
<p>Yes, I'm working on the online tracking in the ALICE experiment. It has a completely different detector - it is one huge gas volume without any material inside. We have relatively imprecise measurements, but up to 160 of them per track, an almost constant field and no scattering. We need to collect all the measurements along the trajectory in order to get a good track position/momentum estimation. Here the situation is different: I get better knowledge about the local trajectory when I ignore all the other measurements. Interesting. </p>
<p>Concerning the missing field description - I think the organisers wanted to give ML algorithms a head start, because they can just learn the missing field by training (I'm not an expert here), and limit others in looking for some simple approaches - tricks and hacks. Perhaps they don't want to see here the Kalman Filter monsters as they already have them :)</p>
<p>But recovering the field is easy. You fit a circle and get its radius. Then you divide the truth transverse momentum by the radius and you get the field. </p>
<p>Hello everyone and thank you for the nice competition!</p>
<p>Unfortunately I joined it late and didn't follow discussions on the forum, I had to concentrate on solving the task.</p>
<p>I developed a combinatorial algorithm, which is very similar to the 1-st place algorithm from icecuber. </p>
<p>In the amount of time I had I only managed to finish a combinatorial "engine" of the algorithm a day before the deadline. During the last day I just tried to run this engine many times more or less chaotically everywhere in the detector. The resulting code looks therefore horrible and is extremely slow. But the engine itself seems to work, the rest still needs to be developed.</p>
<p>Ok. The engine consist of the two parts: </p>
<p>-the first part converts hits to short tracklets in some local part of the detector. Its goal is to kill the hit-to-hit combinatorics in order to work later on with some structured data.</p>
<p>-the second part prolongates the tracklets through the detector and collects their hits. </p>
<p>I run this engine several times in different parts of the detector with different constraints.
Then I sort all the found track candidates in some simple way and choose the best tracks. Hits which belong to the found tracks I remove from the plane. Then the next round of the tracklet search starts in other detector parts or with other constraints.
This way I clean up the data slice by slice, so to say, until nothing is left on the plane. </p>
<p>The engine. </p>
<ol>
<li>The tracklet constructor.</li>
</ol>
<p>There are 2 options here. </p>
<p>option a)
It creates two-hit tracklets which are constrained to the event vertex (which is pretty much (0,0) point in XY and +-2.5 cm in Z )
Using the vertex constraint is the trick here. It significantly reduces amount of possible tracklet candidates. And, as 75% of the tracks are coming from the vertex, one can really clean up the data by removing the vertex tracks before doing anything else.</p>
<p>option b)
It creates 3-hit tracklets without vertex constraint. </p>
<p>The layers where the tracklets are created are predefined by the main program. Unlike icecuber I decided not to create all possible tracklets everywhere but rather to save compute time by developing some smart seeding sequence strategy, which still needs to be developed :) </p>
<ol>
<li>The tracklet prolongation. Similar to the winner algorithm, I don't have a global trajectory. </li>
</ol>
<p>To prolongate a track to the next layer, I use a local helix created by last 3 hits of the track. </p>
<p>Amazingly, it works pretty well. I think this is due to very precise measurements in silicon. One can follow all the local features of magnetic field and trajectory scattering in the material, and complicated energy losses, and god knows what else without even knowing the value of the magnetic field! And it doesn't cost any cpu time.</p>
<p>But what I found is that the magnetic field varies dramatically in the detector, from 20 kGauss to -10!! Especially between the detector volumes. I realised it when I was checking how much the local curvature changes along a track. Oh yes, it changes. </p>
<p>To investigate this, I have fitted the magnetic field value using neighbouring truth points and truth momentum vectors. </p>
<p>Once I realised that the field is non-constant in many regions, I decided to modify my track model. </p>
<p>It is still a helix, but it is parameterized not with its geometrical radius, but with a physical parameter Pt (transverse momentum).
These are just proportional: Pt = B*r. Now, having 1) the physical parameterisation and 2) the magnetic field values, I can fit the trajectory with one field value (i.e. inside a radial volume), but then prolongate it using another field value (i.e the value between radial and forward volumes). </p>
<p>Or, in the other words, I scale the helix radius according to the field change. (New Radius=OldRadius*NewField/OldField)</p>
<p>The magnetic field I parametrised for each detector layer individually using some polynomials.
Here is my formula for the field:
B(z,phi) = (c0+c1*z) + (c2+c3*z)*sin(phi) + (c4+c5*z)*cos(phi).
Here z, phi are the hit's z and angular coordinates on a layer; the coefficients I have fitted with the good old least-squares method. The approximation is not very accurate; maybe one can replace it with just an average field value per layer. To save time during the track search, I calculate the field value for every hit and store it in the hit structure before the search starts. </p>
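<p>A minimal sketch of fitting the per-layer field formula above by least squares. The z, phi and B values below are synthetic placeholders; in practice B would come from the truth momentum via Pt = B*r.</p>
<pre><code># Hedged sketch: least-squares fit of B(z, phi) = (c0+c1*z) + (c2+c3*z)*sin(phi) + (c4+c5*z)*cos(phi)
import numpy as np

def fit_layer_field(z, phi, B):
    # Design matrix with one column per coefficient c0..c5
    A = np.column_stack([np.ones_like(z), z,
                         np.sin(phi), z * np.sin(phi),
                         np.cos(phi), z * np.cos(phi)])
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)
    return coeffs

def eval_layer_field(coeffs, z, phi):
    c0, c1, c2, c3, c4, c5 = coeffs
    return (c0 + c1 * z) + (c2 + c3 * z) * np.sin(phi) + (c4 + c5 * z) * np.cos(phi)

# Synthetic hits on one layer (placeholders for truth-derived field values)
z = np.random.uniform(-1000, 1000, 500)
phi = np.random.uniform(-np.pi, np.pi, 500)
B = 2.0 - 0.0003 * z + 0.1 * np.sin(phi) + np.random.normal(0, 0.01, 500)
print(fit_layer_field(z, phi, B))
</code></pre>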
<p>As I remember, use of the physical model improved my accuracy of prolongation, but I can't tell now how big was the improvement. May be at the end it is not needed at all.</p>
<p>One big problem I have here - I have to manually set cuts for picking up hits. For the proof-of-concept it was fine, but then I wound up with a huge file of copy-pasted and slightly modified hardcoded numbers. My plan was to have these hit pick-up cuts set automatically from the test data. But due to the naive trajectory model, hit deviations from trajectories are not nicely distributed. In the end I had to look at every distribution and decide where to cut it. One should do something about that.</p>
<p>The search strategy. </p>
<p>First, I search tracks which are coming from the vertex, then the other tracks. First I find high-momentum tracks (applying angular and momentum cuts on the tracklets), then the low-momentum tracks. </p>
<p>Well, that is pretty much the algorithm. </p>
<p>Thanks again for the nice competition!</p>
<p>Here is a link to the code: <a href="https://github.com/sgorbuno/TrackML_CombinatorialTracker">https://github.com/sgorbuno/TrackML_CombinatorialTracker</a>
Here is a description: <a href="https://github.com/sgorbuno/TrackML_CombinatorialTracker/blob/master/doc/TrackML_AlgorithmDescription.pdf">https://github.com/sgorbuno/TrackML_CombinatorialTracker/blob/master/doc/TrackML_AlgorithmDescription.pdf</a></p> |
TrackML Particle Tracking Challenge | 54th Place Solution for bkKaggle Team | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi,</p>
<p>I used the same threshold of 0.1 for all my track extension rounds. Your way of gradually increasing the limit for extending tracks to first extend higher quality tracks then extend lower quality tracks makes intuitive sense, and is something I should have done. It seems like using different thresholds for track extension is a way to determine the quality of a track candidate which could then be used to do more sophisticated merging techniques. </p>
<p>Would combining the first and third improvements in your comment by only extending a track if the extended track can pass through a lower threshold than the original track be a good way to prevent low quality extended tracks?</p>
<h1>54th Place Solution for bkKaggle team; 0.59229 private LB</h1>
<h2>Third Kaggle competition and a second bronze medal!</h2>
<p>Hi, I’m a 15-year-old Kaggle beginner and this is my third Kaggle competition and second bronze medal! This competition was different from others since it wasn’t a straightforward supervised learning problem, which made it challenging, and I wouldn't have achieved this score without all the ideas shared on the discussion forum. I know that my score doesn't compare to the winners' solutions, but I wanted to share it anyway.</p>
<p>For most of the first two months of the competition, I mostly focused on deep learning based solutions but found them hard to train, and they took a long time. It was only in the last month of the competition that I focused on implementing an unsupervised clustering based solution.</p>
<p>Some of my deep learning based ideas that I partially completed or didn’t attempt included: Using an mlp to find triplets, implementing a PointNet to cluster tracks, and creating word2vec style embeddings for each track and then clustering the embeddings. Unfortunately, I wasn’t able to get any of these approaches to work really well in the 2 months. After that, I shifted my focus to clustering based approaches.</p>
<p>My final solution was a DBSCAN clustering and helix unrolling with z shifting and track extension.</p>
<h3>Features</h3>
<p>The features I ended up using were:</p>
<p><code>cos(a), sin(a), z/rt, z/r, x/r, y/r</code> </p>
<p>where </p>
<p><code>a = arctan2(y, x) - arccos(mm * ii * rt)</code></p>
<p><code>r = sqrt(x^2 + y^2 + z^2)</code></p>
<p><code>rt = sqrt(x^2 + y^2)</code></p>
<p>These features were from <a href="https://www.kaggle.com/sionek/mod-dbscan-x-100-parallel">Grzegorz's</a> and <a href="https://www.kaggle.com/khahuras/0-53x-clustering-using-hough-features-basic">Kha A Vo's</a> kernels. Unfortunately, I didn’t have the time to find better features with the ideas shared in <a href="https://www.kaggle.com/c/trackml-particle-identification/discussion/61590">this</a> discussion post. I originally used <code>cos a, sin a, z/r, and z/rt</code> as my features; When merging based on track length, these features give you a score of 0.35 with no weights, and I couldn’t take it beyond 0.5 even after extensive Bayesian optimization. The addition of <code>x/r and y/r</code> and the default weights from the second kernel gives me a score of 0.54 without any z-shifting or track extension.</p>
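<p>A minimal sketch of computing these features and running one DBSCAN pass. The unrolling coefficient, the clipping, eps and the toy data are assumptions for illustration (mm is the unrolling direction and ii the iteration index, as in the referenced kernels); the weights are the ones listed in the Hyperparameters section below.</p>
<pre><code># Hedged sketch: one helix-unrolling + DBSCAN iteration (constants are placeholders)
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def one_dbscan_pass(x, y, z, mm, ii, coef=0.000005, eps=0.0035):
    rt = np.sqrt(x**2 + y**2)
    r = np.sqrt(x**2 + y**2 + z**2)
    a = np.arctan2(y, x) - np.arccos(np.clip(mm * ii * coef * rt, -1, 1))
    feats = np.column_stack([np.cos(a), np.sin(a), z / rt, z / r, x / r, y / r])
    weights = np.array([1.7, 1.7, 0.8, 0.2, 0.015, 0.015])   # per-feature weights
    feats = StandardScaler().fit_transform(feats) * weights
    return DBSCAN(eps=eps, min_samples=1).fit_predict(feats)

# x, y, z would come from the event's hits dataframe; random placeholders here
rng = np.random.default_rng(0)
x, y, z = rng.uniform(-1000, 1000, (3, 500))
labels = one_dbscan_pass(x, y, z, mm=1, ii=1)
</code></pre>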
<h3>Merging</h3>
<p>My merging strategy is very simple since I didn’t have time to improve it. I simply assign hits to the longest track with no more than 25 hits. </p>
<h3>Z-shifting</h3>
<p>I use 11 z-shifts uniformly distributed around the origin. I search +/- 5mm with steps of 1mm and the origin itself for a total of 11 z-shifts. The drawback of using so many z-shifts is that my running time increases by a factor of 11.</p>
<h3>Track Extension</h3>
<p>I use a modified version of track extension from <a href="https://www.kaggle.com/c/trackml-particle-identification/discussion/58194">Heng's track extension post</a> . I found that for me, the optimum number of track extension rounds is 6. I wasn't able to use more because each further track extension gave diminishing increases in score and took an extra 1-2 minutes.</p>
<h3>Hyperparameters</h3>
<p>I used Bayesian optimization on top of the weights given in the second kernel. Also optimizing the number of DBSCAN iterations and it's epsilon hyperparameter further increased the score. My final hyperparameters are:</p>
<pre><code>cos(aa) and sin(aa): 1.7
z/rt: 0.8
z/r: 0.2
x/r: 0.015
y/r: 0.015
</code></pre>
<h3>Compute and Parallel Processing</h3>
<p>For this competition, I used a preemptible 96 core 86 Gb RAM virtual machine from GCP. All together, I'm running 300 iterations * 11 z-shifts * 2 directions = 6600 iterations of DBSCAN and 6 rounds of track extension. Each event takes about 9-10 min when I use Python's multiprocessing.Pool to parallelize the iterations; After reading <a href="https://www.kaggle.com/c/trackml-particle-identification/discussion/62883">cpmp's post</a> , I see that most people parallelize over the events of the test set to reduce overhead.</p>
<h3>Things I Didn't Do</h3>
<p>Look for better features; I didn't have the math expertise of the higher ranking competitors and didn't have the time at the end of the competition when the " Criteria for Good Features" discussion post was posted.</p>
<p>Develop more sophisticated track merging; Focusing more on merging than z-shifting at first could have let me get a higher score out of my earlier features.</p>
<p>Optimizing more hyperparameters; I didn't optimize the helix unrolling or track extension hyperparameters.</p>
<p>My code is available at <a href="https://github.com/bkahn-github/TrackML">this</a> GitHub repository</p> |
TrackML Particle Tracking Challenge | 5th place solution - published on github | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all,</p>
<p>Shortly after the trackml accuracy deadline I got distracted in a very fortunate and overwhelming way: I married. Now I finally managed to get back to my solution and wrap up the code enough so that I could publish it as promised.</p>
<p>Given the nature of the competition, the code shows every sign of a bustling construction site, of course. You can find it on github: <a href="https://github.com/edwinst/trackml_solution">https://github.com/edwinst/trackml_solution</a></p>
<p>Thanks to all organizers and participants for the interesting challenge!</p> |
TrackML Particle Tracking Challenge | 7# solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Although I shared many of our algorithm's basic ideas during the competition, now is the time to share the full solution.
I have posted a kernel with the major part of this solution: <a href="https://www.kaggle.com/yuval6967/7th-place-clustering-and-extending">#7 place solution</a>
Running this kernel on training event 1000 will score ~0.635 after the clustering stage, and 0.735 after the extending stage.
Every stage takes about 8-10 min on Kaggle, and about half the time on my laptop.
In the clustering part of the kernel the algorithm tries 5500 randomly chosen pairs of z0, 1/2R (more on this below); by increasing the number to about 100,000 the score will plateau at about 0.765 (after extending).
How does it work?
In each clustering loop the algorithm tries to find all tracks originating from <code>(0,0,z0)</code> and with a radius of <code>1/(2*kt)</code>.
If a hit (x,y,z) is on a track the helix can be fully defined by the following features (1), (2)</p>
<pre><code>rr=(x**2+y**2)**0.5
theta_=arctan(y/x)
dtheta = arcsin(kt*rr)
(1) Theta=theta_+dtheta
(2) (z-z0)*kt/dtheta
</code></pre>
<p>To solve the +pi,-pi problem we use sin, cos for theta.
To make (2) more uniform, we use <code>arctan((z-z0)/(3.3*dtheta/kt))</code>
After calculating the features, the algorithm tries to cluster all the hits with the same features. This is done by sparse binning – using np.unique.
The disadvantage of sparse binning over dbscan is its sensitivity; the advantages are its speed and, again, its sensitivity (almost no outliers).
After clustering, every hit checks whether its cluster is good according to the cluster's length.
Every 500 loops all hits belonging to tracks which are long enough are removed from the dataset
If two hits from the same detector are on the same track, the one which is closest to the track’s center of mass is chosen.
The z0, kt pairs are chosen randomly.
While running, the algorithm changes the bin width and the length of the minimum track to be extracted from the dataset.</p>
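<p>A minimal sketch of the sparse-binning step with np.unique, assuming a fixed bin width (the real algorithm varies the bin width and feature set between loops):</p>
<pre><code># Hedged sketch: "sparse binning" clustering - hits whose quantised features
# fall into the same bin get the same cluster id (constants are placeholders)
import numpy as np

def sparse_bin_cluster(features, bin_width=0.003):
    binned = np.round(features / bin_width).astype(np.int64)           # quantise each feature
    _, cluster_ids, counts = np.unique(binned, axis=0,
                                       return_inverse=True, return_counts=True)
    return cluster_ids, counts[cluster_ids]    # cluster label and cluster size, per hit

features = np.random.rand(1000, 3)             # e.g. [cos(Theta), sin(Theta), z-feature]
labels, track_len = sparse_bin_cluster(features)
good = track_len >= 10                         # keep hits whose cluster is long enough
</code></pre>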
<p>Expending is done by selecting the un-clustered hits which are close to the center of mass of the track.</p>
<p>To get better then 0.765, we merged a few long runs together, this was done by scoring the tracks with a ML algorithm Trian wrote (please share below).
We also gained sum points by clustering from outside of the origin, starting the track from a hit (it was very efficient and slow)</p>
<p>We published our full detailed solution <a href="https://github.com/tx1985/kaggle-trackML/tree/master">here</a>
And a detailed description of the solution in PDF format <a href="https://github.com/tx1985/kaggle-trackML/blob/master/trackML_solution.pdf">here</a></p> |
TrackML Particle Tracking Challenge | Assign track_id trick from #19 solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The full solution is not worth reading given there are already many brilliant solutions shared in the forum. But I'd like to share a small trick that greatly helps speed up the algorithm: how to assign track_id properly to remove duplicate tracks fast. This is useful if you're using a primitive clustering method.</p>
<p>Duplicate tracks (track with exactly the same hits) may be discovered in two ways:</p>
<ul>
<li><p>The same track is re-discovered under a different scanning parameter</p></li>
<li><p>Two tracks initially are different, but after outliers and two-hits-on-the-same-detector are removed, they end up being the same track.</p></li>
</ul>
<p>If you use the track_id generating scheme from the public kernel, you may have noticed a lot of tracks (with different track_id) are actually the same track. </p>
<p>My way to generate track_id is to use the hash function to hash a list of hit_id that belongs to the same track. In Python it looks like</p>
<pre><code>track_id = hash(frozenset(hits.hit_id.values))
</code></pre>
<p>Then by just looking at the track_id one can immediately tell if two tracks are actually the same. Then you can remove duplicate tracks from a pool of track candidates in O(n) time. Just a quick example: in my first stage of the clustering with track length >= 14, there are 68960 tracks found by DBSCAN, but after duplicate tracks are removed there are only 4737 tracks left. (I probably scanned too much.....). Removing this many duplicates greatly helps speed up the curve fitting, outlier removal and merge processes later on. Merge was never a running-time bottleneck in my case.</p>
<p>I hope this is useful to someone.</p> |
TrackML Particle Tracking Challenge | Ensembling Helix 42 - #12 Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Why 42? That's the largest internal DBScan helix cluster ID we merged. If you include each z-shift as a separate model, we actually merge a total of 45 models. That's a lot of merging!</p>
<p><a href="https://github.com/jliamfinnie/kaggle-trackml.git">You can find all our code in this github repository</a></p>
<p><strong>Non-mathematicians solution from second-time Kagglers</strong></p>
<p>Nicole and I (Liam Finnie) started this Kaggle competition because it sounded pretty cool, however without a strong math or physics background, we quickly found ourselves at a disadvantage. So, we did what we know - write lots of code! Hopefully at least some of this will prove useful to someone, even as an example of 'what not to do!'.</p>
<p>Our solution consists of many DBScan variants with different features, z-shifts, etc. For post-processing, we use heavy outlier removal (both hits and entire tracks) and track extension individually on each of the DBScan results. We then split each of the results into 3 categories - strong, medium, and weak - before merging them.</p>
<p><strong>DBScan results</strong></p>
<p>Many thanks to @Luis for providing us our base DBScan kernel. Nicole did most of the math work on our team to develop our clustering features. I can't do the math justice, so if you understand advanced helix math, check out <code>hits_clustering.py</code>, class <code>Clusterer</code>, method <code>dbscan()</code>. We used several of the features discussed in the forum such as z-shifts and sampled <code>r0</code> values, as well as some of our own tweaks. Our raw DBScan scores tended to mostly be in the range of 0.35 to 0.55.</p>
<p><strong>Outlier removal</strong></p>
<p>Outlier removal is tricky - it lowers your LB, however allows for much better merging later on. Approaches we used for outlier removal:</p>
<ul>
<li>use <code>z/r</code> to eliminate hits that are out-of-place</li>
<li>look for hits with the exact same <code>z</code> value from the same <code>volume_id</code> and <code>layer_id</code>, remove one of them.</li>
<li>calculate the slope between each pair of adjacent hits, remove hits whose slopes are very different. </li>
</ul>
<p>The outlier removal code entry point is in <code>merge.py</code> in function <code>remove_outliers()</code>.</p>
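<p>As a rough illustration of the slope idea above (a hedged sketch, not the repository code; the median-based test and the threshold are my own choices):</p>
<pre><code>import numpy as np

def flag_slope_outliers(track_xyz, tol=3.0):
    """track_xyz: (n, 3) array of hits sorted along the track; returns a boolean outlier mask."""
    x, y, z = track_xyz.T
    zr = z / np.sqrt(x**2 + y**2)              # z/r should change smoothly along a real track
    if len(zr) < 3:
        return np.zeros(len(zr), dtype=bool)
    slopes = np.diff(zr)                        # slope between adjacent hits
    med = np.median(slopes)
    mad = np.median(np.abs(slopes - med)) + 1e-9
    bad_edge = np.abs(slopes - med) > tol * mad
    outlier = np.zeros(len(zr), dtype=bool)
    outlier[1:-1] = bad_edge[:-1] & bad_edge[1:]   # a hit is suspect if both adjacent slopes look wrong
    return outlier
</code></pre>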
<p><strong>Helix Track extension</strong></p>
<p>Many thanks to @Heng who provided an initial track extension prototype. From this base, we added:</p>
<ul>
<li><code>z/r</code> to improve the KDTree clustering</li>
<li>track scoring (length + quality of track) to determine when to steal hits from another track</li>
<li>different number of KDTree neighbours, angle slices, etc. </li>
</ul>
<p>The track extension code can be found in the <code>extension.py</code> file, function <code>do_all_track_extensions()</code>. This type of track extension typically gave us a boost of between 0.05 and 0.15 for a single DBScan model.</p>
<p><strong>Straight Track extension</strong></p>
<p>Some tracks are more 'straight' than 'helix-like' - we do straight-track extension for track fragments from volumes 7 or 9. To extend straight tracks, we:</p>
<ul>
<li>compute <code>z/r</code> for each hit</li>
<li>if our track does not have an entry in the adjacent <code>layer_id</code>, calculate the expected <code>z/r</code> for that adjacent <code>layer_id</code>, and assign any found hits to our track</li>
<li>try to merge with track fragments from an adjacent <code>volume_id</code>.</li>
</ul>
<p>This type of track extension typically gave us a boost of between 0.01 and 0.02 for a single DBScan model. Code is in <code>straight_tracks.py</code>, function <code>extend_straight_tracks()</code>.</p>
<p><strong>Merging</strong></p>
<p>When merging clusters with different z-shifts, we found the order mattered a lot - for example, we could merge better with bigger jumps between successive z-shifts, i.e. the order (-6, 3, -3, 6) works better than (-6, -3, 3, 6).</p>
<p>For our final merge at the end, we split each DBScan cluster into <strong>strong</strong>, <strong>medium</strong> and <strong>weak</strong> components based on the consistency of the <strong>helix curvature</strong>. Strong tracks are merged first, then medium, and finally weak ones at the end, getting more conservative at each step.</p>
<p>The main problem with merging is how to tell whether two tracks are really the same, or should be separate? We tend to favour extending existing tracks when possible, but will create a new track if there is too little overlap with any existing track. Some rough pseudo-code for our merging heuristics:</p>
<pre><code>foreach new_track in new_tracks:
if (no overlap with existing tracks)
assign new_track to merged results
elif (existing track is longer than new_track and includes all hits)
do nothing
else
determine longest overlapping track
if (longest overlapping track is track '0', i.e. unassigned hits)
consider second longest track for extension
if (too little overlap with existing longest overlapping track)
assign non-outlier hits from new_track to merged results
else
extend longest track to include non-outlier hits from new_track
</code></pre>
<p>Our merging code is in <code>merge.py</code>, function <code>heuristic_merge_tracks()</code>. We found simple merging ('longest-track-wins') hurts scores when there are more than 2 or 3 models, our current merging code was able to merge about 45 different sets of DBScan cluster results well.</p>
<p><strong>Acknowledgement</strong></p>
<p>Thanks to all Kagglers sharing in this competition, notably @Luis for the initial DBScan kernel, @Heng for the track-extension code, and @Yuval, @CPMP, @johnhsweeney, the chemist @Grzegorz, and many others for good discussions and DBScan feature suggestions.</p> |
TrackML Particle Tracking Challenge | Solution #17 - Searching for the perfect helix | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! I read your post describing your solution, and found a lot of good ideas I need to spend time digesting to fully understand.</p>
<p>I especially liked your method to choose "random" values for r0 and z0 based on observed values from training files. I did similar EDA and threw away the real data for an imperfect mathematical representation!</p>
<p>I look forward to competing with you again in the future 😀</p>
<p>Like many others I started with the dbscan solutions published in the public kernels and later extended them with many excellent suggestions from @heng, @grzegorz, etc.</p>
<p>I eventually ended up with an ensemble of 3 models reaching a score of just over 0.66. Each model, with 10 z-shifts, took several hours to run per event.</p>
<p>Then came Yuval and CPMP on the forums.</p>
<p>I realized that I was using poor, unreliable features - and I needed to go back to basics to understand the mathematics of a helix!</p>
<p>I essentially started over and re-designed my features using the basic mathematics of a helix as my design principle</p>
<p>A helix can be described by the following parameters:</p>
<pre><code> R - radius of curvature (radius of the circle projected on the x-y plane)
xc,yc - Center of the projected circle on the x-y plane
(These two can also be represented as R,theta0 in polar coordinates)
Pitch - a measure of how much z changes when the helix makes a full 360 degree rotation
Direction of rotation (i.e. does the helix turn to the right (clockwise) or to the left (counter-clockwise))
A single point the helix passes through - which, we assume is (0,0,z0) (for nearly all the helices we care about)
</code></pre>
<p>To reconstruct the tracks (helices), I scanned over values of R and z0 and clustered based on derived values of theta0 and Pitch (expressed as an angle)</p>
<p>I used the Hough Equation for finding theta0, given a value for R, which lets me then find the location of the center of the helix in the x-y plane</p>
<pre><code> The Hough equation is: r = 2R*cos(phi - theta0), where:
r is a hit's distance from x,y = 0,0
phi is arctan2(y,x) for a hit
R = radius of curvature of helix
theta0 = angle of inclination such that R,theta0 gives the center of the helix when projected on the x-y plane
</code></pre>
<p>After finding the helix center I shifted the x,y values of the hits so that the helix center is at 0,0 (new coord's for a hit are x'=x-xc, y'=y-yc,z)</p>
<p>Then I calculated what should be a constant based on the pitch of the helix (length of the arc of the helix/delta-z)</p>
<p>The length of the arc is R*angle of rotation from (0,0,z0) to current hit (x',y',z). You need to do some geometry to ensure you get the right value for the angle of rotation. I know I didn't handle values >180 correctly...</p>
<p>Finally, use theta0 and (R*angle of rotation/delta-z) as features for dbscan.</p>
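<p>For concreteness, here is a hedged sketch of how those features could be computed for one (R, z0) pair (my notation, not the attached code; the handling of the ± branch and the >180° case is simplified, as noted above):</p>
<pre><code>import numpy as np

def helix_features(x, y, z, R, z0, sign=+1):
    r   = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)
    # Hough equation r = 2R*cos(phi - theta0)  =>  theta0 = phi -/+ arccos(r / (2R))
    theta0 = phi - sign * np.arccos(np.clip(r / (2 * R), -1.0, 1.0))
    # Center of the projected circle, then shift hits so the center sits at (0, 0)
    xc, yc = R * np.cos(theta0), R * np.sin(theta0)
    xs, ys = x - xc, y - yc
    # Rotation angle from the origin point (which becomes (-xc, -yc) after the shift) to the hit
    rot = np.arctan2(ys, xs) - np.arctan2(-yc, -xc)
    rot = np.mod(rot + np.pi, 2 * np.pi) - np.pi
    # Pitch expressed as an angle: arc length R*rot versus delta-z, constant along one helix
    pitch_angle = np.arctan2(z - z0, R * rot)
    return np.column_stack([theta0, pitch_angle])

# feed the returned features to sklearn.cluster.DBSCAN, e.g.
# labels = DBSCAN(eps=..., min_samples=1).fit_predict(helix_features(x, y, z, R, z0))
</code></pre>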
<p>With this approach I exceeded the score for my previous 3 models with a few minutes of processing (vs many hours) and eventually reached a score just under 0.7 (0.695) with a few hours of processing time per event.</p>
<p>See attached code (warning, it's ugly - but it worked)</p>
<p>Cheers! I hope you all enjoyed the competition as much as I did!</p> |
TrackML Particle Tracking Challenge | Solution #9 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TrackML Particle Tracking Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Right, I did not explain how I compute r0 for enough tracks to be able to compute the <code>C</code> constant.</p>
<p>Your questions (and all questions on this topic) will make my final document better. Thanks for that!</p>
<p>I use the conformal representation that was described in one of the documents shared on the forum, with coordinates <code>x/rt, y/rt</code> where <code>rt = sqrt(x^2 + y^2)</code></p>
<p>In that representation, circles going through the origin are straight lines. With a little math I found that the distance of this line to the origin is <code>alpha0 = 1/r0</code>. That's how I found the radius of the helix.</p>
<p>There are many ways to compute the radius of a circle once you have 3 points on it (the origin, and 2 points on the track). I could have used another one, for instance: <a href="https://math.stackexchange.com/questions/133638/how-does-this-equation-to-find-the-radius-from-3-points-actually-work">https://math.stackexchange.com/questions/133638/how-does-this-equation-to-find-the-radius-from-3-points-actually-work</a></p>
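<p>For illustration, one such alternative is the circumradius formula R = abc / (4 * area), applied to the origin and two hits projected on the x-y plane (a sketch; variable names are mine):</p>
<pre><code>import numpy as np

def circumradius(p1, p2, p3):
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    return a * b * c / (4.0 * area + 1e-12)

# r0 = circumradius(np.zeros(2), hit1_xy, hit2_xy)   # both hits projected on the x-y plane
</code></pre>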
<p>Another question you had that I did not answer: I am not using a standard scaler.</p>
<p>I will share my code before the deadline set by organizers, your quest will be over soon ;) Reason it takes time is that I am on vacation...</p>
<p>I don't know the physics here, better ask the organisers than me!</p>
<p>> WoW. plugged it in. used a short run (as is in my kernel) expanded, and got LB 0.785! </p>
<p>Great!</p>
<p>The downside is that track extension becomes less effective. In my case, my track extension, based on Heng's code, became totally useless. You're using a more elaborate way with a supervised ML approach. Maybe you can get some upside.</p>
<p>Anyway, it is good to know that this correction improves your code. You may get better results than mine in a fraction of the time.</p>
<p>I remember now, you must use the theta0 correction only for computing phi0, but not for computing zr. </p>
<p>I'll share my code soon, in case my writeup missed something relevant.</p>
<p>Good catch, thanks!</p>
<p>And thanks for rooting for me!</p>
<p>Glad you find it interesting. And even happier if some of my work is reused in stage 2. </p>
<p>Yes, track extension was definitely a miss for me. Congrats on your result, even if you passed me at the last minute ;)</p>
<p>I looked for functions that were similar to tan(), i.e. that had infinite limits in a closed interval. sinh() was the first function I looked at, and its distribution looked very similar to that of zr. </p>
<p>As I wrote: </p>
<blockquote>
<p>My code loops over z0, r0 pairs. Rather than estimating a distribution I just sample from the tracks in the first 100 train samples. </p>
</blockquote>
<p>I collected all the <code>(z0, pt)</code> pairs from the train events I'm considering, then turned these into <code>(z0, r0)</code> pairs via the use of the <code>C</code> constant above. Then I draw one pair randomly at each iteration of the main loop.</p>
<p>Following up on the <a href="https://www.kaggle.com/c/trackml-particle-identification/discussion/63250#371072">excellent find by Mark JD Hamilton</a>, here is a plot of <code>arcsinh(zr)</code>. While quite uniform, it is not as uniform as the scaled version I used.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/371307/10098/asinh2.png" alt="arcsinh"></p>
<p>Wow, thanks for sharing. I'm impressed I discovered something that is linked to a known physics law. In a way it is reassuring that this is not just a computation artifact.</p>
<p>The 0.7 factor makes the distribution even more uniform, but results without it were already quite good. I'll share plots with and without 0.7 scaling ASAP.</p>
<p>The 3.5 factor is just to scale the result to the same scale as the other two features I am using for clustering. </p>
<p>Yes, it was rescaled, I should have said it, sorry. Top one is rescaled by 1/10, bottom one by 1/50.</p>
<p>The most striking finding for me is that the variation is not symmetric when you change z sign.</p>
<p>I was also wondering about the peaks, which is why I also plot the number of hits. The peaks correspond to values of z with a high number of hits. It is the valleys that are misleading actually. The second plot is obtained by scaling z further, and removing values of z with a small number of hits. I did not investigate further than that, because a small number of hits can be safely ignored for my purpose.</p>
<p>I will release an EDA notebook to show how I produced these plots and other ones I found useful during the competition.</p>
<p>I also looked at magnetic field variation as a function of phi, and found some variation, like you. But I did not model it. I guess its effect is way smaller than the variation by z. </p>
<p>Mine was improved by 0.03</p>
<p>Not sure why it was more effective for me.</p>
<blockquote>
<p>I don't understand why you need this. </p>
</blockquote>
<p>You can do without. I use it when sampling. I sample pt and pz from the tracks in train data, compute r0 from pt instead of having to compute it from the hits in each track. It is faster.</p>
<blockquote>
<p>Probably this gave you a good boost, right? </p>
</blockquote>
<p>Everything I documented gave me a boost ;) This one gave me at least 0.02 and probably more. I did not measure its effect separately because I first replaced the number of hits by the number of layers, got a boost, then replaced the number of layers by the number of volumes, and again got a boost.</p>
<p>Mark,</p>
<p>bear in mind that we are dealing with simulated data. Maybe you just uncovered a discrepancy between the simulator and reality?</p>
<p>Final code and documentation is available on github: <a href="https://github.com/jfpuget/Kaggle_TrackML">https://github.com/jfpuget/Kaggle_TrackML</a></p>
<p>I found why my local score was higher than the LB: I forgot to use the magnetic field correction in one model... Once fixed local score and LB score become much closer, with a difference smaller than 0.001. I document and share code for the fixed models.</p>
<p>What is funny is that with unscaled arcsinh my local score climbs faster, but plateaus earlier. Maybe I need to tune the value of eps for DBSCAN again, but I won't ;) Just to say that this is not a clear win.</p>
<p>I have put some code on <a href="https://github.com/jfpuget/Kaggle_TrackML">github</a> </p>
<p>This is work in progress, the repo should be final by Monday. </p>
<p>The code computes something that should give a LB above 0.78. I'm running it as of now to see what score exactly this yields.</p>
<p>Cleaning my code revealed few little glitches that explain why my LB score was below my local score. I hope the code on github fixes that.</p>
<p>@Mark, I found that when I restrict the data to be the hits of tracks originating from the z axis, then non scaled arcsinh is best. It means my code can be improved by removing the scaling probably...</p>
<p>Thanks you Heng! </p>
<p>Your remark about supervised learning being mandatory for reaching 0.8 has been a driver for me!</p>
<p>It is a pity you did not have time to submit a good solution at the end. I was surprised, like many of us, not to see you above 0.8 in the end.</p>
<p>I am considering TGS or Airbus as my next ones ;)</p>
<p>I'll submit, I have started publishing my code. But I try to enjoy the end of my vacation as well ;), hence I will submit Monday probably.</p>
<p>Glad what i shared helped you. I'll try to share earlier next time ;)</p>
<p>Did I disclose I have been using DBSCAN? Well, now you know ;) </p>
<p>My approach is quite straightforward and has been almost fully disclosed by @yuval. A helix that starts from the z axis can be described by 4 parameters:</p>
<ul>
<li>z at origin <code>z0</code></li>
<li>radius <code>r0</code></li>
<li>angle at origin in transverse plane (x,y plane) <code>phi0</code></li>
<li>slope <code>zr</code></li>
</ul>
<p>The last 3 can be expressed as functions of the momentum at origin, <code>px</code>, <code>py</code>, <code>pz</code>, and <code>pt = sqrt(px² + py²)</code>:</p>
<ul>
<li><code>r0 = C * pt</code></li>
<li><code>phi0 = arctan(py/px)</code></li>
<li><code>zr = pz/pt</code></li>
</ul>
<p>The formula to compute <code>C</code> is given in the documents from CERN but I estimated it from the train data via a linear regression. That's the only use of supervised learning I made ;)</p>
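<p>That estimate is a one-parameter fit; a minimal sketch (variable names are illustrative, not from the code):</p>
<pre><code>import numpy as np

def fit_C(track_pt, track_r0):
    """Least-squares slope through the origin for r0 = C * pt (one value per train track)."""
    track_pt, track_r0 = np.asarray(track_pt), np.asarray(track_r0)
    return np.sum(track_pt * track_r0) / np.sum(track_pt ** 2)
</code></pre>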
<p>My code loops over <code>z0, r0</code> pairs. Rather than estimating a distribution I just sample from the tracks in the first 100 train samples. For each pair I 'unroll the helix', i.e. compute <code>phi0</code> and <code>zr</code> as:</p>
<pre><code>phi0 = phi +- theta0
zr = (z - z0) / (2 * r0 * theta0)
</code></pre>
<p>where <code>rt</code> is the distance to origin in transverse plane, <code>phi</code> the angle in transverse plane, and <code>theta0</code> is the unrolling angle:</p>
<pre><code>rt = sqrt(x² + y²)
phi = arctan(y/x)
theta0 = arcsin(rt / (2 * r0))
</code></pre>
<p>In one iteration I add <code>theta0</code> to <code>phi</code>, and in the next iteration I subtract it.</p>
<p>In order to cope with the discontinuity at <code>pi</code> and <code>-pi</code> I use <code>cos(phi0)</code> and <code>sin(phi0)</code>. </p>
<p>The distribution of <code>zr</code> is highly skewed. Many have used <code>arctan</code> to unskew it, but I found that using <code>arcsinh</code> was way more effective. I actually use:</p>
<pre><code>arcsinh(zr / 0.7) / 3.5
</code></pre>
<p>The picture below gives the distribution of arctan(zr), and the one below the distribution of arcsinh. The latter is way more uniform.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/369949/10056/atan1.png" alt="arctan distribution"></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/369949/10055/asinh.png" alt="arcsinh distribution"></p>
<p>The last twist is to model the uneven magnetic field at high values of <code>z</code>. The first picture below shows the relative difference between the theoretical angle and the median measured angle as a function of <code>z</code>; the second picture gives the number of hits on a log scale:</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/369949/10052/ratio1.png" alt="ratio1"></p>
<p>The best way I found was to multiply <code>theta0</code> with a correction that depends on <code>z</code>:</p>
<pre><code>1.005 - (abs(z + 200) / 6000)**2.4
</code></pre>
<p>The picture below shows a smoothed average of the median measured angle deviation (in blue), and my correction function (in red):</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/369949/10053/ratio.png" alt="ratio"></p>
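<p>Putting the pieces together, here is a hedged sketch of the feature computation for one (z0, r0) pair (my reconstruction, not the actual code, which is linked elsewhere in this thread; note the correction is applied to theta0 only when computing phi0, not zr):</p>
<pre><code>import numpy as np

def unroll_features(x, y, z, z0, r0, sign=+1):
    rt  = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)
    theta0 = np.arcsin(np.clip(rt / (2 * r0), -1.0, 1.0))
    # magnetic-field correction, used for phi0 only (not for zr)
    corr = 1.005 - (np.abs(z + 200) / 6000) ** 2.4
    phi0 = phi + sign * theta0 * corr
    zr = (z - z0) / (2 * r0 * theta0)
    zr_feat = np.arcsinh(zr / 0.7) / 3.5
    return np.column_stack([np.cos(phi0), np.sin(phi0), zr_feat])

# one iteration of the main loop: sample (z0, r0), alternate the sign, then cluster with
# sklearn.cluster.DBSCAN, e.g. DBSCAN(eps=..., min_samples=1).fit_predict(unroll_features(...))
</code></pre>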
<p>DBSCAN is run at each iteration. Its output is merged with existing tracks in a simple way: for each track, or candidate track, I compute the number of volumes with hits from the track. Hits are then assigned to the candidate track with the most volumes. Using number of volumes is way more effective than using the number of hits in the track. </p>
<p>Another criterion is used for deciding which track wins. I assign a unique <code>vl_id</code> to each <code>volume_id, layer_id</code> pair. For each of the first 100 train events I represented each particle track by the sequence of its <code>vl_id</code> once data is sorted by <code>z</code>. I then compute the frequency of each sub sequence of 4 <code>vl_id</code>. The quality of each track candidate in test events is computed in a similar way: first create the sequence of its <code>vl_id</code> once data is sorted by <code>z</code>, then take the average of the log of the frequencies of its sub sequences of length 4, and multiply by the number of volumes of the candidate track. A hit is assigned to a new candidate track if the new candidate track has both more volumes and a better quality than the current track of the hit. This quality is very effective in removing tracks that do not make sense, for instance tracks that skip a layer entirely. </p>
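<p>A small sketch of that quality score (my reconstruction; the smoothing of unseen sub-sequences is an assumption):</p>
<pre><code>import numpy as np
from collections import Counter

def build_subseq_freq(train_vlid_sequences):
    """Count vl_id sub-sequences of length 4 over particle tracks from train events."""
    freq = Counter()
    for seq in train_vlid_sequences:            # each seq: vl_ids of one particle, sorted by z
        for i in range(len(seq) - 3):
            freq[tuple(seq[i:i + 4])] += 1
    return freq

def track_quality(vl_seq, n_volumes, freq):
    grams = [tuple(vl_seq[i:i + 4]) for i in range(len(vl_seq) - 3)]
    if not grams:
        return 0.0
    return n_volumes * np.mean([np.log(freq.get(g, 0) + 1e-3) for g in grams])
</code></pre>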
<p>The above yields a LB score above 0.785 with about 33000 DBSCAN runs. It takes about 10 hours per event.</p>
<p>The extra mileage I got comes from a simple idea: run another, similar model, on the inner volumes only (7, 8, and 9). This model can be more conservative (smaller eps for DBSCAN) because tracks are closer to a perfect helix. I ran this model for about the same number of iterations, then merged its output with the previous model: tracks that overlap significantly are merged, and for the rest, the track with the most volumes wins.</p>
<p>The very last improvements (about 0.002) come from merging with a third model that is similar to the first one.</p>
<p>That's it, no fancy math, just lots of tuning. I hope I have not made errors in the equations, I'll check again tomorrow, but appreciate if you find typos. They must be correct in the code given the results: the code finds about 95% of the centered tracks.</p>
<p>Things I thought about but did not have time to finish implementing:</p>
<ul>
<li>Use direction information from cells data</li>
<li>Extend to tracks that do not pass near z axis</li>
<li>Fit helix to each track candidate to remove outliers and possibly add missing hits.</li>
</ul>
<p>I thought my approach would be a nice starting point for second phase, given its simplicity, but I no longer think it is, now that I saw <a href="/icecuber">@icecuber</a> solution!</p> |
Tradeshift Text Classification | Solution sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tradeshift Text Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I agree! I'm just wondering whether this can be used as calibrating the predictions of per-subject models as in the current seizure detection contest. https://www.kaggle.com/c/seizure-prediction/forums/t/10383/leaderboard-metric-roc-auc/54252#post54252</p>
<p>If I use new_y = 0.5 * ((2 * abs(y - mean(y))) ** beta) * sign(y - mean(y)) + 0.5, instead.</p>
<p>[quote=Jianmin Sun;57864]</p>
<p>If I understand it correctly, it should not change rank-based metrics. For example, AUC will be the same with or without it.</p>
<p>[quote=rcarson;57863]</p>
<p>[quote=Stanislav Semenov;57844]</p>
<p>It is kind of transformation y (in range of (0, 1)) just for optimising metric. You also can try other functions in the class of sigmoids.</p>
<p>[quote=Romain Ayres;57824]</p>
<p>[quote=Stanislav Semenov;57817]</p>
<p>Another model was RF on second level. But with some trick. You need every predictions of y replace to:</p>
<p>new_y = 0.5 * ((2 * abs(y - 0.5)) ** beta) * sign(y - 0.5) + 0.5 with beta around 0.5</p>
<p>[/quote]</p>
<p>What kind of sorcery is this ?</p>
<p>[/quote]</p>
<p>[/quote]</p>
<p>Thank you. Is this for log loss only or does it apply to rank based metrics? </p>
<p>[/quote]</p>
<p>[/quote]</p>
<p>[quote=Stanislav Semenov;57844]</p>
<p>It is kind of transformation y (in range of (0, 1)) just for optimising metric. You also can try other functions in the class of sigmoids.</p>
<p>[quote=Romain Ayres;57824]</p>
<p>[quote=Stanislav Semenov;57817]</p>
<p>Another model was RF on second level. But with some trick. You need every predictions of y replace to:</p>
<p>new_y = 0.5 * ((2 * abs(y - 0.5)) ** beta) * sign(y - 0.5) + 0.5 with beta around 0.5</p>
<p>[/quote]</p>
<p>What kind of sorcery is this ?</p>
<p>[/quote]</p>
<p>[/quote]</p>
<p>Thank you. Is this for log loss only or does it apply to rank based metrics? </p>
<p>[quote=Dmitry Efimov;57788]</p>
<p>Thank you for sharing and congratulations!</p>
<p>Could you share the best score of the individual xgboost model?</p>
<p>[/quote]</p>
<p>Sure, this is private LB</p>
<p>[0.0053053, 0.0052910, 0.0054101] random split with different {depth, trees, eta, minweight}</p>
<p>[0.0048854, 0.0048763, 0.0048978] 1st half as base and 2nd as meta (also improved base classifiers)</p>
<p>[0.0047103, 0.0047446, 0.0047313, 0.0047360] 2nd half as base and 1st half as meta</p>
<p>[0.0044595] 2nd half as base and 1st half as meta, (add xgb as base classifier)</p>
<p>There are some xgbs which just generates y33 or we don't submit them individually so they don't have score.</p>
<p>beluga always makes beautiful figures :D </p>
<p>Thank you!</p>
<p>[quote=Faron;57786]</p>
<p>First of all: congratz!</p>
<p>[quote=rcarson;57772]</p>
<p>1) Use the second half training data as base and the first half training data as meta, instead of random split. (this is key!)</p>
<p>[/quote]</p>
<p>What is the reasoning behind this?</p>
<p>[/quote]</p>
<p>I really don't know. We tried different splits and this one just gave a significant improvement. It is 0.0001 better than any other split we tried.</p>
<p>edit: "I guess the first half is somehow more similar to the test set. " this is not correct. Using first half as meta also gives a better cv score. So this split provides better predicting power overall. </p>
<p>[quote=Abhishek;57776]</p>
<p>Which sparse features did you create? Can you share the parameters of your SGDClassifier?</p>
<p>[/quote]</p>
<p>Sure. We use the sparse features used in Dmitry's benchmark, which is encoding the 10 hashed features into a sparse matrix.</p>
<p>clf=SGDClassifier(loss='log',alpha=0.000001,n_iter=100)</p>
<p>Hi, sorry to disappoint you: there is no magic, just brute forcing and many, many machine hours. All our work is based on Dmitry and tinrtgu's great benchmarks, and Tianqi Chen's great tool Xgboost. https://github.com/tqchen/xgboost</p>
<p>Many many thanks! You are the true heroes!</p>
<p>Our winning solution ensembles 14 two-stage xgb models and 7 online models. Our best single xgb model gets 0.0043835/0.0044595 for public and private LB. It is generated as follows:</p>
<p>1) Use the second half training data as base and the first half training data as meta, instead of random split. (this is key!)</p>
<p>2) we use four base classifiers: random forest for numerical features, SGDClassifier for sparse features, online logistic for all features and xgb for all features.</p>
<p>3) For meta classifier, we use xgb with depth 18, 120 trees and 0.09 eta.</p>
<p>The xgb models could be memory intensive. We use a 8-core 32 GB memory server for most of our submissions. Thank my boss for the machine :P</p>
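<p>Schematically, the per-label pipeline looks roughly like this (a simplified sketch, not our actual code; in practice each base classifier used its own feature view (numerical, sparse, or all features) and the online models were separate, but the meta parameters echo those above):</p>
<pre><code>import numpy as np
import xgboost as xgb
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier

def two_stage_one_label(X_base, y_base, X_meta, y_meta, X_test):
    # X_* are assumed to be dense numpy feature matrices for one of the 33 labels
    # 1) base classifiers trained on the second half of the training data
    rf = RandomForestClassifier(n_estimators=200, n_jobs=-1).fit(X_base, y_base)
    sgd = SGDClassifier(loss='log_loss', alpha=1e-6, max_iter=100).fit(X_base, y_base)  # loss='log' in older sklearn

    # 2) their predictions become extra features for the first half (meta) and the test set
    def augment(X):
        return np.hstack([X, rf.predict_proba(X)[:, 1:2], sgd.predict_proba(X)[:, 1:2]])

    # 3) meta classifier: xgboost with depth 18, 120 trees, eta 0.09
    meta = xgb.XGBClassifier(max_depth=18, n_estimators=120, learning_rate=0.09)
    meta.fit(augment(X_meta), y_meta)
    return meta.predict_proba(augment(X_test))[:, 1]
</code></pre>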
<p>We will make a formal description and code release after some cleaning up. Cheers!</p>
<p>===================================================================</p>
<p>Something we tried but it didn't work</p>
<p>1) bagging xgb trees with different sub-sampling of columns by tuning "colsample_bytree". This trick was shown to work well in the Higgs contest but we had no luck. It only gave a very small improvement.</p>
<p>2) Add a third layer to Dmitry's benchmark. The score is not that bad but it just doesn't blend well with our existing submissions.</p>
<p>3) structured learning. We tried to use pystruct, https://pystruct.github.io/, to predict a sequence rather than each label separately. This one is on us: we could not find a way to make it work.</p>
<p>4) predict sequences rather than labels. There are only 141 unique combinations of the 33 labels in the training set, which means we can encode the 33 labels into 141 new labels and predict them. The score is really bad when we translate them back.</p>
<p>===================================================================</p>
<p>About Xgboost</p>
<p>I sincerely suggest everyone use it. It is fast, easy to customize and just giving really really good performance. It generates our best solution in higgs, liberty and this contest.</p>
<p>Please check this feature walk through, https://github.com/tqchen/xgboost/tree/master/demo</p>
<p>And this introduction http://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf</p>
<p>And how people enjoy it: https://www.kaggle.com/c/higgs-boson/forums/t/10335/xgboost-post-competition-survey</p>
<p>We'll publish xgb benchmarks in future contests :D</p>
<p>[quote=Stanislav Semenov;57817]</p>
<p>Another cool trick is postprocessing result. I calculated sum of final predictions y1...y32. (sum_y = sum(y1:y32)). Then if sum_y is bigger than 1, I replaced to 1. (sum_y = 1 if sum_y > 1)</p>
<p>And final y33 is the linear combination:</p>
<p>new_y33 = alpha * y33 + (1 - alpha) * (1 - sum_y) with alpha around 0.6</p>
<p>This gave an improvement on any solution.</p>
<p>Another model was RF on second level. But with some trick. You need every predictions of y replace to:</p>
<p>new_y = 0.5 * ((2 * abs(y - 0.5)) ** beta) * sign(y - 0.5) + 0.5 with beta around 0.5</p>
<p>It is very effective to fix predictions from RF.</p>
<p>The final solution is just linear combination of all of it.</p>
<p>Stanislav</p>
<p>[/quote]</p>
<p>This is so cool!</p>
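<p>In numpy, the two tricks quoted above look roughly like this (a sketch using the alpha and beta values Stanislav quotes; <code>preds</code> is assumed to be an (n, 33) array of y1..y33):</p>
<pre><code>import numpy as np

def postprocess_y33(preds, alpha=0.6):
    preds = preds.copy()
    # blend y33 with (1 - capped sum of y1..y32)
    sum_y = np.minimum(preds[:, :32].sum(axis=1), 1.0)
    preds[:, 32] = alpha * preds[:, 32] + (1 - alpha) * (1 - sum_y)
    return preds

def sharpen_rf(y, beta=0.5):
    # fix for second-level RF predictions: push probabilities away from 0.5
    return 0.5 * ((2 * np.abs(y - 0.5)) ** beta) * np.sign(y - 0.5) + 0.5
</code></pre>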
<p>[quote=Romain Ayres;57797]</p>
<p>The last few hours of this competition were just epic, we didn't think we could come back when you submitted 0.00425 one day before the deadline :)</p>
<p>[/quote]</p>
<p>we were shocked seeing you catch up just overnight. Luckily, after we submitted that 0.00425 one, we immediately launched a variant of it, which took a whole day to train all 33 labels. It gave us that little advantage. Still, we were forced to do a new round of CV within the last 6 hours, and luckily we found that adding all raw sparse features in the meta layer could help a bit; we only had time to predict y33. We'll remember this for a long time! :D</p>
Traveling Santa 2018 - Prime Paths | 3rd place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Traveling Santa 2018 - Prime Paths
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for hosting this competition. It was a real fun. </p>
<p>Two months ago I was a noob in TSP, and like others started by running the concorde exact solver first:) Then linkern for some time. Only later I found LKH. On the other hand I went from a clumsy version of 2opt (without even knowing that this thing has a name) in Java to Lin-Kernighan in C.
It was a long journey, and not at all straight. I really should have started with careful research before coding anything :)</p>
<p>In short my final submission went through these stages:</p>
<ol>
<li>Get pure TSP with LKH. Score 1516321.</li>
<li>Run penalty-aware steepest descent Lin-Kernighan. Score 1514683.</li>
<li>Try applying higher opts. Score 1514610, but it was helpful after kick sessions too.</li>
<li>Kick tour, reoptimize. Score 1514245.</li>
<li>Kick tour, reoptimize with tricks from 3, recombine with GPX2. Score 1514014.</li>
</ol>
<h3>First stage:</h3>
<p>Nothing special, LKH with GPX2. Later stages were running on raw score 1502618. I felt like there is room for small improvement still, but at that point it seemed more beneficial to focus on later stages.</p>
<h3>Second stage:</h3>
<p>I have a Lin-Kernighan implementation closely following K. Helsgaun <a href="http://akira.ruc.dk/~keld/research/LKH/KoptReport.pdf">An Effective Implementation of K-opt Moves for the Lin-Kernighan TSP Heuristic</a>. What a surprise it was to see him at the top of the LB right when I was reading it :) The only thing is you can't apply 'The positive gain criterion' for this problem (or actually you can, but only a relaxed version of it). To calculate the gain with penalty fast I used cumulative sums for all possible MOD 10s, forward and reverse (pretty sure someone has explained it already or will). And yes, non-sequential opts are very important here. Unlike LKH I didn't have separate parameters limiting the number of cycles and the number of alternating cycles to join them. Instead I incorporated it all in the parameter maxK, and was reducing complexity by setting a limit on the size of the smallest cycle to join (call it CycleLen). </p>
<p>I spent a lot of effort trying to find the best way to converge with the 4opt or later 5opt. In the end nothing worked better than applying the best opt I was able to find during an iteration (steepest descent).
Among the things I tried was gradually increasing the prime penalty (like second place did), but it didn't work for me. Maybe it is not that good with steepest descent, or I evaluated it in a wrong way (running 2opt multiple times on my notebook). Eventually I ended up first searching the whole base_opt space, then picking starting cities at which improvements were found and searching with higher opts starting from them. Also I gradually reduced CycleLen from iteration to iteration, which allowed me to go to higher base opts, later going up to 8opt.</p>
<p>Since Nov 20 I was only running stages 3 - 5, pushing local optima and never going back. So my stage 2 is kind of outdated; now I would have added the tricks from stage 3 here.</p>
<h3>Third stage:</h3>
<p>Here my goal is to go into higher opts, in some cases more than 20-opts. I had a couple of tricks to do that:</p>
<ol>
<li><p>Reintroduce 'The positive gain criterion', but sometimes it won't be positive, e.g. don't make the raw tour worse than -1.</p></li>
<li><p>Reduced candidate set. Stage 2 used one generated by LKH with 5 candidates for each city. Limit it to only 3 candidates, or generate a new one having only edges I was using previously.</p></li>
</ol>
<h3>Fourth and fifth stage:</h3>
<p>For kicks I would do a number of k-opts that try to either improve the pure TSP tour, or make it worse by a limited value. I would also limit them to some part of the tour to limit the search of reoptimizations later on (again to go to higher opts).</p>
<p>I used GPX2 for recombinations. Actually I would certainly have gone for IPT if I had found a description of it, which I didn't for some reason... Nor did I find one for GPX2. So all I had is <a href="https://github.com/rtinos/gpx2">this source code</a> with little understanding of what it does. Yet I managed to incorporate this source code into mine, but only used a subset of the components GPX2 was able to find.</p>
<h3>Code:</h3>
<p>I started competition in Java, but switched to C later on. Here is my <a href="https://github.com/NighTurs/kaggle-traveling-santa-2018-prime-paths">source code</a>. Beware, no cleaning. Also it doesn't have some python code I was mainly using to convert between file types, or change them a bit. Instead I provided those that you will need to run it.</p>
<p>I never committed some tweaks, e.g. I have one to merge improvements found on different machines, but that is basically kicks with limited candidate set.</p>
<h3>Tips how to get more cores:</h3>
<p>During most of the competition I only had one i5-3470 (4 cores). Later on I set up a trial Azure account where you have $200 for free to spend however you want. Azure has 'Low priority' instances (something like spot instances in AWS) which are significantly cheaper but can be terminated at any moment. I was running an F64s v2 instance that costs $0.541/hour ($2.706/hour normal priority). So here you have 200 / 0.541 / 24 = 15.4 days of 64 cores, which should be enough if you use it wisely. But you can only use it once, e.g. I wasted my free trial in Google cloud long ago... Also, the free trial originally has a quota of 10 cores running simultaneously; to increase it you will need to upgrade the account to 'pay-as-you-go' (you will still be able to use your $200) and file a support request.</p>
<p>Just wanted to share this hack for anyone who doesn't have top hardware. Still, new ideas are much more profitable here than CPU power, but it helps.</p>
<p>Indeed cores there are slower. I don't remember exactly but one core on this instance was like 1.5 times slower than on my i5-3470 (on my code). Yet the sheer amount of them significantly outperforms what I have. I guess I will need a hardware upgrade :)</p>
<p>Thanks! I have my code linked in "Code" section.</p> |
Traveling Santa 2018 - Prime Paths | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Traveling Santa 2018 - Prime Paths
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, big congrats to the top teams and thanks to Kaggle for hosting such a great competition. We learned a lot during this competition. Now we are going to share our strategy for your reference, as follows:</p>
<p>1)<strong>LKH</strong>
As our start, we ran LKH for several days and got a final score of 1516289.99 when considering the prime-cities scenario (pure tsp score 150269x).</p>
<p>2)<strong>Kopt</strong>
After the first step, we segmented our initial tour into several sub-tours and ran 2opt/3opt/4opt for each part, which helped us achieve a score of 1515464.13.
As the improvement got slower, we applied 5opt and reached 1515325.46. Making randomly constructed sub-tours and applying several 6opt (sequential) cases helped us improve by tens of points.</p>
<p>3)<strong>Finetuning</strong>
One thing we found interesting while applying Kopt is that, while the score considering the prime-cities scenario improved fast at the beginning, the total cost of the pure TSP increased badly, which finally led us into a local minimum. The possible reason for that, based on our guesswork, is the total cost of the pure TSP increasing too fast. This situation is also mentioned by several great Kagglers such as @Vlado Boza and @Luís Simões. Instead of the increasing-penalty-weights strategy, which is a brilliant idea that we wish we had figured out initially, we tried another strategy, which is to limit the increase speed of the total cost of the pure TSP. When we found a move which improved our score considering prime cities a lot while causing the pure tsp score to increase too much, we refused to accept the move. For your reference, we set the ratio of prime improvement score to pure tsp changing score as 0.5 to -10^6. From my perspective, the intuition behind this strategy is similar to changing the penalty weights. However, its performance is not as good as changing the penalty weights. The final score we got by applying this method is 1515118.03.</p>
<p>4)<strong>Insert and Kernel</strong>
We also applied some simple methods which we had ignored initially, such as insertion and kswap methods, which brought us to a score of 1515024.30.
Thanks to the great Kernels created by @Kostya Atarik <a href="/blacksix">@blacksix</a>. Running and modifying their kernels (Not a k-and-k-halves-opt/DP shuffle strikes back) gave us a score of 1514976.90.</p>
<p>5)<strong>EAX algorithm</strong>
Finally, our secret weapon, EAX, helped us achieve the final big jump on the leaderboard. ^_^, @Vlado Boza, sorry for the scary part.
By applying this algorithm, we achieved our final improvement from 1514976.90 to 1514222.82.
Same idea as for Kopt: we segmented the tour randomly (500/800/1000 cities) and ran the EAX algorithm for each sub-tour.
First, the raw sub-tour and 199 randomly generated sub-tours were used as our initial input.
Secondly, we constructed AB-cycles (a maximum of 2000/5000/8000 AB-cycles) and the E-set randomly.
Thirdly, we combined the AB-cycles in the E-set and chose the sub-tour which has the lowest pure tsp cost.
Finally, we calculated the score considering prime cities of the new sub-tours and selected the best top n as the input for the next generation.</p>
<p>The reference papers are the following:
Nagata Y. The EAX algorithm considering diversity loss[C]//International Conference on Parallel Problem Solving from Nature. Springer, Berlin, Heidelberg, 2004: 332-341.
Nagata Y. New EAX crossover for large TSP instances[M]//Parallel Problem Solving from Nature-PPSN IX. Springer, Berlin, Heidelberg, 2006: 372-381.</p>
<p>Actually, we finished the EAX code only during the last 3 days of the competition and the result was way better than our expectations. If there had been a few more days left for us to run the code, we could have achieved a better score. We got a score of 15141xx after several more hours right after the competition. </p>
<p>We also tried to build a mixed-integer programming model with the penalty, but did not succeed.</p>
<p>Thanks for reading. And big congratulations to the winners again. </p>
<p>Yes, we did not find a way to involve the prime penalty in AB-cycle combining. We did not expect such a huge improvement, but checking the prime penalty only during population selection turned out to be really helpful.</p>
Traveling Santa 2018 - Prime Paths | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Traveling Santa 2018 - Prime Paths
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I have the source code that you can look into.
Shame on us, we did EAX without considering the penalty, hence it did not help much anyway :(</p>
<p>In my opinion, some of the public k-opt kernels were totally wrong, or inefficient. Indeed just 1 correct double bridge move and 1 type of 3-opt can get you to 15152xx easily. People usually confuse k-opt moves in TSP with a permutation of k cities. We search for a candidate, which is a set of 3 feasible cities in 3-opt, or 4 cities in 4-opt, quite fast. Then we make the move. Even without the trick, it would not be so hard to get to 15152xx. I just shared the kernel to show that k-opt (except sequential 4-opt and 5-opt) can be implemented easily. </p>
<p>For those who still cannot get what k opt is, here is the source we heavily depended on to write our code
<a href="http://tsp-basics.blogspot.com/2017/04/4-opt-and-double-bridge.html">http://tsp-basics.blogspot.com/2017/04/4-opt-and-double-bridge.html</a>
<a href="http://tsp-basics.blogspot.com/2017/03/3-opt-move.html">http://tsp-basics.blogspot.com/2017/03/3-opt-move.html</a></p>
<p>Code added!</p>
<p>You're welcome YaGana!</p>
<p>I've just updated the full code link in the post. The permutation code was written in the Kotlin language by Simon <a href="/simianware">@simianware</a>, which can easily exploit all cores with a few lines of threading code. To me that's really cool.</p>
<p>I don't know how you did k-opt. In our case, we search for feasible candidates sequentially, then make a move immediately. For instance for 3-opt, for each node a going from 0 to 197769 (a here is the index, or position in the path), we search for a node (b+1) which is a neighbor of a, then for each node (c+1) as a neighbor of node b. Finally, if node (a+1) is a neighbor of node c, then a,b,c yield a candidate move (with the condition a < b < c). Then we make a move. The new tour would be [0.....a, b+1, ...., c, a+1,....,b, c+1,....,end]. You can see our kernel to get the idea.
If you did the same, then it could be the quality of the neighbor list. We use the neighbor list from LKH (popmusic) with a maximum of 7 and a minimum of 5 neighbors for each node.</p>
<p>Thank you all for this interesting, fun, hard, and harsh competition. Here is the short description of our team's method. We would like to present it in the chronological order, like a story of our progress during the whole competition.</p>
<p>Our kernel: <a href="https://www.kaggle.com/khahuras/super-fast-cumsum-trick-8th-place-demo-solution">https://www.kaggle.com/khahuras/super-fast-cumsum-trick-8th-place-demo-solution</a>
Our full code: <a href="https://github.com/voanhkha/Traveling-Santa-2018-Kaggle">https://github.com/voanhkha/Traveling-Santa-2018-Kaggle</a></p>
<p>1) LKH: We used LKH to get the raw tour of 1502600 (prime score 1516256) as the starting step.</p>
<p>2) 2.5-opt and 3-opt (just 1 type of sequential 3-opt with no flipped segment) got us to 15158xx.</p>
<p>3) Double bridge (one type of 4-opt non-sequential moves with no flipped segment) helped us to improve to 15156xx.</p>
<p>4) We discovered the gradual penalty-increasing trick (the same as the 2nd place team). This helped us to decrease to 15154xx.</p>
<p>5) Up to this stage, every computation was slow. Then we discovered a major breakthrough: THE CUMSUM TRICK to deal with the prime scoring function. We got ~200x faster! We didn't need any GPU, or even the numpy package. With pure Python (pypy3 instead of CPython), we could optimize straight from 1516256 to 15149xx in just under 1 hour! Using this trick, we were able to get to 15149xx. </p>
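<p>To give the idea, here is a minimal sketch of how such prefix sums can be organized (not our kernel code, which is linked above; index conventions and the boundary edges of a move are simplified). Edge k of the tour is step k+1; a step is 10% more expensive if its number is a multiple of 10 and its source city is not prime, so one cumulative-sum array per residue class and orientation lets any segment's penalty be read off in O(1):</p>
<pre><code>import numpy as np

def build_penalty_cumsums(tour, xy, is_prime):
    """tour: list of city ids (starts and ends at 0); xy: (n_cities, 2) array; is_prime: bool array per city."""
    a, b = np.asarray(tour[:-1]), np.asarray(tour[1:])
    n = len(a)                                        # edge k is step k+1 of the tour
    step_len = np.hypot(*(xy[a] - xy[b]).T)
    pen_fwd = 0.1 * step_len * (~is_prime[a])         # extra cost if traversed a -> b on a "10th" step
    pen_rev = 0.1 * step_len * (~is_prime[b])         # extra cost if the edge is traversed reversed
    cum = np.zeros((10, 2, n + 1))
    k = np.arange(n)
    for r in range(10):
        hit = ((k + 1 + r) % 10 == 0)                 # edges whose shifted step number is a multiple of 10
        cum[r, 0, 1:] = np.cumsum(pen_fwd * hit)
        cum[r, 1, 1:] = np.cumsum(pen_rev * hit)
    return cum

def segment_penalty(cum, i, j, r, reversed_=False):
    """Penalty of edges i..j-1, counting edge k iff (k + 1 + r) % 10 == 0.
    Forward segment starting at new step t0:   r = (t0 - i - 1) % 10.
    Reversed segment starting at new step t0:  r = (-(t0 + j)) % 10."""
    d = 1 if reversed_ else 0
    return cum[r, d, j] - cum[r, d, i]
</code></pre>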
<p>6) Then we used "kicks", which is the same as the "break and fix" that was discussed in the forum a few days ago, to escape local minima. Kicking short segments with all types of our moves (from 5-opt to 2.5-opt), then quickly reoptimizing with all types of moves and a diverse set of parameters, got us to 15147xx. We coded the 5-opt moves with LK heuristics and ran them with the cumsum trick, so 5-opt is implemented efficiently! Please note that ALL types of k-opt (k<=5), both sequential and non-sequential, can be implemented. </p>
<p>7) When kicking with k-opt moves was exhausted, we tried to kick by using the EAX algorithm (combining 2 different tours). This slightly improved our score to the lower end of 15147xx.</p>
<p>8) Then we used shuffling of cities on segments of up to 200 cities, using lower bound estimation. This hugely boosted our score to 15145xx.</p>
<p>9) Finally, all four of us collaborated on the final push by using a shared Dropbox folder, kicking that 15145xx tour restlessly for days. Whoever found a new tour would put it immediately onto that shared folder, and the other members would automatically get that tour and continue to kick. This simple method is extremely efficient, since we can exploit all of our computation resources. Our final score is 1514438.</p>
<p>During all optimization processes, the cumsum trick always plays the main part. It alone contributes at least 70% of our success. Kicking and reoptimizing had never been this fast and comfortable. </p>
<p>Thanks for reading,
Team [Zidmie | Kha | Marc | Simon]</p> |
Traveling Santa 2018 - Prime Paths | One part of my approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Traveling Santa 2018 - Prime Paths
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello all, in the end I couldn't even get a gold medal.</p>
<p>Just to share the central part of my optimization idea:</p>
<p>Some evolutionary (genetic and pso) algorithms but not for evolving the path directly but instead to <strong>evolve the cities coordinates</strong> </p>
<p>Here "ind" is the array of 2 * 197769 city coordinates X,Y.
Also, wrapping this evolved individual (array) inside the cos function has many interesting effects. </p>
<pre><code>def fitness(self, ind, seed):
xcities = np.cos(np.reshape(ind, xyr.shape)) * xyr
x = xcities[:, 0] * 10000
y = xcities[:, 1] * 10000
solver = TSPSolver.from_data(x, y, norm="EUC_2D")
tour_data = solver.solve(blablabla)
</code></pre> |
Traveling Santa 2018 - Prime Paths | Our solution for 2nd place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Traveling Santa 2018 - Prime Paths
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>How much better was the GPX2?</p>
<p>Actually three years ago we had a similar idea which did not work (there were multiple paths with a limit on the load of your sleigh, so we tried to alter the limit a bit).</p>
<p><a href="https://github.com/usamec/kaggle2018/blob/master/outputs/best.csv">https://github.com/usamec/kaggle2018/blob/master/outputs/best.csv</a></p>
<p>We would love to hear about your methods!</p>
<h1>TLDR</h1>
<p>LKH for TSP, then fine tuning using a custom Lin-Kernighan implementation on way too many cores.
Important things:</p>
<ul>
<li>non-sequential moves (especially 4-opt) are super important and make difference between 15152xx and 1514xxx teritory</li>
<li>do not optimize with the full 10% penalty right after LKH, but first optimize using a lower penalty. In the end our schedule was like 1%, 2%, ..., 9%, 10%.</li>
<li>after escaping local minima with kick and optimizing do not just compare result with best, but merge it with best (replace segments in best).</li>
</ul>
<h1>Long story</h1>
<h2>LKH</h2>
<p>Nothing special here, we used some random settings from the internet and ran it for a long time (a couple of days). Ended up around 1502611.8 pure score. In retrospect we could have pushed a little more here and played with parameters.
We also used candidates from LKH in further optimization.
(We also used pi values, but they did not bring any significant edge). </p>
<h2>Finetuning</h2>
<p>Custom solution written in Rust. Centered around standard LK move idea - remove/add k edges.</p>
<h3>Move evaluation</h3>
<p>Since there will be many more tries than successful moves, we opted for a solution which gives us O(k) move evaluation and O(n) move execution (k - number of exchanged edges, n - tour length).
This can be done using simple prefix sums for penalties (calculated for each possible offset and also for possible reversals).</p>
<h3>Move finding</h3>
<p>Standard LK moves. We relied on random selection a lot. Start with a random edge from the tour. Remove it. Then pick a vertex from that edge and pick a random candidate, add that edge to the tour. Remove an edge there, ... at some point close the loop.
It is important to limit the change to the base TSP length there. We usually limit it to 2 or 3. (We also do this during move finding, similarly to LKH.) </p>
<p><strong>Super important</strong>: When we find a move which breaks the tour into several cycles, we try to patch it and make a non-sequential move. We mainly patch the case with 2 cycles.
Patching is easy: you just test each vertex on the non-main cycle and try to join it with the main cycle using some candidate edge from that vertex. Patching more cycles is similar but recursive. Also, patching more than 2 cycles did not bring a very significant improvement.</p>
<h3>Escaping local optima</h3>
<p>We use kicks to get out of local optima, where it is hard to find a good move.
We had two types of kicks:</p>
<ul>
<li>Make several 8-opt moves which do not destroy the path too much (each move can increase the cost by at most 20). This was done globally on the whole tour. Then optimize.</li>
<li>We pick some rectangular region. Each move should begin with removing an edge from the region. We altered the penalty (usually by bringing it up or down, we had several configurations), optimized for a bit and then optimized with the regular penalty.</li>
</ul>
<p>After optimizing we tried to merge the result with the best tour, which brings us to merging.</p>
<h3>Merging of tours</h3>
<p>Given two tours we try to find segments of paths which start and end with the same vertex and have the same set of vertices in between. Then you can exchange segments between paths and find a possibly better tour. </p>
<p>We use a custom implementation of IPT (Iterative Partial Transcription, see Keld's LKH papers) which takes the penalty into account when merging the tours.
Our IPT implementation first tries to compress the paths by replacing vertices having the same neighbors in both tours with edges. A naive implementation would do this on a graph, but we simply renumber cities so that tour 1 consists of cities 1,2,3,...,n. One can then skip all vertices of Path 2 that have path2[i +-1 ] != i +- 1 (note: signs might be opposite if the segment was reversed). After that we simply go over all possible starts/ends and check whether the subset of vertices is the same (which is quite easy, as the subset on path 1 is just a continuous range of numbers, so we just need to find the min/max on path 2). Finally, given the subpaths with the same vertices, we try to replace the subpath from tour 1 into tour 2 and see whether it would improve the score.</p>
<h3>Multithreading</h3>
<p>We run stuff using many threads (on 72 or 48 core machines). Usually some threads were devoted to finding moves, some were devoted to kicking and then optimizing the tour. </p>
<h3>Penalty schedule</h3>
<p>We found out that optimizing with the 10% penalty right away is not the best thing.
So we first optimize the tour with penalty 1%, then 2%, ... (there is a different schedule in the kernel, due to time constraints). We think that this schedule brings two possible things:</p>
<ul>
<li>it keeps pure TSP cost lower for longer time </li>
<li>it prefers to pick moves, which hit longer chunk of tour first</li>
</ul>
<h3>Overall running time</h3>
<p>Do not ask. Right now, we can produce something in the 15141xy area in a day on a 72-core machine.
Our best kernel ended with a score of 1514637. But we saw very big variation (from 1514637 to 15148xy) in results (even with the same seeds, due to multithreading), so we assume that there are multiple very different local optima.</p>
<h2>What did not work at all</h2>
<ul>
<li><strong>(Integer) linear programming.</strong> We could not find a good formulation.
We tried multiple formulations without success. We tried optimally solving 2x boxes of about 200-600 vertices using a variation of standard TSP formulations (but blown up by a factor of 20 due to shifts+reverse); this doesn't seem to work because subpaths out of the bounding box are too fixed in a local minimum and are hard to shift/reverse. We also tried recombination of two tours using ILP. Unfortunately, two runs can generate vastly different solutions which tend to have up to 30000 "diff" edges. Again, our attempts to optimize using ILP did not work there. In general, we believe our ILP formulations would be able to solve (to optimality) subproblems of size 100-200; unfortunately, this is too low to be of practical usefulness given the time such optimization takes.</li>
<li><strong>Reduction to ATSP.</strong> Again, LKH did not want to talk to us after the reduction.</li>
</ul>
<h2>What we (mostly me) think would work</h2>
<p>Executing multiple moves at once. Sometimes one move can make the cost higher, but a combination of two or more moves can improve the prime penalty.
We tried listing all moves up to some k and combining them, but it did not bring any significant improvements. I think that some clever penalty-aware approach would work here, but we could not find one.</p>
<h2>What we should have done differently</h2>
<p>More restarts! We spent too much time optimizing basically a couple of runs. We should have tried many more different starts.</p>
<h2>What made us angry</h2>
<p>Anyone hiding their true score (especially Vindar). Shame, ding ding ding, shame, ding ding ding!</p>
<p>Rudolf prize should really change to motivate people to submit immediately. Or Santa competitions should also have some prediction part, so people are motivated to probe the leaderboard. Or go for a pure kernel competition, where kernels are run on some different input than the public one.</p>
<h2>What made us happy</h2>
<p>Keld Helsgaun and William Cook :)
Also final push by "Random Travelling" team looked both scary and exciting.</p>
<p>Yep, this is pretty standard (although we did not use it). Typically you use edges of those tours as candidate edges and you try to find a best tour from that.</p> |
Traveling Santa Problem | Sasquatch (simple) solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Traveling Santa Problem
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Not as interesting as the winning team's many methods, but FWIW I've decided to write a summary of my solution. First part is online at http://recursivegoose.blogspot.com/.</p> |
Traveling Santa Problem | Solution of the prize winner (Rudolph) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Traveling Santa Problem
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is a description of my algorithm for the Kaggle travelling santa competition :</p>
<p>The algorithm is divided into three phases :</p>
<p><strong>Phase 1 - Initialisation</strong></p>
<p>Essentially produce two tours for the problem by using the linkern program in the concorde library (<a href="http://www.tsp.gatech.edu/concorde.html">http://www.tsp.gatech.edu/concorde.html</a>). At this stage the two tours have a lot of common edges. A
list of potential edges is constructed such that every edge in the initial two tours is present and also, for every node, the edges connecting it to its numneighbors closest neighbors.</p>
<p><strong>Phase 2 - Reparation</strong></p>
<p>The objective of this phase is to get a feasible solution. This is done by iteratively getting new tours using the linkern program while increasing a penalty on conflicting edges. At some point (by default when there are fewer than 100 conflicts), to hasten
convergence, no new conflicts are authorized. The algorithm is essentially this:</p>
<pre><code>tour1 = initial_tour1
tour2 = initial_tour2
penalty[e] = 1.0 for every edge e in initial_edgelist
num_conflicts = compute_conflicts();
while (num_conflicts &gt; 0)
    for every edge e in conflict
        penalty[e] = penalty[e] * conflict_increase
    edgelist = initial_edgelist
    if num_conflicts &lt; lim_conflict
        remove from edgelist every edge that is in tour2 but not in tour1
    solve tour1 with linkern considering edges in edgelist with cost[e] = length[e]*penalty[e];
    edgelist = initial_edgelist
    if num_conflicts &lt; lim_conflict
        remove from edgelist every edge that is in tour1 but not in tour2
    solve tour2 with linkern considering edges in edgelist with cost[e] = length[e]*penalty[e];
    num_conflicts = compute_conflicts();
</code></pre>
<p>At the end of the algorithm, with a good choice of parameters, the solution is typically around 6542000 but can be lower. Most of the time there is a gap between the lengths of the two tours, for example something like 6542000 and 6535000.</p>
<p><strong>Phase 3 - Improvement</strong></p>
<p>In this phase we try to improve the solution generated in phase 2 by using a crude version of the Lin-Kernighan algorithm (Double-Lin-Kernighan) adapted to the two-tours version and using the penalty from phase 2. The idea, similar to what the Lin-Kernighan
algorithm does, is to build a construction R1(1)-A1(1)-R2(1)-A2(1)-R2(2)-A3(2)-R4(2)-A4(2), where Re(t) removes edge e from tour t and Ae(t) adds edge e to tour t. The adaptation to the two-tours version lies in the fact that when a conflict is introduced by closing
a tour in the substitution, the algorithm can readily repair it by switching to the other tour and removing the conflicting edge.</p>
<p>The substitution must have the following property:</p>
<p>1. It must form two tours (with possible conflicts)<br>
2. If the sequence is stopped before the end by a removed arc, it can be closed (forming a tour) by adding an arc from the initial edge list.<br>
3. Given a metric, a sequence that starts at the beginning and ends with a removed arc always has a positive value when evaluated according to the metric.</p>
<p>Two metrics are used in this phase. The first one (balanced/improved) aims at closing the gap between the values of the two tours and decreasing both tour lengths while not increasing the number of conflicts too much. The second one (feasibility) aims at
reducing the number of conflicts while not lengthening the worst tour too much. The metrics are calibrated such that, when executed one after the other, the two approaches produce a solution with slowly decreasing penalty and slowly decreasing worst-tour value.
The algorithm is:</p>
<pre><code>Make one pass of Double-Lin-Kernighan with metric 1 (balanced/improved)   // get balanced solution with few conflicts
for i &lt; num_iter
    Make one pass of Double-Lin-Kernighan with metric 2 (feasibility)
    Make one pass of Double-Lin-Kernighan with metric 1 (balanced/improved)
    edgelist = initial_edgelist
    remove from edgelist every edge in tour2 but not in tour1
    solve tour1 with linkern considering edges in edgelist with cost[e] = dist[e]*penalty[e];
    edgelist = initial_edgelist
    remove from edgelist every edge in tour1 but not in tour2
    solve tour2 with linkern considering edges in edgelist with cost[e] = dist[e]*penalty[e];
Do phase 2 to repair the few conflicts that remain
</code></pre>
<p>The third phase is quite good at reducing the initial gap between the tours. However, it is not very efficient at further reducing the worst-tour value and, past a certain point, stops being efficient at all (probably when the penalties are too high).</p>
<p>The final solution is obtained by removing the longest arc from each tour.</p>
<p><strong>Source code</strong></p>
<p>It can be compiled with the command <em>make all</em>. To run it you need the linkern executable from the concorde distribution (<a href="http://www.tsp.gatech.edu/concorde.html">http://www.tsp.gatech.edu/concorde.html</a>). It should compile on most unix-like
platforms. I've tested it on ubuntu linux and the macos X terminal.</p>
<p>tspsanta.cpp is the main program. It parses the command line and calls the whole algorithm or any phase individually.</p>
<p>PenLinkern.cpp and .h is the class that implements Phase 1 and Phase 2.</p>
<p>DblLinkern.cpp and .h is the class that implements Phase 3.</p>
<p>My best score was obtained by running the following command line (note that the parameters used are the default ones, and the score can vary since there is randomness in the concorde linkern):</p>
<pre><code>santatsp doall santa_cities.csv tour1.txt tour2.txt edges.txt penalty.txt solution.csv 14 30 120 30 1.005 100 50
</code></pre>
<p>The command line parameters are:</p>
<p><strong>phase</strong> : (init, create, solve, improve, solution, doall)<br>
<strong>probname</strong> : the filename of the problem (default: santa_cities.csv)<br>
<strong>tour1filename</strong> : the filename of the tour1 (default: tour1.txt)<br>
<strong>tour2filename</strong> : the filename of the tour2 (default: tour2.txt)<br>
<strong>edgefilename</strong> : the filename of the edgelist passed to linkern (default: edges.txt)<br>
<strong>penaltyfilename</strong> : the filename of the penalty file (default: penalty.txt)<br>
<strong>solfilename</strong> : the solution filename (solution.csv)<br>
<strong>numneighbors</strong> : the number of closest neighbors considered in the init phase<br>
<strong>initduration</strong> : the duration of the linkern optimisation during phase 1<br>
<strong>solveduration</strong> : the duration of the linkern optimisation during phase 2<br>
<strong>improveduration</strong> : the duration of the linkern optimisation during phase 3<br>
<strong>penaltyincrease</strong> : the increase in penalty at every iteration an edge is in conflict<br>
<strong>conflictlimit</strong> : if the number of conflicts is lower than this value, no new conflicts are permitted (default: 100).<br>
<strong>improveiter</strong> : The number of improvement iteration (default : 50).</p>
<p>To run a quick try (well not too long) I suggest :</p>
<p>solveduration : 30<br>
penaltyincrease : 1.025<br>
conflictlimit : 500<br>
improveiter : 5</p>
<p><span style="line-height:1.4em">Thanks to the Kaggle team for organizing this competition.</span></p>
<p> </p> |
Traveling Santa Problem | Source code of the prize winners | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Traveling Santa Problem
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The attached file is my final submission.</p>
<p>Finishing in 2nd place, I was required to publish my source code. It is kind of embarrassing compared to the detailed and amazing explanation provided by the winner team, but I will try to share the main points of my approach below.</p>
<p>I use only simple code written by me (a single Java file attached, with about 1,400 lines). This may be a disadvantage in this kind of competition, but a major part of the fun for me is building everything from scratch, although usually suboptimally :-)</p>
<p>The main methods of my solution are:</p>
<p style="padding-left:30px"><strong style="font-size:13px; line-height:1.4em">1) buildNearestNeighbor()</strong><span style="font-size:13px; line-height:1.4em">: build an initial solution using nearest neighbor. At the end of this step and during all the other
steps the solution must remain valid, i.e., connect all points without using a connection between a pair of points more than once.</span></p>
<p style="padding-left:30px"><strong style="font-size:13px; line-height:1.4em">2) moveSegments():</strong><span style="font-size:13px; line-height:1.4em"> remove a segment (with one or more consecutives edges) and try to fit it somewhere else.</span></p>
<p style="padding-left:30px"><strong style="font-size:13px; line-height:1.4em">3) breakAndReconect()</strong><span style="font-size:13px; line-height:1.4em">: remove few edges from the same path and try to reconnect them in a different order (checking all permutations).</span></p>
<p style="padding-left:30px"><strong style="font-size:13px; line-height:1.4em">4) moveEnds():</strong><span style="font-size:13px; line-height:1.4em"> try to move the endpoints.</span></p>
<p style="padding-left:30px"><strong style="font-size:13px; line-height:1.4em">5) fixRegion():</strong><span style="font-size:13px; line-height:1.4em"> similar to breakAndReconnect, but working with the two paths at the same time. It uses a lot of pruning to
avoid useless permutations.</span></p>
<p><span style="font-size:13px; line-height:1.4em">Other observations:</span></p>
<ul>
<li><span style="font-size:13px; line-height:1.4em">The “movements” methods (2, 3 and 5) can all be tuned, changing the number of neighbors to consider, running time, enable/disable Simulated Annealing (SA) and its temperature, etc. It seems that SA helped
a lot.</span> </li></ul>
<ul>
<li><span style="font-size:13px; line-height:1.4em">Although methods 2 and 3 only work with one path at a time, when there is a movement that would improve the solution but cannot be made because a single connection is already in use by the other path, they check whether there is a simple movement that "releases" this connection at a cost smaller than the gain of the first movement.</span>
</li></ul>
<ul>
<li><span style="font-size:13px; line-height:1.4em">From time to time, if improvements have been found, it saves the new solution to a file, which can be loaded in the next execution.</span>
</li></ul>
<ul>
<li><span style="font-size:13px; line-height:1.4em">Usually I ran 3 or 4 instances with different parameters during the night and took the best one on the next day to continue from there.</span>
</li></ul>
<ul>
<li><span style="font-size:13px; line-height:1.4em">There is an auxiliary method that I called “equalize()” which swaps a pair of edges between the paths, making their length similar. For example: IF path 0 is :
<strong>... - A - B - ... - C - D - ...</strong> and path 1 is <strong>... - A - C - ... - B - D - ...</strong> AND len(path 0) > len(path 1) AND (dist(A,B) + dist(C,D)) > (dist(A,C) + dist(B,D)) THEN inverting everything between B-C in both paths is a valid
movement that usually will make the length difference smaller. This helped a lot because before implementing it I couldn’t control the big gap that other random movements were generating. So the goal was simplified to optimize any path, not only the longest
one, because it was easy to equalize their length. It also shuffles things a little bit, and that helped other methods finding improvements, so it was called from time to time.</span>
</li></ul>
<ul>
<li><span style="font-size:13px; line-height:1.4em">I kept a list of nearest neighbors for each point using a 4-quadrant division (it took the closest point of each quadrant and ordered them by distance, then the second closest and so on).</span>
</li></ul>
<ul>
<li><span style="font-size:13px; line-height:1.4em">Running the attached code as it is (with the hard-coded parameters), allowing 3 hours (one hour for each of the main steps - 2, 3 and 5), gives a score around 6,580,000. The rest of the gain came from running
for a longer time and running again and again with different parameters.</span> </li></ul>
Tweet Sentiment Extraction | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to my great teammate @wowfattie for another interesting competition journey. And congratulations to all the winners!</p>
<p>In this post, I will briefly describe our path in this competition and then go straight to our final model description.</p>
<h2>Motivation to join</h2>
<p>I've joined this competition relatively late (about a month before the deadline). Probably, it was even an advantage, keeping in mind all the data problems in this competition.</p>
<p>My motivation to join this competition was to make myself familiar with the recent NLP models and approaches. The last time I was working with the text data was pre-Transformers era.
<strong>Spoiler: I haven't learnt a lot about recent NLP trends</strong></p>
<p>I had read the basic theory of Tranformers and Bert-like models. Then it was very helpful to go through the code and Youtube videos shared by @abhishek. Thanks again! Basically I've built all my models based on this code.</p>
<h2>Data property (a.k.a. Magic)</h2>
<p>After some tweaking of the initial Roberta model, I've managed to get 0.714 on the Public Leaderboard. Then I've tried to delete the preprocessing that was doing <code>" ".join(text.split())</code> and unexpectedly Public LB score jumped to 0.716.</p>
<p>It was at a similar time moment when the magic post has been shared. So, I decided to dig into what is hidden in the space distribution. And it occurred that the majority of the labels noise could be explained by the extra spaces in the text.</p>
<p>First, thanks to @maxjon for creating a <a href="https://www.kaggle.com/maxjon/complete-tweet-sentiment-extraction-data">correspondence dataset</a> between the original and Kaggle datasets. I'm still not sure about the exact rules this data property follows, but here are our most frequent findings:
</p>
<ul>
<li>Extra spaces could come either from the initial tweet text or from the deleted @mentions from the original dataset</li>
<li>Start of the actual label is shifted left by the number of extra spaces</li>
<li>End of the actual label is also shifted left by the number of extra spaces, but then shifted right by 2 characters</li>
</ul>
<p>Seems like the initial labeling has been done with <code>" ".join(text.split())</code> preprocessing, and that caused the labels shift in the actual texts. Maybe someone else could shed more light on this. For example, (spaces are substituted with |):</p>
<pre><code># Original dataset text:
"@blueskiesxj||i|like|yours|too|||i|enjoy|your|photography.|=]"
# Kaggle dataset text:
"||i|like|yours|too|||i|enjoy|your|photography.|=]"
</code></pre>
<p>So, we could observe 2 extra spaces in the beginning of the tweet, and 2 extra spaces in the middle, 4 in total. Probably, the actual selected text was <code>"enjoy"</code>, while in the dataset it is a shifted version: <code>"i enj"</code>.</p>
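<p>A rough illustration of that shift rule (one possible reading of the findings above, not the exact code used): given the Kaggle text and the character span of a clean prediction, count the extra spaces before the span, shift the start left by that amount, and shift the end left by the same amount and then right by 2.</p>
<pre><code>def shift_to_noisy_label(text, start, end):
    prefix = text[:start]
    # characters that " ".join(prefix.split()) would drop == number of extra spaces
    extra = len(prefix) - len(" ".join(prefix.split()))
    if extra == 0:
        return text[start:end]
    new_start = max(start - extra, 0)
    new_end = max(end - extra + 2, new_start)   # shifted left, then right by 2
    return text[new_start:new_end]
</code></pre>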
<h2>Final model</h2>
<p>After this finding, the problem is transformed from modeling into properly utilizing this data property. First, we built postprocessing that gave 0.721 Public LB for a single 5-fold Roberta.</p>
<p>Then we changed the problem a little bit: first, invert this labeling transformation on the train data to get cleaner labels; then fit the model on the clean data and transform predictions back to the noisy labels with postprocessing. Such an approach allowed us to get 0.723-0.724 Public LB scores with a 5-fold Roberta model. And the validation score closely followed the Leaderboard.</p>
<p>Other parts of our solution include:
</p>
<ul>
<li>Using a weighted sum of cross-entropy losses (start and end) and a lovasz loss for span prediction (see the loss sketch below)</li>
<li>Using pseudo-labels from the Sentiment140 dataset</li>
<li>Ensembling multiple models. Our best submission includes 18 5-fold models: 8 Robertas, 4 Electras, 6 Berts Large</li>
</ul>
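<p>The exact loss is not shown here; one plausible sketch combines the start/end cross-entropies with a binary Lovasz hinge over the token-level "inside the span" mask (the Lovasz part follows the standard Berman et al. formulation; the weights and the <code>span_logits</code> head are assumptions):</p>
<pre><code>import torch
import torch.nn.functional as F

def lovasz_grad(gt_sorted):
    # gradient of the Lovasz extension w.r.t. sorted errors
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.float().cumsum(0)
    union = gts + (1 - gt_sorted).float().cumsum(0)
    jaccard = 1.0 - intersection / union
    if len(gt_sorted) &gt; 1:
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge(logits, labels):
    # binary Lovasz hinge for one sequence; logits/labels are 1-D over tokens
    signs = 2.0 * labels.float() - 1.0
    errors = 1.0 - logits * signs
    errors_sorted, perm = torch.sort(errors, descending=True)
    grad = lovasz_grad(labels[perm])
    return torch.dot(F.relu(errors_sorted), grad)

def span_loss(start_logits, end_logits, span_logits, start_idx, end_idx, span_mask,
              w_ce=1.0, w_lovasz=0.5):
    ce = F.cross_entropy(start_logits, start_idx) + F.cross_entropy(end_logits, end_idx)
    lov = torch.stack([lovasz_hinge(s, m) for s, m in zip(span_logits, span_mask)]).mean()
    return w_ce * ce + w_lovasz * lov
</code></pre>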
<p>It allowed us to get 0.731 Public LB and 0.728 Private LB.</p>
<p>To conclude, it was a nice experience cracking the problem, while I haven't tried a lot of new NLP stuff.</p>
<p>Also, I want to thank the <a href="https://www.hostkey.com/">hostkey</a> provider for giving me a <a href="http://landing.hostkey.com/grant_for_winners">grant</a> on using the machine with 4x1080TIs for this competition. It was a smooth experience, highly recommend it!</p> |
Tweet Sentiment Extraction | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="/cdeotte">@cdeotte</a> ! Same to you!</p>
<p>I quickly tried it, but it seems to always replace with full text. Any idea?</p>
<p>You have the original data so you can just check the indices of the mappings.</p>
<p>Thanks! Why would you need a spell checker for that? </p>
<p>Stress was highest when the "magic" post appeared and continued to rise when the golden line came closer and closer. Last few days were quite OK funnily :D</p>
<p>Thanks a lot to Kaggle for hosting this competition! It was quite a fun, but very exhausting ride and I am very happy with achieving my goal of solo gold!</p>
<h3>Data</h3>
<p>As we all have seen, there is quite some noise in the data. Many people found pre- or post-processing techniques for dealing with it. Others had really quite clever model adaptions to address it. I personally think there are actually two types of facets in the data. Let us discuss the following example:</p>
<p><code>That's awesome!!!</code> --> <code>s awesome!</code></p>
<ol>
<li><p>Let us first focus on the second selected word <code>awesome!</code>. Here, we can see stark differences in how the end (special) characters of words are selected. I believe this is due to human annotators having different preferences, and is real noise in the data. I personally would not directly know if I select all exclamation marks, only one, or none. So that there is difference in the data seems natural to me.</p></li>
<li><p>Unfortunately, there appears to also be a form of leak in the data (see the first word). When the data was first fixed, we learned that the labeling service only provided the indices back, but did some preprocessing on their own. We learned that they escaped HTML tags and that caused a shift in the indices. Unfortunately, this was not the only thing they did; they also apparently replaced multiple spaces with single ones, which again led to wrong indices being used by Kaggle.</p></li>
</ol>
<p>I personally think the first case is interesting to model, and the second case is based on wrong data processing upfront. I identified the second issue quite early, but just could not find a reliable way to build a PP that works. I don't think it is a consistent error, as it is not always the case, as is for example evident from neutral data where you have a nice baseline with selecting the full text. Many times if there is a space in the beginning, the selected text is still the full text. So maybe there are also two or more labeling processes in play, some correct, some not. But I just could not fully figure it out.</p>
<h3>Tokenization</h3>
<p>I decided to focus on tokenization to address some of these issues. With default Roberta tokenization, you can always only predict <code>!!!</code> as those tokens are combined into a single one. This heavily limits your model's ability to learn the noise and differences here, and if the majority of people select a single exclamation mark, you won't be able to predict it unless you do some PP. What I did was modify the <code>merges.txt</code> file of the tokenizer. It has for example a line <code>!! !</code>, which means that those subsequent character spans are always merged into a single token. I started removing all those lines with dots and exclamation marks, and got good boosts on CV right away. In the end, I removed all special character merges:</p>
<pre><code>f = open("merges12.txt", 'w')
for line in open("merges.txt"):
    cols = line.strip().split()
    if cols[0] == "Ġ":
        if cols[1].isalpha() or len(cols[1]) == 1:
            f.write(line)
    elif cols[0].isalpha() and cols[1].isalpha():
        f.write(line)
</code></pre>
<p>I wrote more about it in a separate thread: <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159364">https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159364</a> and feel free to try it out: <a href="https://www.kaggle.com/philippsinger/roberta-base-squad2">https://www.kaggle.com/philippsinger/roberta-base-squad2</a></p>
<p>By doing so, I tried to increase the jaccard score I can get on the original selected text with the given tokenization, which is not perfect out of the box. You can remove as many merges as you want and get a 1.0 jaccard score, e.g. by further removing merges like <code>Ġg onna</code>, but in the end this didn't boost my scores, so I stuck to just the special characters.</p>
<p>I believe my approach is very good at dealing with the real noise in the data (point 1 from above), but not so good with dealing with the artificial noise (point 2 from above). What helped with the artificial one is to keep all spaces intact, so the model learns it a bit. But top solutions handled this way better with either extra heads or post processing.</p>
<h3>Models</h3>
<p>I tried quite a bunch of models, but as for most of us, RoBERTa base worked well enough. I got a tiny boost by using a version pre-trained on Squad2. My head is pretty standard, with the exception that I added a trick I read in the <a href="https://www.kaggle.com/c/tensorflow2-question-answering/discussion/127545">TF competition</a>: I concatenate the output of the start token representations to the input of the end token representations, so the end layer knows which start token was predicted.</p>
<pre><code>start_logits = self.l1(o)
start_token = torch.gather(o,1,start_logits.argmax(dim=1, keepdim=True).repeat(1,1,o.size(2)))
o2 = torch.cat([o,start_token.repeat(1,o.size(1),1)], dim=2)
end_logits = self.l2(o2)
</code></pre>
<p>I fit models with RAdam or AdamW over 5-8 epochs. In epoch 1 I keep the learning rate constant, and after that I do cosine decay. My starting transformer LR is 1e-4.</p>
<p>A "trick" I found is to fit on all data only in the first epoch, and then drop the neutral data and train only on positive and negative. For neutral data I fit separate models, early stopping on neutral eval only. In general, I do early stopping, but sometimes also fixed epochs, and different types of CVs.</p>
<p>As I always do, I never looked at single model OOF scores to make judgements, but always fitted each fold at least 5 times and blended the result. This is way better indicative of test submissions you do (where you at least blend all folds) and also gave me a quite good understanding of the random range. </p>
<h3>Pseudo tagging</h3>
<p>I had high hopes in the beginning for pseudo tagging in this competition, but it was way less helpful than I thought it would be. At first I was hopeful when adding public test to training boosted me from 724 to 728 on the first attempt, but it was lucky. The models are in general really, really good in memorizing here, so for example if you add 728 to train, and then predict again on test, you will always be in the 728 range.</p>
<p>However, based on that I found out that you can use it well to compress models. So for example, you can run 5-bag CV, blend them, predict and then feed this together with train data to another model, and the model is good at predicting unseen data with fewer fits.</p>
<h3>Final subs</h3>
<p>My final subs are heavy blends on many, many model fits. To bring diversity to my subs, one of them is using a blend of regular models, and another one on only pseudo models. Both use the same models for neutral data. The first one is using also an adaption of <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/158613">JEM</a> that was posted. All diversity efforts were quite useless in the end, as both subs scored basically the same and both are under the top 4 subs of my potential private LBs.</p>
<h3>Things that didnt work</h3>
<p>There are too many things, and I am tired to even think of them, but to mention a few:</p>
<ul>
<li>changing the loss function</li>
<li>replacing <em>*</em>* with original swear words</li>
<li>other models</li>
<li>pretraining on twitter data or finetuning language models</li>
<li>including the extra sentiment data</li>
<li>any of my post-processing attempts</li>
</ul>
<p>Yes it was :)</p>
<p>Specifically as the original replacement was so silly... <code>I find that it is cool</code>. --> <code>I find tha**** is cool</code>. Or something along those lines...</p>
<p>Fit five different models with different seeds, average logits, check score.</p>
<p>So in case of 5-fold this would mean 25 model fits (5x5).</p>
<p>Yes, 25 models, or even more.</p>
<p>I think RAdam was pretty much the same as AdamW. Unfortunately, there was not too much to gain in this competition for me by playing with optimizer, scheduler, epochs, etc.</p> |
Tweet Sentiment Extraction | 12th place- predict joint probabilities for start & end indices | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Take a look at graphs above. Likelihood for substring 3 is not just 0.51 * 0.51</p>
<p><code>embeddings_start</code> - embeddings of all tokens treated as starting tokens<br>
<code>embeddings_end</code> - embeddings of all tokens treated as final tokens<br>
To find the logit for each pair (<code>starting_index</code>, <code>final_index</code>) we take the dot product of the corresponding embeddings: sum(<code>embeddings_start[starting_index]</code> * <code>embeddings_end[final_index]</code>).
We do this dot product for all combinations of starting and final indices, which is exactly what "ijk,idk->ijd" does (check some examples <a href="https://stackoverflow.com/questions/55894693/understanding-pytorch-einsum">here</a>). After that we have a tensor with shape (i, j, d) - for each sentence we obtain a square matrix of logits.<br>
(i, j, k) - shape of <code>embeddings_start</code><br>
(i, d, k) - shape of <code>embeddings_end</code><br>
where i is the batch size, j = d is the length of the sentences in a batch, and k is the dimensionality of the embeddings. Actually j = d, but we use two variables j and d only to avoid ambiguity.</p>
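<p>A tiny, self-contained check (not from the original post) that this einsum is just a batched matrix multiplication between the start and end embeddings:</p>
<pre><code>import torch

i, j, k = 2, 5, 8                       # batch size, sentence length, embedding dim
emb_start = torch.randn(i, j, k)
emb_end = torch.randn(i, j, k)

logits_einsum = torch.einsum("ijk,idk-&gt;ijd", emb_start, emb_end)
logits_matmul = emb_start @ emb_end.transpose(1, 2)

assert torch.allclose(logits_einsum, logits_matmul, atol=1e-6)
print(logits_einsum.shape)              # torch.Size([2, 5, 5]): one square matrix per sentence
</code></pre>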
<p>Yes, first we determine the values for the horizontal arrows, which can only be 0.49; after that we can fill in the diagonal one. </p>
<p>First I want to thank <a href="/abhishek">@abhishek</a> for his great <a href="https://www.kaggle.com/abhishek/roberta-inference-5-folds">kernel </a>. I learnt a lot from this work.</p>
<p>List of things that gave significant boost:
1. to use roberta fine-tuned on squad2 dataset (+ <strong>~0.003</strong> in comparison with vanilla roberta)
2. to select not just the most likely starting and final index, but smarter approach (+ <strong>~0.003</strong>)
3. of course ensemble works, in my case it was an ensemble of 2 roberta-based models (+ <strong>~0.004</strong>)
4. I also did post processing, something close to approach described <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159499">here</a> (+ <strong>~0.008</strong>) </p>
<p>I am not going to describe in details all things that I tried because most of them are already covered well in other discussions. I just want to tell about the 2nd point. </p>
<h2>Motivation</h2>
<p>Let's imagine that our model returns vector of probabilities for starting index and vector of probabilities for final index. In the picture below vertical green lines represent probability for starting index and red ones for final index. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F440336%2Fe3e839fa749e2915e501fd4c703768fe%2Fsituation1-2.png?generation=1592506019218143&alt=media" alt=""></p>
<p>In this case we have only 3 possible answers. It is easy to see that likelihoods for 1, 2 and 3 substrings are respectively 0.49, 0.49 and 0.02. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F440336%2F9ee5c2c9c3ced00ec533d860a3bb04a7%2Fsituation1-3.png?generation=1592506579363108&alt=media" alt=""></p>
<p>And if we select the most likely start and end it will mean that we prefer substring 3. We can calculate expectation of Jaccard score for all 3 cases:</p>
<p>substring 1: 0.49 * 1 + 0.49 * 0 + 0.02 * 0.1 = 0.492<br>
substring 2: 0.49 * 0 + 0.49 * 1 + 0.02 * 0.1 = 0.492<br>
substring 3: 0.49 * 0.1 + 0.49 * 0.1 + 0.02 * 1 = 0.118</p>
<p>Obviously the 3rd substring is the worst answer and it is better to select 1 or 2 substring. So, if we can find likelihood for each combination of start index and end index then we can maximize expectation of Jaccard score. The problem is that we cannot always find this likelihood for
each pair of starting and final indices. For example, consider the following case:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F440336%2F4ca0cfae6756861f7ac4f9bd7fe37472%2Fsituation2.png?generation=1592507712450034&alt=media" alt=""></p>
<p>There are 4 possible answers here and we cannot estimate likelihoods correctly for them.</p>
<h2>Proposed solution</h2>
<p>I modified the top level of my model so that it returns pairwise logits instead of separate logits for starting and final indices.</p>
<pre><code>class TweetModel(transformers.BertPreTrainedModel):
    def __init__(self, conf):
        super(TweetModel, self).__init__(conf)
        self.base_bert = BASE_MODEL.from_pretrained(MODEL_PATH, config=conf)
        self.drop_out = nn.Dropout(0.1)
        self.linear_start = nn.Linear(HIDDEN_DIM * 2, INTERNAL_EMBEDDING)
        self.linear_end = nn.Linear(HIDDEN_DIM * 2, INTERNAL_EMBEDDING)
        torch.nn.init.normal_(self.linear_start.weight, std=0.02)
        torch.nn.init.normal_(self.linear_end.weight, std=0.02)

    def forward(self, ids, mask, token_type_ids):
        _, _, out = self.base_bert(
            ids,
            attention_mask=mask,
            token_type_ids=token_type_ids,
        )
        out = torch.cat((out[-1], out[-2]), dim=-1)
        embeddings_start = self.linear_start(self.drop_out(out))
        embeddings_end = self.linear_end(self.drop_out(out))
        logits = torch.einsum("ijk,idk-&gt;ijd", (embeddings_start, embeddings_end))
        return logits
</code></pre>
<p>Now output of the model can be visualized as a matrix. Each pixel is responsible for one possible substring.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F440336%2F25517635b115af2cb1327bbe583d4a9b%2Foutput.png?generation=1592508993879675&alt=media" alt=""></p>
<p>Profit! Now we do not need to estimate the likelihood: our model does it for us, and we can simply optimize the mathematical expectation of the Jaccard score.</p>
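<p>For illustration, a naive way (not the author's code) to pick the answer once you have the pairwise probabilities <code>probs[start, end]</code> is to restrict attention to the top-k candidate spans and choose the one with the highest expected Jaccard against the others:</p>
<pre><code>def token_jaccard(a_start, a_end, b_start, b_end):
    inter = max(0, min(a_end, b_end) - max(a_start, b_start) + 1)
    union = (a_end - a_start + 1) + (b_end - b_start + 1) - inter
    return inter / union

def best_span(probs, top_k=20):
    """probs is an (n, n) array, e.g. the softmax of the pairwise logits matrix."""
    n = probs.shape[0]
    flat = [(probs[s, e], s, e) for s in range(n) for e in range(s, n)]
    candidates = sorted(flat, reverse=True)[:top_k]     # approximate the expectation on top-k spans
    best, best_score = None, -1.0
    for _, s, e in candidates:
        score = sum(p * token_jaccard(s, e, cs, ce) for p, cs, ce in candidates)
        if score &gt; best_score:
            best, best_score = (s, e), score
    return best
</code></pre>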
<p>As for the loss function, I also used cross-entropy as in the original kernel, but with a small fix because the output now has a different shape:</p>
<pre><code>def loss_multiclass(logits, start_positions, end_positions):
    loss_fct = nn.CrossEntropyLoss()
    positions = start_positions * logits.shape[1] + end_positions
    logits = logits.view(logits.shape[0], logits.shape[1] * logits.shape[2])
    total_loss = loss_fct(logits, positions)
    return total_loss
</code></pre>
<p>Yes, It is a good question. Probably it is easier to consider it in terms of flow. These cases can be represented by these graphs:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F440336%2Ffd0a6171e56ee83e805182310bb4437a%2Fflows.png?generation=1592543509611650&alt=media" alt=""></p>
<p>I assume here that starting index cannot be greater than final one. With this assumption the first case has the only solution, at the same time the second case still has infinite number of possible solutions.</p> |
Tweet Sentiment Extraction | 13th Place - From LB 710 to LB 726 - How To | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes it only adds one dimension but it helps. And the error from wrong starting token prediction gets back propagated through end token weights.</p>
<p><code>tru</code> indicates where the special tokens are. So after randomly converting 5% tokens to mask, the line <code>ids[tru] = self.ids[indexes][tru]</code> converts all the special tokens back to their special tokens (in case they were turned into masks).</p>
<p>Thanks. Perhaps 3 and 9 need the presence of some of the others to work. Also note that you need to run 10 CVs to know whether it worked. Maybe you tried and ran one CV but got a low unlucky CV.</p>
<p>Thanks. Good point, we were aware of that problem but decided to ignore it :) This only happened to 72 rows out of the 27481 rows of train, which seems insignificant. Perhaps addressing this would increase CV and LB.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F40f1215ef0f179df63fe1f823e4ee85b%2FScreen%20Shot%202020-06-17%20at%2011.49.26%20AM.png?generation=1592419871473800&alt=media" alt=""></p>
<p>You can also run PP algorithm backwards to correct the train targets, then tokenize, then train, then apply PP forwards. We tried this on some of our models, but forgot to try this on our best model.</p>
<p>If you don't remove white space, then <code>MAX_LEN = 128</code>. I used four Nvidia V100 GPUs as <code>strategy = tf.distribute.MirroredStrategy()</code> for experiments. Each epoch took 2 minutes. So 5 folds took around 30 minutes for 3 epochs or 50 minutes for 5 epochs. I'm not sure what hardware my teammates had.</p>
<p>(I used float32 instead of mixed precision because early experiments seemed more accurate that way. If i used mixed precision then each epoch would be faster than 2 minutes).</p>
<p>Thanks. Congrats on Silver. I also think this helped in shakeup. All our models that broke up tokens increased on private LB. And models that didn't break up tokens decreased.</p>
<p>Thank you to my wonderful teammates Shai <a href="/sgalib">@sgalib</a>, Yasin <a href="/mykttu">@mykttu</a>, Cooleel <a href="/cooleel">@cooleel</a> . Together we achieved Gold and I am so excited to receive my 5th competition Gold and become <strong>Competition Grandmaster</strong> ! </p>
<p>Also congrats to Shai <a href="/sgalib">@sgalib</a> for becoming Competition Grandmaster. And Cooleel <a href="/cooleel">@cooleel</a> for becoming Competition Master!</p>
<h1>Single TensorFlow Roberta-base</h1>
<p>Our best final submission is a single TensorFlow RoBERTa model. We start with my public notebook <a href="https://www.kaggle.com/cdeotte/tensorflow-roberta-0-705">here</a> which has CV 0.705, Public LB 0.709, Private LB 0.713. Then we make 10 changes to increase to CV 0.718, Public LB 0.724, Private LB 0.726</p>
<h1>Validation</h1>
<p>We tested dozens, maybe hundreds of ideas. Since training data was small, for each idea, we ran the local CV 10 times with 10 different K Fold random seeds and averaged the scores (that's 5 folds times 10 equals 50). Each change below increased CV average by at least 0.001</p>
<h1>1. Do not remove extra white space.</h1>
<p>The extra white space contains signal. For example if the text is <code>"that's awesome!"</code> then the selected text is <code>awesome</code>. However if the text is <code>" that's awesome!"</code> then the selected text is <code>s awesome</code>. The second example has extra white space at the beginning of the text. As a result, the selected text has an extra preceding letter.</p>
<h1>2. Break apart common single tokens</h1>
<p>RoBERTa makes a single token for <code>"..."</code>, so your model cannot choose <code>"fun."</code> if the text is <code>"This is fun..."</code>. So during preprocessing, convert all single <code>[...]</code> tokens into three <code>[.][.][.]</code> tokens. Similarly, split <code>"..", "!!", "!!!"</code>. </p>
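<p>A simplified sketch of that splitting step (assuming a Hugging Face <code>transformers</code>-style tokenizer with <code>encode</code>/<code>decode</code>; names and the offset bookkeeping are illustrative, not the exact competition code):</p>
<pre><code>SPLITTABLE = {"...", "..", "!!!", "!!"}

def split_punct_tokens(ids, offsets, tokenizer):
    """Split tokens like "..." into one token per character so a span can end inside them."""
    new_ids, new_offsets = [], []
    for tok_id, (a, b) in zip(ids, offsets):
        piece = tokenizer.decode([tok_id])
        if piece.strip() in SPLITTABLE:
            for k, ch in enumerate(piece.strip()):
                sub = tokenizer.encode(ch, add_special_tokens=False)
                new_ids.extend(sub)
                new_offsets.extend([(a + k, a + k + 1)] * len(sub))
        else:
            new_ids.append(tok_id)
            new_offsets.append((a, b))
    return new_ids, new_offsets
</code></pre>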
<h1>3. Underestimate train targets</h1>
<p>The Jaccard score is higher if you underestimate versus overestimate. Therefore if the text is <code>" Matt loves ice cream"</code> and the selected text is <code>"t love"</code>, then train your model with selected text <code>"love"</code>, not selected text <code>"Matt love"</code>. All public notebooks do the latter; we suggest the former.</p>
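<p>In code, the idea is to keep only tokens that are fully inside the selected-text character span instead of any token that merely overlaps it (a sketch with assumed helper names; <code>sel_end</code> is the inclusive character index):</p>
<pre><code>def target_token_range(offsets, sel_start, sel_end):
    inside = [i for i, (a, b) in enumerate(offsets)
              if a &gt;= sel_start and b &lt;= sel_end + 1]           # fully contained tokens only
    if not inside:                                              # fall back to any overlap
        inside = [i for i, (a, b) in enumerate(offsets)
                  if min(b, sel_end + 1) - max(a, sel_start) &gt; 0]
    return inside[0], inside[-1]
</code></pre>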
<h1>4. Modified Question Answer head</h1>
<p>First predict the end index. Then concatenate the end index logits with RoBERTa last hidden layer to predict the start index.</p>
<pre><code># ROBERTA
bert_model = TFRobertaModel.from_pretrained('roberta-base')
x = bert_model(q_id,attention_mask=q_mask,token_type_ids=q_type)
# END INDEX HEAD
x2 = tf.keras.layers.Dropout(0.1)(x[0])
x2b = tf.keras.layers.Dense(1)(x2)
x2 = tf.keras.layers.Flatten()(x2b)
x2 = tf.keras.layers.Activation('softmax')(x2)
# START INDEX HEAD
x1 = tf.keras.layers.Concatenate()([x2b,x[0]])
x1 = tf.keras.layers.Dropout(0.1)(x1)
x1 = tf.keras.layers.Dense(1)(x1)
x1 = tf.keras.layers.Flatten()(x1)
x1 = tf.keras.layers.Activation('softmax')(x1)
# MODEL
model = tf.keras.models.Model(inputs=[q_id, q_mask, q_type], outputs=[x1,x2])
</code></pre>
<h1>5. Use label smoothing</h1>
<pre><code>loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.2)
</code></pre>
<h1>6. Mask words</h1>
<p>Use data loader to randomly replace 5% of words with <code>[mask]</code> token 50264. Within your dataloader use the following code. We also maintain where the special tokens are so that they don't get replaced</p>
<pre><code>r = np.random.uniform(0,1,ids.shape)
ids[r&lt;0.05] = 50264
ids[tru] = self.ids[indexes][tru]
</code></pre>
<h1>7. Decay learning rate</h1>
<pre><code>def lrfn(epoch):
    dd = {0:4e-5, 1:2e-5, 2:1e-5, 3:5e-6, 4:2.5e-6}
    return dd[epoch]

lr = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=True)
</code></pre>
<h1>8. Train each fold 100% data for submit</h1>
<p>After using normal 5 fold and early stopping, note how many epochs are optimal. Then for your LB submission, run your 5 folds with the fixed epoch number you found using 100% data each fold.</p>
<h1>9. Sample weight positive and negative</h1>
<p>In TensorFlow Keras it is easy to make certain training samples more important. The normal output from <code>class DataGenerator(tf.keras.utils.Sequence)</code> is <code>(X,y)</code>. Instead output <code>(X,y,w)</code> where weight is the same shape as <code>y</code>. Then make <code>w=2</code> for all the positive and negative targets and <code>w=1</code> for all the neutral targets. Then train with the usual TensorFlow Keras calls</p>
<pre><code>t_gen = DataGenerator()
model.fit(t_gen)
</code></pre>
<p>And voila! CV and LB increase by 0.001</p>
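<p>As noted in the comments below, the weight array has shape (n, 1); a minimal way to build it (assuming a <code>train_df</code> with a <code>sentiment</code> column) is:</p>
<pre><code>import numpy as np

# weight 2 for positive/negative rows, weight 1 for neutral rows
w = np.where(train_df['sentiment'].values == 'neutral', 1.0, 2.0).reshape(-1, 1)
# the DataGenerator then yields (X, y, w) so model.fit applies the weights
</code></pre>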
<h1>10. Post process</h1>
<p>The above 9 changes already predict much of the noise. For example the above has no problem with the following 2 examples. Text is <code>" that's awesome!!!"</code> with selected text <code>"s awesome!"</code>. And <code>" I'm thinking... wonderful."</code> with selected text <code>". wonderful"</code>. In each case, the model sees the leading double white space and extracts the single preceding character.</p>
<p>However the model cannot break a single letter off a word like text <code>"went fishing and loved it"</code> with selected text <code>"d loved"</code>. This would require breaking a <code>"d"</code> off of the word <code>"and"</code>. For these difficult cases, we use post process which increase CV 0.0025 and LB 0.0025</p>
<pre><code># INPUT s=predicted, t=text, ex=sentiment
# OUTPUT predicted with PP
def applyPP(s,t,ex):
    t1 = t.lower()
    t2 = s.lower()
    # CLEAN PREDICTED
    b = 0
    if len(t2)&gt;=1:
        if t2[0]==' ':
            b = 1
            t2 = t2[1:]
    x = t1.find(t2)
    # PREDICTED MUST BE SUBSET OF TEXT
    if x==-1:
        print('CANT FIND',k,x)
        print(t1)
        print(t2)
        return s
    # ADJUST FOR EXTRA WHITE SPACE
    p = np.sum( np.array(t1[:x].split(' '))=='' )
    if (p&gt;2):
        d = 0; f = 0
        if p&gt;3:
            d=p-3
        return t1[x-1-b-d:x+len(t2)]
    # CLEAN BAD PREDICTIONS
    if (len(t2)&lt;=2)|(ex=='neutral'):
        return t1
    return s
</code></pre>
<h1>Other ideas</h1>
<p>Our team tried tons more ideas which may have worked if we had spent more time refining them. Below are some interesting things we tried:</p>
<ul>
<li>replacing <code>****</code> with the original curse word.</li>
<li>using part of speech information as an additional feature</li>
<li>using NER model predictions as additional features</li>
<li>comparing test text with train text using Jaccard and using the train selected text when <code>jac &gt;= 0.85</code> and <code>text length &gt;= 4</code>. (This gained 0.001 on public LB but didn't change private LB.)</li>
<li>pretraining with the Sentiment140 dataset as MLM (masked language model)</li>
<li>pseudo labeling the Sentiment140 dataset and pretraining as QA (question answer model)</li>
<li>training a BERT to choose the best prediction from multiple BERT predictions.</li>
<li>stacking BERTs: append output from one BERT to the QA training data of another BERT.</li>
<li>tons of ensembling ideas like Jaccard expectation, softmax manipulations, voting ensembles, etc.</li>
</ul>
<h1>Thank you</h1>
<p>Once again, thank you to my wonderful teammates <a href="/sgalib">@sgalib</a> <a href="/mykttu">@mykttu</a> <a href="/cooleel">@cooleel</a> . And thank you to Kaggle for another fun competition.</p>
<p>The training data are NumPy 2D array of size <code>(26000,MAX_LEN)</code>. And the sample weight is NumPy 2D array of size <code>(26000,1)</code>. You add a weight for each training row. Every row of the train data that is positive or negative gets weight 2 and every row that is neutral gets weight 1.</p>
<p>No, <code>x[1]</code> is sized <code>(batch_size,1,768)</code>. It is just the classification token. We want <code>x[0]</code> which is sized <code>(batch_size,MAX_LEN,768)</code> which includes all the input tokens. (The variable <code>MAX_LEN</code> is the length of all the input tokens).</p>
<p>I forget, only a little i think. Both <code>tf.keras.layers.Dropout(0.1)</code> and <code>r = np.random.uniform(0,1,ids.shape); ids[r<0.05] = 50264</code> are similar and help the model generalize.</p>
<p>Good questions. </p>
<p>We used 10 different seeds for the 10 different CVs,</p>
<pre><code>skf = StratifiedKFold(n_splits=5,shuffle=True,random_state=SEED)
</code></pre>
<p>During validation, callback <code>sv</code> saves the best epoch. And we observe that it is usually epoch 3</p>
<pre><code>t_gen = DataGenerator(ids[idxT,],att[idxT,],tok[idxT,],tru[idxT,],tar1[idxT,],tar2[idxT,])
model.fit(t_gen, epochs=5, verbose=1, callbacks=[lr,sv],
validation_data=([ids[idxV,],att[idxV,],tok[idxV,]], [tar1[idxV,],tar2[idxV,]]))
</code></pre>
<p>During submission we use fixed 3 epochs and notice we don't use the train indices <code>idxT</code>.</p>
<pre><code>t_gen = DataGenerator(ids,att,tok,tru,tar1,tar2)
model.fit(t_gen, epochs=3, verbose=1, callbacks=[lr],
validation_data=([ids[idxV,],att[idxV,],tok[idxV,]], [tar1[idxV,],tar2[idxV,]]))
</code></pre>
<p>This is like cutout data augmentation that is used in image classification. It makes your model smarter to train with some of the input missing.</p> |
Tweet Sentiment Extraction | 14th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p> First of all, i want to thank the Kaggle team for holding this interesting competition, This is my first time to take part in the kaggle competition And is the first time to participate in the discussion, I am very happy and lucky to get the gold medal. The only regret is that I used the fast submission method before, on the last hours, I had to upload my model weights. The upload speed was very slow, so I gave up the model ensemble.</p>
<p>My solution is not special. I tried many tricks, but most of them couldn't improve my score, such as adversarial training, language model pretraining, high dropout, training two models and concatenating embeddings, joint training with an added classification loss, and training different models: albert, xlnet, bert, roberta large, roberta base squad. </p>
<h1>Summary of my useful solution</h1>
<ul>
<li>preprocessing<br>
When I use preprocessing, it gives me 0.712->0.717.</li>
<li>pseudo labeling<br>
When I use pseudo labeling, it gives me 0.717->0.720. Pseudo labeling should use a higher<br>
learning rate and voting through three models to generate labels (a rough sketch of this voting follows the list below). My base model's learning rate is 2e-5; when I use the pseudo-labeled data, it changes to 4e-5.</li>
<li>postprocessing<br>
When I use postprocessing, it gives me 0.720->0.729.</li>
</ul>
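<p>A rough sketch of one way to read the "voting through three models" step above (all names and the agreement threshold are assumptions, not the exact code): keep a pseudo-labeled tweet only when the three models' predicted spans agree closely enough.</p>
<pre><code>def word_jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a &amp; b) / max(len(a | b), 1)

def vote_pseudo_labels(texts, preds_1, preds_2, preds_3, thresh=0.7):
    keep = []
    for text, p1, p2, p3 in zip(texts, preds_1, preds_2, preds_3):
        agreements = [word_jaccard(p1, p2), word_jaccard(p1, p3), word_jaccard(p2, p3)]
        if min(agreements) &gt;= thresh:      # all three models roughly agree
            keep.append((text, p1))        # use one model's span as the pseudo label
    return keep
</code></pre>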
<h1>postprocess code</h1>
<pre><code>def get_orgin_tweet_idx(origin_tweet, selected_text):
    selected_text = selected_text.strip()
    selected_start = selected_text.split()[0]
    for num, i in enumerate(origin_tweet):
        if origin_tweet[num:num + len(selected_start)] == selected_start:
            if ' '.join(origin_tweet[num:].split())[:len(selected_text)] == selected_text:
                return num

def get_pp_result5(orign_tweet, selected_text):
    selected_text = selected_text.strip()
    standard_orig_text = ' '.join(orign_tweet.split())
    standard_selected_text = ' '.join(selected_text.split())
    if standard_orig_text == standard_selected_text:
        return selected_text
    if selected_text == None or selected_text.strip() == '':
        return orign_tweet
    idx = get_orgin_tweet_idx(orign_tweet, selected_text)
    if idx == None:
        return orign_tweet
    if idx == 0:
        return selected_text
    if selected_text[-1].isalpha():
        selected_text = selected_text + ' '
    else:
        if selected_text.endswith('.') and orign_tweet[0] == ' ':
            lst = selected_text.split()[-1]
            tp_idx = -2
            for sst in range(len(lst) - 1, -1, -1):
                if lst[sst] == '.':
                    tp_idx = sst
            if tp_idx != 0:
                selected_text = ' '.join(selected_text.split()[:-1] + [selected_text.split()[-1][:tp_idx + 1]])
            else:
                selected_text = selected_text
    if orign_tweet[idx - 1] == ' ':
        selected_text = ' ' + selected_text
    if orign_tweet[:idx - 1].split(' ').count('') < 2:
        if orign_tweet[:idx - 1].split(' ').count('') == 1 and orign_tweet[idx - 1] == '.':
            return '.' + selected_text.strip()
        return selected_text
    else:
        if orign_tweet[idx - 1] == '.':
            selected_text = '.' + selected_text.strip()
        selected_text = orign_tweet[idx - 1 - (orign_tweet[:idx - 1].split(' ').count('') - 1):idx - 1] + selected_text
        selected_text = selected_text[:len(selected_text) - (orign_tweet[:idx - 1].split(' ').count('') - 1)]
        return selected_text
</code></pre>
<p>For any questions, you can contact me, and I am willing to communicate and learn. </p>
<p>Thanks, this is my pseudo labelling code.</p>
<p>The notebook will be released soon.</p> |
Tweet Sentiment Extraction | 18th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey, sure. Bascially my "find_all" function returns all the valid answers in the text string, so for example. </p>
<pre><code>text = "im so happy happy happy"
selected_text = "happy"
targets = find_all(text, selected_text)
assert targets == [6, 12, 18]
</code></pre>
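<p>A minimal <code>find_all</code> consistent with that example (my reconstruction rather than necessarily the exact helper used here): return the start index of every occurrence of <code>selected_text</code> inside <code>text</code>.</p>
<pre><code>def find_all(text, selected_text):
    starts, pos = [], text.find(selected_text)
    while pos != -1:
        starts.append(pos)
        pos = text.find(selected_text, pos + 1)
    return starts

assert find_all("im so happy happy happy", "happy") == [6, 12, 18]
</code></pre>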
<p>It then gets passed with the attention mask to the smoother which returns a weight which sums to one,
and for which the targets sum to 90 % of (1 - smoothing)</p>
<p>```
<pre><code>class SmoothTarget(torch.nn.Module):
    def __init__(self, smoothing=0.0, istraining=True, model_type="roberta", device="cpu"):
        super(SmoothTarget, self).__init__()
        self.q_ln = 4 if "roberta" in model_type else 3
        self.device = device
        self.training = istraining
        self.smoothing = smoothing
        if not istraining:
            self.smoothing = 0

    def forward(self, targets, atten, prop=0.9):
        if not self.training:
            prop = 1
            targets = targets[:1]
        n_targets = len(targets)
        max_idx = atten.argmax(dim=0)
        atten[max_idx] = 0
        atten[:self.q_ln] = 0
        weight = torch.zeros_like(atten, dtype=torch.float)
        weight[atten == 1] = prop * torch.FloatTensor([self.smoothing]) / (atten.sum() - n_targets)
        weight[atten != 1] = (1 - prop) * torch.FloatTensor([self.smoothing]) / (len(atten) - atten.sum())
        weight.scatter_(-1, targets, (1. - self.smoothing) / n_targets)
        return weight
</code></pre>
<p>Congratulations to all the winners and thanks to Kaggle for the competition. </p>
<h3>Summary of my solution</h3>
<ul>
<li><strong>Model:</strong> RoBERTa base as the only model; my score is a single 5-fold ensemble (Abhishek folds)</li>
<li><strong>Data cleaning:</strong> Recovered the original labels with "magic" being inclusive with words</li>
<li><strong>Preprocessing:</strong> Replaced incorrect words, repeating characters, urls, and "..n" -> ". . . n" <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/150840#847641">explained</a></li>
<li><strong>Label smoothing:</strong> 90% on the text tokens and 10 % on padding and the first four tokens, also did 0.90%/n for each occurrence of the label</li>
<li><strong>Dropout:</strong> Changed all dropout to Multi-Sample Dropout from <a href="https://arxiv.org/pdf/1905.09788.pdf">paper</a> and <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129840">Google QUEST Q&A 1st place</a></li>
<li><strong>Input Shift:</strong> Shifting the input tokens around the padding randomly while training</li>
<li><strong>Postprocessing:</strong> Rather simple "magic'" script to recover the original labels (will post kernel)</li>
<li><strong>Improved Find All:</strong> There are many bugs introduced by just naively getting the label index</li>
</ul>
<h3>What didn't work (excluding most things)</h3>
<ul>
<li><strong>Pseudo Labeling:</strong> The labels were too easy for the model and it just resulted in overfitting </li>
<li><strong>Different learning rates:</strong> Tried different rate for the head, for each layer etc. Nothing worked </li>
<li><strong>Language Model pretraining:</strong> <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/146262">Discussed in detail here</a></li>
<li><strong>Different models:</strong> Tried Roberta Large, Bert, Electra and BERTweet, they all did much worse </li>
<li><strong>Adversarial Training:</strong> <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/143764">Discussed here</a></li>
<li><strong>Under/Oversample:</strong> Tried various configurations such as constant amount of each sentiment, removing short tweets, under-sampling based on sentiment etc. No good results</li>
</ul>
<p>It's too much code to paste here so I attached the model file I used for this. </p>
<p>Regarding the padding, it's a lot simpler; this is my pre-processing for RoBERTa:</p>
<pre><code>def preprocess_roberta(tweet, selected_text, sentiment, tokenizer, max_len, training, do_shift):
bechars = "ABCDEFGHIJKLMNOPQRSTUVWXYZÅÄÖabcdefghijklmnopqrstuvwxyzåäö'0123456789"
tweet = " " + " ".join(str(tweet).split())
selected_text = " " + " ".join(str(selected_text).split())
len_st = len(selected_text) - 1
idx0 = None
idx1 = None
b_cnt = 0
sentiment_ids = {
'positive': 1313,
'negative': 2430,
'neutral': 7974
}
starts = find_all(tweet, selected_text[1:])
all_starts = []
all_ends = []
all_cnts = []
for start_ in starts:
cnt = 0
end_ = start_ + len_st
if len(tweet[end_:]) == 0 or len(tweet[end_:]) != 0 and tweet[end_:][0] not in bechars:
cnt += 1
if start_ == 0 or start_ != 0 and tweet[:start_][-1] not in bechars:
cnt += 1
if cnt == 2:
if len(tweet[end_:]) == 0 or len(tweet[end_:]) != 0 and tweet[end_:][0] == " ":
cnt += 1
if start_ == 0 or start_ != 0 and tweet[:start_][-1] == " ":
cnt += 1
if cnt >= b_cnt or b_cnt == 0:
idx0 = start_
idx1 = end_ - 1
b_cnt = cnt
if cnt >= 2:
all_starts.append(start_)
all_ends.append(end_ - 1)
all_cnts.append(cnt)
if len(all_cnts) > 1 and b_cnt >= 3:
new_starts = []
new_ends = []
for cnt_idx in range(len(all_cnts)):
cnt = all_cnts[cnt_idx]
start_ = all_starts[cnt_idx]
end_ = all_ends[cnt_idx]
if cnt >= b_cnt:
new_starts.append(start_)
new_ends.append(end_)
# else:
# print(start_)
# print(end_)
# print(tweet)
# print(selected_text)
all_starts = new_starts
all_ends = new_ends
if len(all_starts) == 0:
all_starts.append(idx0)
all_ends.append(idx1)
tok_tweet = tokenizer.encode(tweet)
input_ids_orig = tok_tweet.ids
tweet_offsets = tok_tweet.offsets
padding_length = max_len - (5 + len(input_ids_orig))
before = 0
if training and do_shift and random.random() < 0.20:
before = int(random.random() * padding_length)
after = padding_length - before
targets_start = []
targets_end = []
for idx0, idx1 in zip(all_starts, all_ends):
char_targets = [0] * len(tweet)
if idx0 != None and idx1 != None:
for ct in range(idx0, idx1 + 1):
char_targets[ct] = 1
if len(tok_tweet.overflowing) > 0:
print("OVERFLOWING", tweet)
target_idx = []
for j, (offset1, offset2) in enumerate(tweet_offsets):
if sum(char_targets[offset1: offset2]) > 0:
target_idx.append(j)
targets_start.append(target_idx[0] + 4 + before)
targets_end.append(target_idx[-1] + 4 + before)
input_ids = [0] + [sentiment_ids[sentiment]] + [2] + [2] + [1] * before + input_ids_orig + [2] + [1] * after
token_type_ids = [0, 0, 0] + [0] + [0] * before + (len(input_ids_orig)+1) * [0] + [0] * after
attention_mask = [1, 1, 1] + [1] + [0] * before + (len(input_ids_orig)+1) * [1] + [0] * after
# for idx0, idx1 in zip(targets_start, targets_end):
# if tokenizer.decode(input_ids[idx0:idx1+1]).strip() != selected_text.strip().lower():
# print(tokenizer.decode(input_ids[idx0:idx1+1]).strip())
# print(selected_text.strip())
# assert tokenizer.decode(input_ids[idx0:idx1+1]).strip() == selected_text.strip().lower()
return { 'input_ids': input_ids,
'attention_mask': attention_mask,
'token_type_ids': token_type_ids,
'targets_start': targets_start,
"targets_end" : targets_end,
"offsets" : 0,
}
</code></pre>
<p>Yes, almost like that, one sentence could be modified to be something like this:</p>
<p>[cls][sentiment][sep][pad]..[pad][sep][tweet][sep][pad]…[pad]</p>
<p>This actually helped a lot for this dataset. </p>
<p>Another thing which I didn't mention here: I tried a sort of latent mixup where I took two positive<br>
sentences and mixed them together at some ratio at some random layer [6-12]. To do this properly I had to find the overlap between the two tweets' padding so they could start at the same token. I also had to modify the attention masks when I did the mixup. </p>
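<p>For illustration, a very rough sketch of that latent-mixup step (the forward hook on a random layer, the padding alignment and the attention-mask adjustment described above are omitted; the Beta parameter and all names are assumptions):</p>
<pre><code>import random
import torch

def latent_mixup(hidden_a, hidden_b, alpha=0.4):
    # hidden_a / hidden_b: hidden states of two positive tweets taken at the same
    # intermediate layer, already aligned so both tweets start at the same token.
    lam = random.betavariate(alpha, alpha)
    return lam * hidden_a + (1 - lam) * hidden_b
</code></pre>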
Tweet Sentiment Extraction | 21st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, I will update the code soon.</p>
<p><a href="/udaygurugubelli">@udaygurugubelli</a> thanks. First of all, we selected the texts where the number of extra spaces is greater than 1, then we find the number of extra_spaces and adjust the start and end offsets.</p>
<p>For Example:</p>
<pre><code>text: is back home gonna miss every one
selected_text: onna
model_predicted_text: miss

splitted_text = text.split(model_predicted_text)[0]
extra_spaces = len(splitted_text) - len(" ".join(splitted_text.split()))
# in this case we will have extra_spaces = 5

start = text.find(model_predicted_text) - extra_spaces
end = start + len(model_predicted_text)
selected_text = text[start: end]
# this will return onna
</code></pre>
<p><a href="/renqinnlp">@renqinnlp</a> There are texts in the train data with length greater than 64, taking such a short MAX_LEN doesn't make sense, i have with MAX_LEN <code>128</code>, <code>168</code> and <code>192</code> out of all 168 works best for us. </p>
<p>No, i didn't find any case in train or public test where <code>MAX_LEN > 168</code>, didn't know why but MAX_LEN 192 worked for some people.</p>
<p>Hello everyone, first of all I would like to thank my great teammates <a href="/ryoya0902">@ryoya0902</a>, <a href="/mks2192">@mks2192</a>, <a href="/shahules">@shahules</a> and <a href="/kpriyanshu256">@kpriyanshu256</a> for such an interesting competition journey. And congratulations to all the winners!</p>
<p>It's been a great competition, and we have spent a lot of time on it; we are glad to share that all the hard work finally paid off.</p>
<h2>Quick Summary of things that worked</h2>
<ul>
<li>RoBERTa base as our base model with MAX_LEN = 168</li>
<li>Used preprocessing (this boosted our local CV from 0.711 to 0.717)</li>
<li>Trained for 8 folds (8 folds were giving better CV in our case)</li>
<li>At last we used the so-called magic (Post Processing) (boosted our LB from 0.717 to 0.724)</li>
<li>Validating multiple times per epoch also improved our CV.</li>
</ul>
<h2>Things that didn't work</h2>
<ul>
<li>We tried to solve this problem as NER and as unsupervised text selection; neither worked.</li>
<li>We tried various model architectures, but none of them worked well except <code>roberta-base</code>.</li>
<li>We tried training a separate model for each sentiment, but that didn't work either.</li>
<li>We also tried a lot of preprocessing techniques like removing all the noisy samples, cleaning text, etc., but none worked.</li>
<li>We also tried BERTweet; it also didn't work for us.</li>
<li>We also tried augmentations like synonym replacement, etc.</li>
<li>Augmentation kernel can be found <a href="https://www.kaggle.com/rohitsingh9990/data-augmentation-by-synonym-replacement">here</a></li>
</ul>
<h2>Preprocessing that worked</h2>
<p>We know that there is a huge amount of noise in samples where some extra space is present in between words, for example:</p>
<pre><code>text: is back home gonna miss every one
selected_text: onna
after_preprocessing: miss
</code></pre>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3982638%2F374c9e6d4fa5521778f8268b26bb4f53%2FScreenshot%202020-06-17%20at%207.46.08%20AM.png?generation=1592360225973868&alt=media" alt=""></p>
<ul>
<li>The catch was you need to shift start and end indices by the number of extra spaces.</li>
<li>preprocessing kernel can be found <a href="https://www.kaggle.com/rohitsingh9990/preprocessing-that-worked?scriptVersionId=36556534">here</a></li>
</ul>
<h2>Training</h2>
<ul>
<li>After applying preprocessing, we trained our model for 8 folds</li>
<li>We used the same model architecture as shared by <a href="/abhishek">@abhishek</a> (Note: need to clean the training kernel, will share soon)</li>
</ul>
<h2>Inference</h2>
<ul>
<li>After getting the <code>selected_text</code> from the model we apply our <code>post processing</code>; it's just the reverse engineering of the preprocessing kernel shared above.</li>
<li>It's very similar to what's been discussed <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159245">here</a></li>
<li>And finally the inference kernel can be found <a href="https://www.kaggle.com/rohitsingh9990/roberta-prepost-0-726">here</a></li>
</ul>
<p>At last i would like to thank <a href="/abhishek">@abhishek</a> and <a href="/cdeotte">@cdeotte</a> for their amazing starter kernels and their spirit of public sharing, which helped other kagglers a lot.</p>
<p>Thanks:)</p>
<p>Thanks:)</p>
<p><a href="/kurianbenoy">@kurianbenoy</a> it doesn't matter much if you know one of them very well.</p>
<p>All of our team members know both Pytorch and Keras, we did experiment in both of them.</p>
<p>I mainly work with pytorch.</p> |
Tweet Sentiment Extraction | 22nd place solution with codes | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Congrats to all winners, and thanks to my teammates!</strong> And silver medal again 😂.</p>
<h2>Our solution could be summarized as:</h2>
<ol>
<li>Post processing: as <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159260">Firas' kernel</a> showed, we applied this postprocessing, which helps us to reverse engineer some samples. This pp increases our single model from 0.716 to 0.724. Thanks to Firas for finding the idea of postprocessing!</li>
<li>Ensemble: we trained 3 sets of roberta-base models with different random seeds, one albert-large model and one xlnet-base model. We calculate token-level offsets and word-level offsets. After we get logits, we transform token-level logits into word-level logits (using the maximum probability of the tokens inside a word as the word probability), then we average word-level logits of all models to get word-level bounding boxes; we then view roberta-base as the base model and further select start and end tokens from the start and end words (a rough sketch of the token-to-word merge follows this list).</li>
<li><p>Pseudo labeling: by using the original dataset for pseudo labeling, we trained 2 rounds of pseudo labels for roberta-base and albert-large. CV of roberta-base increases from 0.722 to 0.724, and CV of albert-large increases from 0.720 to 0.722; as for xlnet-base, since JianJian's server is down😂, we didn't have time to train that.</p></li>
</ol>
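<p>A rough sketch of the token-to-word merge mentioned in point 2 (not the authors' exact code; <code>word_ids</code> is assumed to come from the tokenizer's word mapping, with <code>None</code> for special tokens):</p>
<pre><code>import numpy as np

def token_to_word_probs(token_probs, word_ids):
    # Use the maximum probability of the tokens inside a word as the word probability.
    n_words = max(w for w in word_ids if w is not None) + 1
    word_probs = np.zeros(n_words)
    for tok_idx, w in enumerate(word_ids):
        if w is not None:
            word_probs[w] = max(word_probs[w], token_probs[tok_idx])
    return word_probs
</code></pre>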
<h2>Things we thought about but haven't tried:</h2>
<ol>
<li>CNN head for the model.</li>
<li>Char-level model.</li>
</ol>
<h2>Our kernel:</h2>
<p><a href="https://www.kaggle.com/jionie/preprocessing-new-pipeline-pseudo-model-ensemble">https://www.kaggle.com/jionie/preprocessing-new-pipeline-pseudo-model-ensemble</a></p>
<h2>Our codes:</h2>
<p><a href="https://github.com/jionie/Tweet">https://github.com/jionie/Tweet</a></p>
<p>Thanks! I hadn't found the magic either, I know that feeling. 😂 </p>
Tweet Sentiment Extraction | 2nd place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, thank you for asking!
Our codes are now available, so you can check implementation here.
<a href="https://www.kaggle.com/hiromoon166/tsepipeline">https://www.kaggle.com/hiromoon166/tsepipeline</a></p>
<p>Thank you and you too!! I'll learn a lot from your solution😊 </p>
<p>Thank you!!</p>
<p>SequenceBucketing is a really popular method. You can refer to the below links.
<a href="https://www.kaggle.com/seesee/faster-2x-tf-roberta">See--'s kernel</a>
<a href="https://youtu.be/HCg_ewbNEss">dott's presentation</a></p>
<p>SentimentSampler is much simpler. You can refer to standard approaches for imbalanced data, etc...</p>
<p>We are planning to publish postprocessing code soon. Please check it👍</p>
<p>Our postprocessing code available here.
<a href="https://www.kaggle.com/futureboykid/2nd-place-post-processing">https://www.kaggle.com/futureboykid/2nd-place-post-processing</a></p>
<p>Thank you! Sequence bucketing is not necessary in this comp because the dataset is too small. But it can save train & inference time, so we can try more experiments😄</p>
<p>P.S.
I especially recommend dott’s presentation in KaggleDaysTokyo.</p>
<p>First of all, we'd like to thank the Kaggle team for holding this interesting competition. And, I want to thank my teammates( <a href="/yuooka">@yuooka</a>, <a href="/futureboykid">@futureboykid</a>) for their hard work.</p>
<p>This post is an overview of our solution, which consists of four major parts (preprocessing, base-model training, reranking-model training, postprocessing).</p>
<h1>Preprocessing</h1>
<p>We simply apply the postprocessing method as preprocessing, referring to <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/154415#864905">this post</a>. Using this method, we can train models with less noise.</p>
<p>ex) selected_text "e fun" -> "fun"</p>
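<p>A minimal sketch of what such a cleaning step could look like for the example above (an illustration of the idea only, not the actual preprocessing code):</p>
<pre><code>def clean_selected_text(text, selected_text):
    # If the labelled span starts in the middle of a word (e.g. "e fun"),
    # drop the leading fragment so the label becomes "fun".
    start = text.find(selected_text)
    if start > 0 and text[start - 1] != " ":
        first_space = selected_text.find(" ")
        if first_space != -1:
            return selected_text[first_space + 1:]
    return selected_text
</code></pre>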
<h1>Base-model training</h1>
<p>Our training method is fundamentally the same as <a href="https://www.kaggle.com/abhishek/roberta-inference-5-folds">this great kernel</a>. </p>
<h2>Model Architecture</h2>
<p>We have tried so many model architectures... However, we finally settled on these two, both using RoBERTa pretrained on SQuAD2.</p>
<ol>
<li><p>Using the 11th (base) or 23rd (large) hidden layer as output. (<a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/148417#833085">This post</a>) See the sketch after this list.</p></li>
<li><p>Using a trainable vector, applying softmax over it & multi dropout. (<a href="https://github.com/oleg-yaroshevskiy/quest_qa_labeling/blob/master/step5_model3_roberta_code/model.py">Google Quest 1st</a>) (We will refer to it hereafter as MDO.)</p></li>
</ol>
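<p>A minimal sketch of taking a specific hidden layer as output (plain <code>roberta-base</code> is used here for illustration; the actual models were RoBERTa pretrained on SQuAD2, and the layer index only mirrors the idea in point 1):</p>
<pre><code>from transformers import RobertaModel

model = RobertaModel.from_pretrained("roberta-base", output_hidden_states=True)

def hidden_layer_output(input_ids, attention_mask, layer=11):
    out = model(input_ids=input_ids, attention_mask=attention_mask)
    # out.hidden_states is a tuple: embeddings + one tensor per encoder layer
    return out.hidden_states[layer]  # (batch, seq_len, hidden_size)
</code></pre>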
<h2>Loss</h2>
<p>We tried a lot, but finally chose simple CrossEntropyLoss.</p>
<pre><code>import torch.nn as nn

class CROSS_ENTROPY:
    def __init__(self):
        self.CELoss = nn.CrossEntropyLoss(reduction='none')

    def __call__(self, pred, target):
        s_pre, e_pre = pred.split(1, dim=-1)
        s_tar, e_tar = target.split(1, dim=-1)
        s_loss = self.CELoss(s_pre.squeeze(-1), s_tar.squeeze(-1))
        e_loss = self.CELoss(e_pre.squeeze(-1), e_tar.squeeze(-1))
        loss = (s_loss + e_loss) / 2
        return loss
</code></pre>
<h2>Training Strategy</h2>
<p>We suffered from learning instability. There are two main ideas to get a stable result.</p>
<h3>SentimentSampler</h3>
<p>Most of the public kernels use stratified KFold by sentiment, but we decided that wasn't enough. As <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/138520">this post</a> shows, there is a huge difference among the sentiments.</p>
<p>So, we adopted SentimentSampler to equalize the imbalance within the batch.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1497263%2F4a7afe89246c305a5d8e1f648ead65ef%2F2020-06-17%2010.32.33.png?generation=1592357573339723&alt=media" alt=""></p>
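<p>A hedged sketch of a sentiment-balanced sampler (not the exact SentimentSampler implementation; it simply weights examples inversely to their sentiment frequency):</p>
<pre><code>import torch
from torch.utils.data import WeightedRandomSampler

def make_sentiment_sampler(sentiments):
    # sentiments: list of 'positive' / 'negative' / 'neutral', one per training example
    counts = {s: sentiments.count(s) for s in set(sentiments)}
    weights = torch.tensor([1.0 / counts[s] for s in sentiments], dtype=torch.double)
    return WeightedRandomSampler(weights, num_samples=len(sentiments), replacement=True)
</code></pre>
<p>The sampler can then be passed to the DataLoader via its <code>sampler</code> argument.</p>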
<h3>SWA</h3>
<p>We found the validation score unstable while training (it varies by ~±0.001 with just 1 iteration!).</p>
<p>So, we adopted <a href="https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/">SWA</a> to stabilize the result.</p>
<p>This makes it possible to get stable results while monitoring every 10~50 iterations, so we could get results that are not overfitted to the validation set (it is also validation-time efficient).</p>
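<p>A minimal sketch of the SWA idea with <code>torch.optim.swa_utils</code> (a toy model stands in for the RoBERTa QA model; when and how often to update the average are assumptions):</p>
<pre><code>import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel

model = nn.Linear(10, 2)                      # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
swa_model = AveragedModel(model)

for step in range(100):
    x, y = torch.randn(8, 10), torch.randn(8, 2)
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 10 == 0:                        # periodically fold weights into the average
        swa_model.update_parameters(model)
# swa_model now holds the averaged weights used for evaluation
</code></pre>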
<h2>Ensemble</h2>
<p>Using two different seeds (seed averaging)</p>
<p>RoBERTa Base 11th layer + RoBERTa Large 23th layer + RoBERTa Base MDO + RoBERTa Large MDO</p>
<p>4 models * 2 seeds = Total 8 models </p>
<h1>Reranking-model training (public +0.002, private +0.003~5)</h1>
<h2>Approach</h2>
<p>We believe this part is unique and the most different from other teams. This approach is based on the following idea: "Create multiple candidates and choose the best one."</p>
<p>Step 1: Calculate the top-n scores based on the start & end values (after applying softmax) from the base model.</p>
<p>Step 2: Create candidates based on the Step 1 score. Candidates include selected_text & Jaccard_score & Step1_score.</p>
<p>Step 3: Train a RoBERTa Base model. (The target is the candidate's jaccard_score.)</p>
<p>Step 4: Sort candidates by Step 3 predicted value + Step 1 score * 0.5, and choose the best one as the final answer.</p>
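<p>A rough sketch of the candidate generation in Steps 1-2 (assuming <code>start_probs</code> / <code>end_probs</code> are the softmaxed outputs of the base model for one tweet):</p>
<pre><code>def top_n_candidates(start_probs, end_probs, n=5):
    # Score every valid (start, end) pair by the product of its probabilities
    # and keep the n highest-scoring spans as reranking candidates.
    scored = []
    for s in range(len(start_probs)):
        for e in range(s, len(end_probs)):
            scored.append((start_probs[s] * end_probs[e], s, e))
    scored.sort(reverse=True)
    return scored[:n]
</code></pre>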
<h2>SequenceBucketing</h2>
<p>We had built over 50 models up to this point (base models & reranking models). In order to finish inference within the limited time, we decided to use SequenceBucketing, even though some models had not been trained with it.</p>
<p>In this case, each batch contains the same text and slightly different candidates. </p>
<p>Therefore, inference time sped up 2x and we surprisingly got a better result than without it. We need to find out why...</p>
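<p>For reference, a minimal sketch of the sequence-bucketing idea (sort by length so each batch is padded only to the longest sequence it actually contains; the pad id is an assumption):</p>
<pre><code>def bucket_batches(token_id_lists, batch_size=32, pad_id=1):
    order = sorted(range(len(token_id_lists)), key=lambda i: len(token_id_lists[i]))
    for i in range(0, len(order), batch_size):
        idx = order[i:i + batch_size]
        max_len = max(len(token_id_lists[j]) for j in idx)
        yield [token_id_lists[j] + [pad_id] * (max_len - len(token_id_lists[j]))
               for j in idx]
</code></pre>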
<h1>Postprocessing (public +0.01, private +0.012)</h1>
<p>Just like the other teams that found the magic, we focused on the extra space.</p>
<pre><code>import re

def pp(filtered_output, real_tweet):
    filtered_output = ' '.join(filtered_output.split())
    if len(real_tweet.split()) < 2:
        filtered_output = real_tweet
    else:
        if len(filtered_output.split()) == 1:
            if filtered_output.endswith(".."):
                if real_tweet.startswith(" "):
                    st = real_tweet.find(filtered_output)
                    fl = real_tweet.find(" ")
                    if fl != -1 and fl < st:
                        filtered_output = re.sub(r'(.)\1{2,}', '', filtered_output)
                    else:
                        filtered_output = re.sub(r'(.)\1{2,}', '.', filtered_output)
                else:
                    st = real_tweet.find(filtered_output)
                    fl = real_tweet.find(" ")
                    if fl != -1 and fl < st:
                        filtered_output = re.sub(r'(.)\1{2,}', '.', filtered_output)
                    else:
                        filtered_output = re.sub(r'(.)\1{2,}', '..', filtered_output)
                return filtered_output
            if filtered_output.endswith('!!'):
                if real_tweet.startswith(" "):
                    st = real_tweet.find(filtered_output)
                    fl = real_tweet.find(" ")
                    if fl != -1 and fl < st:
                        filtered_output = re.sub(r'(!)\1{2,}', '', filtered_output)
                    else:
                        filtered_output = re.sub(r'(!)\1{2,}', '!', filtered_output)
                else:
                    st = real_tweet.find(filtered_output)
                    fl = real_tweet.find(" ")
                    if fl != -1 and fl < st:
                        filtered_output = re.sub(r'(!)\1{2,}', '!', filtered_output)
                    else:
                        filtered_output = re.sub(r'(!)\1{2,}', '!!', filtered_output)
                return filtered_output

    if real_tweet.startswith(" "):
        filtered_output = filtered_output.strip()
        text_annotetor = ' '.join(real_tweet.split())
        start = text_annotetor.find(filtered_output)
        end = start + len(filtered_output)
        start -= 0
        end += 2
        flag = real_tweet.find(" ")
        if flag < start:
            filtered_output = real_tweet[start:end]
    if " " in real_tweet and not real_tweet.startswith(" "):
        filtered_output = filtered_output.strip()
        text_annotetor = re.sub(" {2,}", " ", real_tweet)
        start = text_annotetor.find(filtered_output)
        end = start + len(filtered_output)
        start -= 0
        end += 2
        flag = real_tweet.find(" ")
        if flag < start:
            filtered_output = real_tweet[start:end]
    return filtered_output
</code></pre>
<h3>This post is a brief version of our solution.</h3>
<h3>Please looking forward to the detail explanation from my teammates :)</h3>
<p>Reranking-model <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159315">https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159315</a>
Postprocessing <a href="https://www.kaggle.com/futureboykid/2nd-place-post-processing">https://www.kaggle.com/futureboykid/2nd-place-post-processing</a></p>
<p>OK. Our team found that the records that start with " " originally started with "@".</p>
<p>You can check it, using <a href="https://www.kaggle.com/maxjon/complete-tweet-sentiment-extraction-data">this dataset</a>.</p>
<p>This is the difference between them, but I can't explain why this method works well...</p>
<p>Hi, thanks for asking. That is <em>not</em> a typo.</p>
<p>The only difference between them is postprocessing.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1497263%2Fbdbc507a7bbf2f70c79abd35cc5a4d32%2F2020-06-17%2014.08.43.png?generation=1592370636513626&alt=media" alt=""></p>
<p>We've published our final submission notebook. Please check it together :)
<a href="https://www.kaggle.com/hiromoon166/inference-8models-seed100101-bucketing-2-ver2">https://www.kaggle.com/hiromoon166/inference-8models-seed100101-bucketing-2-ver2</a></p>
<p>You can check our presentation at Kaggle Days Meetup Dallas.<br>
<a href="https://docs.google.com/presentation/d/e/2PACX-1vRegL1fDnQ5LDGNoZCOvmMtSVfdzkOWUJdf-Ry2XH01YPbesWNT4lPfTfZLTUf4BkbNeJD_NERwDibM/pub?start=false&loop=false&delayms=3000&slide=id.g8a8ae9ca50_0_50" target="_blank">Slide</a><br>
<a href="https://www.youtube.com/watch?v=e95JcK7N_NM" target="_blank">Video</a></p> |
Tweet Sentiment Extraction | 42nd place submission | jumped 880 places = lucky seed? | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My public LB score was 0.712 and, when the private leaderboard was released, I jumped 880 spots with a private LB score of 0.719. This was surprising indeed! Because my work relied heavily on <a href="/cdeotte">@cdeotte</a>'s, I kept his original seed of 777. </p>
<p>My submission was very similar to the BERTweet public notebook I shared a few weeks ago ( <a href="https://www.kaggle.com/davelo12/bertweet">https://www.kaggle.com/davelo12/bertweet</a> ). This submission scored a 0.718 private LB (silver position) and 0.709 CV.</p>
<p>The only change I made to this workbook was to use predictions from RoBERTa instead of BERTweet when there was an unknown token (i.e., the word was missing from the BERTweet vocabulary). This 42nd place submission scored a 0.719 private LB, 0.712 public LB, and 0.709 CV.</p>
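<p>The fallback rule described above could look roughly like this (an illustrative sketch, not the exact code):</p>
<pre><code>def choose_prediction(bertweet_tokens, bertweet_pred, roberta_pred, unk_token="<unk>"):
    # If BERTweet produced an unknown token for this tweet, trust RoBERTa instead.
    return roberta_pred if unk_token in bertweet_tokens else bertweet_pred
</code></pre>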
<p>I elected to use BERTweet over RoBERTa, because I found BERTweet to be more stable (i.e., lower standard deviation across folds). I found that both BERTweet and RoBERTa performed similarly in terms of CV and public LB.</p>
<p>Special thanks to <a href="/cdeotte">@cdeotte</a> for his public notebook. I learned a great deal from this. Congrats to all of the winners! </p> |