Severstal: Steel Defect Detection
Lessons Learned - The Good and the Bad!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners. Thank you Kaggle and Severstal for a fun competition. </p> <h1>Lessons Learned</h1> <p>I learned two important lessons this competition. One was frustrating, one was exciting. * Always submit one final submission that does not overfit public LB * You can train segmentation on crops and predict on full images</p> <h1>Don't Overfit Public LB - (The Bad Lesson)</h1> <p>Like many others, I believed that the private test images would be more similar to the public test images than similar to the training images. Therefore I knowingly used bad practices to increase my public LB. In hindsight, one should always make at least one final submission that is not overfit to the public LB (even if you have reason to overfit public LB). One of my models scores public LB 0.913 and private LB 0.905. The other scores public LB 0.915 and private LB 0.900. I chose two versions of the second one :-(</p> <h1>Train on Crop, Predict on Full - (The Good Lesson)</h1> <p>This was my first time making segmentation models. Thank you everyone that shared their knowledge. The most exciting thing that I learned is that you can train on crops and predict on full. </p> <h2>Train on Crop</h2> <p>My model was trained on 256x256 crops where 20% of crops have no defects, 20% of crops have defect one, 20% of crops have defect two, 20% of crops have defect three, and 20% of crops have defect four. My custom dataloader outputs an equal ratio of defect crops (by oversampling from training images): <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F5b7b45cb70b9b478e4c8981c82e45d1d%2Ftrain102519.png?generation=1572523450102792&amp;alt=media" alt=""></p> <p>Then I built the following Keras model:</p> <pre><code>from segmentation_models import FPN from segmentation_models.losses import bce_jaccard_loss model = FPN('inceptionv3', input_shape=(256, 256, 3), classes=4, activation='sigmoid') model.compile(optimizer='adam', loss=bce_jaccard_loss, metrics=[dice_coef]) </code></pre> <p>This model achieved validation of <code>Dice = 0.73</code> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F6f3a4f20292dbfae351c63d8325d9b78%2Ftrain2102519.png?generation=1572523497768713&amp;alt=media" alt=""></p> <p>Then you can use the trained model to predict crops <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fcdfcc3dcca7ac76331db167c46f2bf89%2Fcrops102519.png?generation=1572523540768947&amp;alt=media" alt=""></p> <h1>The Magic - Predict on Full</h1> <p>Now here's the magic. You can transfer these learned weights to a model that predicts on full images as follows: </p> <pre><code>model2 = FPN('inceptionv3', input_shape=(256, 1600, 3), classes=4, activation='sigmoid') model2.compile(optimizer='adam', loss=bce_jaccard_loss, metrics=[dice_coef]) model2.set_weights(model.get_weights()) </code></pre> <p>Now you can predict on full images. 
And your trained model has seen lots of defect 1, 2, 3, and 4 types so it has no problem segmenting all types: </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F2394a213f911f70ce695a21ff0fb04e7%2Ffull102519.png?generation=1572523584795279&amp;alt=media" alt=""></p> <h1>Examples</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F616dc87c1868b955e76d92b21ab92dbe%2Fd1102519.png?generation=1572523664901311&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fee390d0b1bf3bacf0121c497a065dbdf%2Fd2102519.png?generation=1572523707429032&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F9755ee4c3529fb05b042ac7d4071bfd9%2Fd4102519.png?generation=1572523751935264&amp;alt=media" alt=""></p> <h1>Pre-Classification Stage</h1> <p>To achieve public LB 0.913 and private LB 0.905 with this model, I added a standard pre-classification stage before segmentation to remove false positives. And I added an innovative post-classifcation described below.</p> <h1>Post-Classification Stage</h1> <p>Here's a cool trick. I trained a classifier on 256x256 crops where the crops where just random crops from images. (The proportions were not altered to 20%, 20%, 20%, 20%, 20%). Then after predicting a mask on a test image, I would "scan" each image by moving the crop window from left to right and compute a moving average of classification prediction. In any area where probability is less than 0.6, I would remove the mask within that 256x256!</p> <h1>Only Predict Private Test Images</h1> <p>To maximize the one hour inference time (and ensemble more models), you can skip predicting the public test images and only predict the private test images. You will notice that the LB score below doesn't contain a public LB score, it only contains a private LB score. Here is the code to get a list of private test images:</p> <pre><code>all = pd.read_csv('../input/severstal-steel-defect-detection/sample_submission.csv') all = all['ImageId_ClassId'].map(lambda x: x.split('.')[0]).values[::4] public = pd.read_csv('../input/dataset/submission.csv') public = public['ImageId_ClassId'].map(lambda x: x.split('.')[0]).values[::4] private = np.setdiff1d(all,public) </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F427300bf00ec486c535c4e262548ab38%2Fprivate2102419.png?generation=1572523789973493&amp;alt=media" alt=""></p> <h1>Model Details</h1> <p>Pre-Classification - 3 models * xception 256x256 crop (then scan image) * xception 256x1600 * efficientNetb4 256x400 crop (then scan image)</p> <p>Segmentation - 3 models * Unet resnet34 256x256 crop * FPN inceptionv3 256x256 crop * FPN resnet34 224x384 crops</p> <p>Post-Classification - 2 models * xception 256x256 crop (then scan image) * efficientNetb4 256x400 crop (then scan image)</p>
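<p>The post-classification "scan" described above can be prototyped in a few lines. The snippet below is only a minimal sketch (not necessarily the exact code used here): it assumes a trained Keras crop classifier <code>crop_model</code> that outputs one probability per defect class for a 256x256 crop, a full image of shape (256, 1600, 3), a predicted mask of shape (256, 1600, 4), and the 0.6 threshold mentioned above; the stride is an arbitrary choice.</p> <pre><code>import numpy as np

def scan_and_clean(image, mask, crop_model, crop_w=256, stride=128, thresh=0.6):
    # Slide a crop-wide window from left to right and build a per-column
    # moving average of the classifier's defect probabilities.
    w, n_cls = mask.shape[1], mask.shape[2]
    prob_sum = np.zeros((w, n_cls))
    prob_cnt = np.zeros((w, 1))
    starts = list(range(0, w - crop_w + 1, stride))
    if starts[-1] != w - crop_w:
        starts.append(w - crop_w)              # make sure the right edge is covered
    for x in starts:
        crop = image[:, x:x + crop_w, :][np.newaxis, ...]
        p = crop_model.predict(crop)[0]        # one probability per defect class
        prob_sum[x:x + crop_w] += p
        prob_cnt[x:x + crop_w] += 1
    prob = prob_sum / prob_cnt                 # moving average per image column
    for c in range(n_cls):
        mask[:, prob[:, c] < thresh, c] = 0    # remove the mask where confidence is low
    return mask
</code></pre>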
Severstal: Steel Defect Detection
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Overall approach</strong> 1) Separate models for different defects (since the number of classes is small - it is feasible) * + can use completely different architectures/pipelines/samplers etc * + can combine trained models for different defects instead of retraining a single model * + encourages to decompose the problem (4-defect detection) into smaller subproblems (4 single-defect detections) – it allows to focus on perfecting them separately, without the fear of spoiling results of a different subproblem * + easier to reuse code for binary segmentation * + no need to balance defect class losses * - wastes valuable submission time → smaller size/number of models can be ensembled * - more training time * - no potential for synergy between detectors of different defect types</p> <p>Doing this was a “greedy solution” - I already had experience with binary segmentation, and I’m a bit sad that I missed out on a chance to learn more new stuff within the scope of this competition. On the other hand, it maximized my chances of winning, so if I had to make a decision again – I would do the same thing. Besides, I can still try to match/beat the benchmark of my current approach with a single model for multiple defects after the end of the competition, even though there would be less motivation for doing that :)</p> <p>2) Not predicting defect2 * + less models → less training / research / inference time * + trying to predict defect2 was always decreasing the pubLB for me, so not predicting it → higher score * - if private test set contains more instances of images with defect2 – score would degrade</p> <p>Initially I was going to predict only defect3 and defect4, but at some point I hit a wall in trying to improve my score, and tried predicting defect1 and defect2. Surprisingly for me, defect1 allowed a noticeable gain in score, even though it was less than a half from theoretically possible one, so I decided to keep it. Defect2, on the other hand, was even more rare, and I have been unable to get any gain from predicting it. Anyway, to protect myself against a scenario where it is more frequent in private test set, I have included a model for defect2, tuned to produce predictions only when it is very confident, and reserved a second selected submit for it. After the competition deadline it turned out that private test set didn’t seem to contain more of defect2, but I think that it was still a reasonable use of a second submit selection.</p> <p>3) Training on full size images * + model has more context → easier to make a correct decision (avoid a false positive/negative) * + no need for an extra fine-tuning stage on full images / less domain mismatch if no fine-tuning is done * - more time and GPU RAM is required → less experiments, smaller batch size * - less “unique” samples seen by model → more potential overfitting</p> <p>I’ve tried training on crops as well as on full sized images, and in all my experiments training on crops was inferior. 
I’ve seen that some solutions mention that when they were cropping images with defects they constrained the crops to contain defect – it might be the key to proper training on crops, but I haven’t tried it.</p> <p>4) 3-fold validation * + larger size of validation set → more stable metric estimates * +“less similar” models → ensembling is more beneficial * +- less models produced – this is ok, since I am anyway constrained in how many models I can fit into prediction time quota * - smaller size of training set → potentially weaker individual models <br> * - longer validation time</p> <p>This might have been important for avoiding negative consequences of shake-up in this competition. Keeping in mind that my validation set is 2x larger than public test set, I was highly reluctant to base any decisions about my solution on the impact on public leaderboard score, and was “trusting“ only changes that improve both CV and pubLB. At some point I started to worry that I’m overfitting to pubLB anyway, since adding more models to my ensemble decreased pubLB, indicating that I got lucky with my submit, and small variations of predictions caused a drop in pubLB. I decided to add RandAugment-type (more details in Augmentations section) to the validation set, hoping that this way I would select models/hyperparams that are more robust under different conditions, besides it would allow artificially increasing the size of validation set.</p> <p><strong>Encoder</strong> SE ResNeXt-50 / EfficientNet-b3, pretrained on ImageNet. First model still seems like a good tradeoff between size/speed and accuracy, and efficientnet is relatively new and I wanted to try using it for segmentation tasks. Ironically, its implementation in PyTorch is not very efficient (at least, compared to the one in TensorFlow, see <a href="https://github.com/lukemelas/EfficientNet-PyTorch/issues/19">this github issue</a>). This limited usage of “upscaled“ versions of it, and I settled on b3, which was close to seresnext50, although the batch size that I was able to fit into GPU RAM was smaller (3 vs 4) and it potentially decreased batchnorm performance.</p> <p><strong>Model</strong> Experimented with Unet, FPN and PSPNet from great <a href="https://github.com/qubvel/segmentation_models.pytorch">Segmentation models</a> by <a href="/pavel92">@pavel92</a> . Unet and PSPNet performed noticeably worse with default settings and were discarded from the experiments at the early stages of competitions.</p> <p>I compared FPN with several variations that I considered to be reasonable, listed below. FPNA: vanilla FPN. FPNB: take not the 4 uppermost layers of encoder, but 4 lowermost. Intuition was that if a defect is a local artifact, then using stronger semantic information is unnecessary, and using higher-resolution layers instead would allow to better locate small defects. In my experiments it outperformed FPNA at 128x800 resolution, but underperformed at full resolution – perhaps receptive field was insufficient. FPNC: take all (5) levels of encoder, this might help to handle different scale levels properly. Unfortunately the increase in GPU RAM usage and training time didn’t translate to improvement in performance in my tests. FPND: Instead of summing outputs of encoder layers, concatenate them (reducing the number of channels in decoder layers’ output to keep final number of channels the same) - this is similar to hypercolumns, which worked well in TGS competition. 
This version seemed to be comparable with FPNA, and outperformed it in defect4 detection, perhaps due to the fact that this defect had significantly larger scale variation compared to the other 3.</p> <p>I experimented a bit with deep supervision (with an aux output for each defect class presence), and even though I initially got some improvement using this scheme, I later managed to match this result without DS, and decided to drop it to avoid additional complexity.</p> <p>I have also tried to train classifiers for rejecting empty images before segmentation to avoid false positives, but haven’t been able to get more accuracy than I already had with segmentation models. In the end the only classifier that I used was for defect1 (taken from Heng CherKeng’s starter guide) – it seemed to have better precision than my defect1 models.</p> <p><strong>Augmentations</strong> Did a “grid search” for optimal augmentations, training the same small model with a single augmentation for a fixed number of epochs and validating without augmentations. Surprisingly, “base augmentation” (no augmentations) was always better, even compared to horizontal flip. This was not something I had encountered before, and my main guess was a lack of model capacity to handle the variation after introducing augmentations. The proper way of testing it would have been running the experiment again with a bigger model, but I didn’t have the time/resources to do it properly, so I decided to assume that horizontal flip doesn’t destroy important information and shouldn’t lead to overgeneralization, and that other “generic” augmentations should be treated carefully.</p> <p>I also tried a few “custom” augmentations: “SteelShift” (cyclically shift the image, handling the steel sheet border separately) to simulate capturing the steel sheet at a different time on a moving conveyor belt, and “BorderLoc” (encode distance to the steel sheet border in a separate channel; black areas and images with no black areas are handled separately). The motivation for adding 2 custom augmentations targeted at images with steel sheet borders was the fact that, even though there was correlation between CV and pubLB, CV was always better, and the only significant difference between the 2 sets was the fraction of images where the steel sheet border was visible.</p> <p>The impact of these 2 “custom” augmentations was somewhat hard to assess. I included some models with “SteelShift” in the final ensemble to increase diversity of predictions, and didn’t include “BorderLoc”, because it would require either applying it to all images (and discarding models trained without it) or rewriting the inference code to apply different augmentations to different models in the ensemble.</p> <p>I ended up with the following augmentation “modes”:</p> <p>a) only horizontal flip b) <a href="https://arxiv.org/abs/1909.13719">RandAugment</a> - up to 2 augmentations chosen randomly from the list below (<a href="https://github.com/albu/albumentations">albumentations</a> names): * HorizontalFlip * VerticalFlip * ISONoise * IAAAdditiveGaussianNoise * CoarseDropout * RandomBrightness * RandomGamma * IAASharpen * Blur * MotionBlur * RandomContrast</p> <p><strong>Loss</strong> 0.75 * BCE + 0.25 * LovaszHinge. Lovasz worked significantly better than Dice. Perhaps a good idea would have been a separate fine-tuning stage where only Lovasz was used, but I never got around to testing that. 
Also wanted to try replacing BCE with Focal, but didn’t have time and also was worried about proper coefficient balancing – focal is usually significantly smaller than CE, and magintude might change during training.</p> <p><strong>Post-processing</strong> - binarize averaged predictions using threshold, determined on validation set - zero masks with number of predicted pixels less than a threshold, determined on a validation set</p> <p>I have also thought about filling holes and removing small components in predictions, but after looking at predictions on the public test set decided that it will not change the score much, but might backfire in unexpected way on unseen data.</p> <p><strong>Pseudo-labeling</strong> After many teams started to score more than 0.920 on pubLB it became evident that at least some of them are using semi-supervised learning. It didn’t seem to be extremely beneficial, because size of public test set in this competition wasn’t very big compared to train set size, but it should at least help to reduce to domain mismatch between train and public test sets, so I decided to give it a go, after plateauing in score on pubLB. First I’ve tried to include all predicted test negative samples (this was easier, since I didn’t need the masks) into the train set (keeping folds unchanged and not including pseudo-labeled images into validation set, to be able to track improvement) and fine tune the models I had using this data. It didn’t help much, so I moved on to including all predicted test set images the same way into training set and training from scratch. This allowed to get a single model score equal to the ensemble score on pubLB, but ensembling such models didn’t provide any improvement – it made sense, since most likely the models were just “remembering” masks for all the samples from the public test set, and there was no variation between predictions on the public test set images. Finally, I decided to do it the “proper” way, and included only confident samples into the training set. I measured confidence similar to <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/69291#latest-592781">approach of winners of TGS competition</a> - count the number of pixels with confidence &lt; 0.2 or &gt; 0.8, and consider images that have more than N confident pixels to be reliable. I’ve noticed, that no matter how certain/good the predictions are, there is always an uncertain region along the predicted defect boundary (which makes sense, since the ground truth annotation was fairly arbitrary in terms of boundary). To avoid this uncertain area affecting the “prediction confidence” estimate, I’ve applied morphological operations to “grow” confident regions – this allowed to “close” uncertain areas along the boundaries, and still kept the “really” uncertain defect predictions. This was done differently for each defect type, since they had different areas and boundary uncertainties. This approach helped me to move past 0.917 on pubLB to 0.919, with most of this change attributed to improved predictions for defect3. I have also applied pseudo-labeling to defect1 and defect4, but been unable to get improvement for defect1 (makes sense, since my predictions for it were far from being perfect), and unclear results for defect4 (not really sure why, maybe due to more skewed defect/nondefect distribution of data).</p> <p><strong>Closing thoughts</strong> Thanks to organizers of this competition and all participants - it was a really fun and intense experience! 
This is a second competition where I have been using <a href="https://dvc.org/">DVC</a> and <a href="https://github.com/albu/albumentations">albumentation</a> libraries, and first for using <a href="https://github.com/catalyst-team/catalyst">Catalyst</a>, <a href="https://github.com/qubvel/segmentation_models.pytorch">Segmentation Models</a> and <a href="https://app.wandb.ai/">Weights &amp; Biases</a> - really enjoying these, if you haven't tried them yet - I strongly encourage you to. </p>
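<p>The confidence filter used for pseudo-labeling above can be sketched roughly as follows (the threshold values, dilation size and pixel count are illustrative placeholders, not the exact values used): pixels with predicted probability between 0.2 and 0.8 are treated as uncertain, and confident regions are grown with a morphological dilation so that the thin uncertain band along predicted defect boundaries does not disqualify an otherwise confident image.</p> <pre><code>import numpy as np
from scipy.ndimage import binary_dilation

def is_reliable(prob_mask, low=0.2, high=0.8, grow_iters=5, min_confident=350000):
    # Confident pixels are those the model is sure about in either direction.
    confident = (prob_mask < low) | (prob_mask > high)
    # Grow confident regions to "close" the uncertain band along defect boundaries.
    confident = binary_dilation(confident, iterations=grow_iters)
    # Keep the image as a pseudo-label only if enough confident pixels remain.
    return np.count_nonzero(confident) > min_confident
</code></pre>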
Predicting Molecular Properties
Newcomer paying attention - 26th place, suggestions?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, <strong>tons of thank you</strong> to the Kaggle community, the Kaggle team and competition hosts for the event. I am just starting out in the field (doing my bachelors now), so if I made any mistake here or you have any suggestions, please let me know. I took part in the competition to learn and become comfortable with Pytorch with a hands-on approach. I did lots of tensor manipulation. I did not have much expectation (it is intimidating to see so many data science experts, and I also joined the competition late). And then I got a medal... weird.</p> <p><strong>Model details</strong> Too much talk, the real deal is as follows: I trained <strong>one model responsible for calculating J-coupling</strong> (referred to as JC next) <strong>of all types</strong>. The model <strong>produced 8 outputs, one for each type, of which 7 were in a don't-care condition for any input</strong>. I directly optimized the evaluation metric, using it as the loss. Like many teams I used an <strong>attention based NN model</strong> (Fast.ai's multi-head-attention from the text module was a very important piece) to generate features and concatenated them with other engineered features like those from the 'distance is all you need' kernel. Wikipedia helped me generate a few more features. Overall, it is just a <strong>variant of BERT.</strong> Atoms of the molecule whose JC was to be found were fed to the model one by one as input steps, zero-padded to 30 input steps. I added a classification token (well, actually a series of numbers that represented the learned token) as the first input step (time step sounded weird and atom step weirder); I did not know what values would be suitable for it, so I trained a small NN based on simple features like the input type. Every other input step consisted of features concerned with a particular atom. Each input step had a binary feature which told the model whether that atom was one of the pair for which JC was to be computed. Features were collected from the attention network by concatenating the output of the first input step (call it the special input step, for lack of a better name; I think I need to be better with names), the maximum and average of its output along all input steps, and the outputs from the steps of the atoms whose JC was to be calculated (in total 5x the length of the encoded output of any input step). These, together with hand-engineered features (after going through a few residual layers), were passed through another residual layer and then to a feed-forward layer to generate the 8 outputs. The model was trained with the Adam optimizer using a cyclic learning rate.</p> <p><strong>Training details</strong> At first I trained models on the Kaggle kernel itself, and the best I was able to reach was a -2.015 LB score in those 9 hours, with a batch size of 1024. I trained the final model on GCP (thank you for those free credits) for around <strong>30 hours with a batch size of 2048 on a V100 GPU</strong>; gradients were noisy with smaller batches. I did not do oversampling, the loss function took care of it. But I did scale the embedding gradients based on their frequencies. I could not train it any more, credits were gone, and the competition was ending too. 
But 30 hours of training gave lb score of <strong>-2.607 from a single model.</strong> Later, I fine tuned the model for first three types(time was up) on Kaggle kernel by accordingly adjusting the weights for the respective type of JC. It gave me boost of around -0.015, not that it would have changed my final rank😁. </p> <p>I learned a lot, read research papers, blogs, watched Jeremy Howard's lecture videos. It was fun.</p> <p><strong>Again, thanks to everyone.</strong></p> <p>Also, if you have any suggestions, or any thing at all, leave a remark.</p> <p>PS: wish me luck for my upcoming mid-semester examinations, I will need it.</p>
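<p>The "8 outputs with 7 in a don't-care condition" idea, combined with directly optimizing the evaluation metric (mean over coupling types of the log of the MAE), can be sketched in PyTorch roughly as below. The tensor layout and names are assumptions for illustration, not the author's code.</p> <pre><code>import torch

def lmae_loss(preds, targets, type_onehot, eps=1e-9):
    # preds, type_onehot: (batch, 8); targets: (batch,)
    # Only the output matching each row's coupling type contributes; the other 7 are don't-care.
    abs_err = (preds - targets.unsqueeze(1)).abs() * type_onehot
    per_type_mae = abs_err.sum(dim=0) / type_onehot.sum(dim=0).clamp(min=1)
    present = type_onehot.sum(dim=0) > 0          # only types present in this batch
    # Mean over types of log(MAE), mirroring the evaluation metric.
    return torch.log(per_type_mae[present] + eps).mean()
</code></pre>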
Severstal: Steel Defect Detection
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone, and congratulations to all the participants!</p> <p>We all entered this competition a little less than a month ago, right after the end of the APTOS competition. We were solving it separately and merged only about a day before the merger deadline because of the emptying feeling of despair. It was the first competition where 'fit_predict' could not get into the bronze zone. Besides, heavy encoders were virtually as successful as a simple resnet34. First, here are the things that we tried that did not work at all:</p> <p><strong>I tried:</strong> - Adding hard negative mining by resampling the dataset every epoch. Weights were chosen as the inverse of the Dice score per image. Absolutely no difference; - Adding ArcFace to embeddings. It should help to distinguish classes better. One more nope; - Training a multi-stage network (inspired by pose detectors). It should help mimic bad markings. Performed worse than single-stage; - Adding label smoothing. Nope again.</p> <p><a href="/bloodaxe">@bloodaxe</a> <strong>tried:</strong> - Adding mixup and Poisson blending to increase the number of images with defects; - Training on double-sized crops: 256x1600 -&gt; crop(256x512) -&gt; resize(512x1024). Nope; - Adding the result of anisotropic segmentation to the input of the neural network, no gain; - Training HRNetV2 at full resolution. 22 hours with 8xV100 and no better than ResNet34.</p> <p><strong>Now, things that actually worked:</strong> - Training a two-headed NN for segmentation and classification. - Combining heads at inference time with soft gating (mask.sigmoid() * classifier.sigmoid()); see the sketch at the end of this post - Focal loss / BCE + Focal loss - Training with grayscale instead of gray-RGB - FP16 training using Catalyst and Apex</p> <p>I trained only single-fold models and <a href="/bloodaxe">@bloodaxe</a> trained 5-fold CV.</p> <p>Our individual solutions were no higher than the end of the silver zone of the public LB, but then we teamed up... and got a bit higher in the silver zone!</p> <p>Our best (and final) ensemble consisted of 9 models with densenet201, efficientnetb5, resnet34, seresnext50 encoders, some of them with FPN decoders, and some with UNet. We added 3-flip TTA and averaged the logits of the models, with soft gating applied. We binarized masks with a 0.55 threshold and zeroed out masks smaller than 256 pixels. Our total runtime for private+public is ~30 min.</p> <p><strong>Hardware</strong> We used servers from FastGPU.net with 8xV100 and 4xV100, which greatly reduced our experiment cycle length.</p> <p><strong>Some speculation about our rise on the private LB:</strong> - We did not overfit to the public LB; - We chose our models by the Dice score without empty masks. We relied on our classifiers and soft gating for empty masks. - We had different seeds/folds/models. Our private scores before the merge are not that great (0.89-0.90). But they are much stronger when combined.</p> <p><strong>Some moral from</strong> <a href="/bloodaxe">@bloodaxe</a> - Keep going. Even if it seems that everything is lost and there is no hope, you still can push a little bit forward and learn something useful along the way; - Team up! 
Exchanging ideas is really beneficial!</p>
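<p>The soft gating trick referenced in the list above can be written down in a few lines of PyTorch. This is only a sketch: it assumes a segmentation head returning logits of shape (B, 4, H, W) and a classification head returning logits of shape (B, 4); the 0.55 pixel threshold and 256-pixel minimum are the values quoted in the write-up.</p> <pre><code>import torch

@torch.no_grad()
def soft_gated_masks(seg_logits, cls_logits, pixel_thresh=0.55, min_pixels=256):
    # Gate per-pixel probabilities by the image-level class probability.
    prob = seg_logits.sigmoid() * cls_logits.sigmoid()[:, :, None, None]
    masks = (prob > pixel_thresh).float()
    # Zero out masks that are too small to be a real defect.
    small = masks.sum(dim=(2, 3)) < min_pixels
    masks[small] = 0
    return masks
</code></pre>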
IEEE-CIS Fraud Detection
Summary of my competition (43rd)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to congratulate @kyakovlev and @cdeotte . Even before the final results you were the stars of this competition with your amazing kernels and topics. A well deserved first prize, I'm really happy for you. Congratulations as well to all the other medalists, especially @fatihozturk for his solo gold medal and for becoming GM.</p> <p>Second, I want to thank all the people (including the hosts) who have contributed with their kernels, topics and comments to make this a brilliant competition. I learned so much and I feel so proud to be part of this great community that I want to apologize for not giving anything back. I was so absorbed, with not enough time to read about all the new methods, to create new features and to run the models, that I wasn't able to participate as I would have wanted. I'll try to do better in the next one. </p> <p>On a personal note, I feel very happy not only for the silver medal but because I learned a lot. In my previous (and first) <a href="https://www.kaggle.com/c/santander-value-prediction-challenge">competition</a> I also got silver, but it was because I took advantage of a data leak that was made public. I worked a lot trying to exploit that leak but I didn't improve my data science skills at all. I ended the competition very tired and, even after the medal, a bit demotivated, so it took me more than one year to come back here. With that experience in mind, my idea when I entered this competition was to not care about the score but use it as a way to learn and practise (for example Python). As I got to a good position on the leaderboard very early on I didn't stick to the plan (I kept using R), but I think that I got to know very valuable methods (especially around CV). </p> <h3>Finding the users</h3> <p>Since I read this commentary from the organizers:</p> <blockquote> <p>The logic of our labeling is define reported chargeback on the card as fraud transaction (isFraud=1) and transactions posterior to it with either user account, email address or billing address directly linked to these attributes as fraud too.</p> </blockquote> <p>I knew that the key to a good score was to be able to single out unique users. I started to create a script to identify users using the card variables and TransactionAmt, address, email, D1, D3, V279, V280, V306, V307 and V308. For the rest of the competition I kept improving the script to get as many users as possible without getting false positives (countless users for the same card and address combination and the month gap between train and test sets made it difficult). You can see it <a href="https://www.kaggle.com/artmatician/script-for-finding-unique-card-transactions">here</a>. It includes a part that imputes D1 NA values using the same logic as for finding users. </p> <p>It's not a very well coded script, not optimized and very slow (it takes almost 3 days to get all the users), as it checks each transaction looking for a possible next one. But I think it does a good job identifying same-card transactions. 
Using my <a href="https://www.kaggle.com/artmatician/users-ids">list of userIds</a> to postprocess (average of predictions for the same ID) the winners' pre-postprocessing solution (<a href="https://www.kaggle.com/kyakovlev/ieee-submissions-and-uids">lgbm_meta_model.csv</a>) gets a 0.0004 boost over their postprocessed solution. It doesn't find users with different cards as others did (<a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111320#latest-642831">Topic1</a>, <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111306#latest-642921">Topic2</a>), though.</p> <h3>"Magic features"</h3> <p>Using a public R kernel and the list of users to postprocess it, I was able to get to a good position, even fifth after a script upgrade. Then I started to create my own models but I wasn't able to improve much and a lot of people were getting ahead, so I became obsessed with trying to find the "magic features". Only at the very end of the competition did I realize that I already had them and only needed to use the userID to create aggregated features. I don't know why I didn't do it earlier, as I had always figured that as a possible boost. When I think about it, and how this stole time I could have spent creating more features and more diverse models, doing blending and getting to a better position, I feel very stupid, so I prefer to concentrate on the positive: the experience, the medal and, for example, having chosen the final submissions well (not the ones with better public LB but worse private LB).</p> <h3>Cross validation</h3> <p>I tried different approaches during the competition but in the end I used a <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/108616#latest-634925">downsampled</a> last-20% holdout for trying new features and a month GroupKfold to validate and create submissions. I gave great importance to the results of the first fold (December).</p> <h3>Blending</h3> <p>I kept improving my single model till the very end so I had no time (and experience) to create new models and do proper blending. I only used <a href="https://medium.com/data-design/reaching-the-depths-of-power-geometric-ensembling-when-targeting-the-auc-metric-2f356ea3250e">power blending</a> with some very similar models (a change of hyperparameters or one or two different features). Choosing 2 or 4 to be the power I got a better public LB and OOF AUC but not a better score for the December fold.</p> <h3>Postprocessing:</h3> <p>I did vertical blending by userId using the max of predictions or averaging this value with the individual prediction. Both increased the score, but the former got better results for the December fold and the latter for the OOF AUC and the public LB. I also replaced the submitted prediction with 1 if the last user transaction was fraud in the train dataset. I tried to do the same for not-fraud but I got worse results. </p> <h3>Final submission</h3> <p>A single LGB model with 140 features (21 of them aggregations by userid). AUC = 0.956246 / 0.930972 </p> <p>With postprocessing: Using max of predictions by userid AUC = 0.957749 / 0.935836. Replacing with 1s for train fraud users AUC = 0.958303 / 0.936062.</p> <p>I chose it because it was the one with the better AUC for the first fold.</p>
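<p>The per-user postprocessing described above can be sketched in pandas roughly as below (column names and input frames are assumptions made for illustration): predictions are blended with the user-level maximum, and users whose last train transaction was fraud are forced to 1.</p> <pre><code>import pandas as pd

def postprocess(sub, user_ids, fraud_users):
    # sub: TransactionID / isFraud submission; user_ids: TransactionID -> userId mapping;
    # fraud_users: set of userIds whose last train transaction was fraud.
    df = sub.merge(user_ids, on='TransactionID', how='left')
    user_max = df.groupby('userId')['isFraud'].transform('max')
    # "vertical blending": average the user-level max with the individual prediction
    df['isFraud'] = (user_max + df['isFraud']) / 2
    # known fraudulent users from the train set get probability 1
    df.loc[df['userId'].isin(fraud_users), 'isFraud'] = 1.0
    return df[['TransactionID', 'isFraud']]
</code></pre>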
IEEE-CIS Fraud Detection
Survived the shakeup and earned a silver medal!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am pretty sure this post means little to most of you when there are top-of-the-top solutions being published out there as I am writing these words. For me though, this comp really meant a lot!! First of all, I survived the shakeup: around me the vast majority of people have gained hundreds of positions, which means that the ones who were there on the public LB are now much, much worse off. I only lost 30 positions. Secondly, I earned my first solo Silver medal and I am super excited about it! I spent so many days and nights studying the analysis of others (and doing my own stuff) and all that paid off! </p> <p>Thank you all for your wonderful, crazy and inspired EDAs and models. Congrats to the top performers and good luck to all next time! </p> <p>As for the model? I will not get into much detail since there will be more interesting solutions out there, but I'll say this: My first submission was at 0.9534 and was a blend of my own LightGBM and XGboost solutions blended with a highly ranking public kernel at ratios 0.25, 0.25, 0.5 respectively. My second solution (at 0.9417) was the same LightGBM (that scored around 0.949), another version of the XGboost model (that scored around 0.947) and again the same public kernel, in ratios 0.4, 0.3, 0.2. </p> <p>At this point I should say that I don't know which solution actually brought me to this Private LB position - if anyone knows how I can see that, please let me know! Thanks!</p>
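<p>A weighted blend like the ones described is only a few lines of pandas (file names below are placeholders, not the actual submissions):</p> <pre><code>import pandas as pd

lgb = pd.read_csv('lgbm_submission.csv')
xgb = pd.read_csv('xgb_submission.csv')
pub = pd.read_csv('public_kernel_submission.csv')

# Weighted average of the three prediction columns with the 0.25 / 0.25 / 0.5 ratios.
blend = lgb.copy()
blend['isFraud'] = 0.25 * lgb['isFraud'] + 0.25 * xgb['isFraud'] + 0.5 * pub['isFraud']
blend.to_csv('blend_submission.csv', index=False)
</code></pre>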
Severstal: Steel Defect Detection
30th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Wow, I didn't expect such a big shake! These last 2~3 weeks I almost gave up on this competition because my public LB score was so poor, but it seems it was better not to throw it away, and I'm so lucky!</p> <hr> <p>My solution is not so special, but I want to share it in case it's useful for beginners or someone who wants to start a segmentation competition.</p> <ul> <li>Apply 5-class classification including a background class, and then 4-class segmentation.</li> <li>For the classification task, I trained resnet50, efficientnet-b3 and se-resnext50.</li> <li>For the segmentation task, Unet with resnet18, PSPNet with resnet18 and FPN with resnet50.</li> </ul> <p>What worked: - random crop of 256x800 or full 256x1600 - 5-class classification (including background) - 4-class segmentation - BCE + Dice loss (a sketch follows at the end of this post)</p> <p>What didn't work: - random crop of 256x400 - pseudo labeling - softmax and cross entropy loss - bigger models (encoders) were just a waste of inference time - mixup - cutmix - thresholds higher than 0.5</p> <hr> <p>My code for the competition-specific part was mainly based on <a href="https://www.kaggle.com/rishabhiitbhu/unet-starter-kernel-pytorch-lb-0-88">this great starter kernel</a> by <a href="/rishabhiitbhu">@rishabhiitbhu</a>. And I also borrowed many, many ideas from <a href="/hengck23">@hengck23</a>'s discussion.</p> <p>Thank you to all competitors and congrats to all who finished in the gold zone!</p> <hr> <p>Here is my code:) <a href="https://github.com/bamps53/kaggle-severstal.git">https://github.com/bamps53/kaggle-severstal.git</a></p>
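<p>Here is a minimal PyTorch sketch of a combined BCE + Dice loss of the kind listed under "what worked" (the 0.5/0.5 weighting and the smoothing term are assumptions, not values from the write-up):</p> <pre><code>import torch
import torch.nn.functional as F

def bce_dice_loss(logits, targets, smooth=1.0, bce_weight=0.5):
    # logits, targets: (B, C, H, W), targets as float 0/1 masks
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + targets.sum(dim=(2, 3))
    dice = (2 * intersection + smooth) / (union + smooth)
    # Combine the pixel-wise BCE with (1 - mean Dice score).
    return bce_weight * bce + (1 - bce_weight) * (1 - dice.mean())
</code></pre>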
Severstal: Steel Defect Detection
12th place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Team: <a href="/delpro">@delpro</a> <a href="/pavel92">@pavel92</a> <a href="/ilyadobrynin">@ilyadobrynin</a> We finally joined on the last day of the merger deadline and started working on a joint solution. At that time we had the quite popular approach with classification as the first step (to remove most of the non-defective images) and segmentation as the second step for defect localization. But in the end we came up with a 3-step solution. Our multilabel FPN and PSPNet segmentation models performed not so well in pixel-level segmentation, but worked great at reducing the number of false positive predictions after the first classification step, so we decided to use them as an intermediate classification step. </p> <p>Below you will find a more detailed explanation of each step.</p> <h3>Stage 1. Multilabel classification</h3> <p>Implemented by <a href="/ilyadobrynin">@ilyadobrynin</a> On Stage 1 there was a multilabel classifier to detect images with at least one type of defect.</p> Model: <ul> <li>Best 3 of 5 folds of Senet154 trained on resized images (128x800) with simple BCE loss</li> </ul> Augmentations: <ul> <li>Normalization</li> <li>Resize</li> <li>h-flip, v-flip</li> <li>no TTA</li> </ul> <p>After training we found thresholds for binarization by maximizing the f1-score for the given class, so each class has its own threshold. This allowed us to exclude almost half of the images and speed up inference.</p> <h3>Stage 2. Multilabel segmentation</h3> <p>Implemented by <a href="/delpro">@delpro</a> <a href="/ilyadobrynin">@ilyadobrynin</a> On Stage 2 there was some overfit magic. There we have a mean ensemble of multilabel segmentation nets: - 4 folds of PSPNet on se_resnext101_32x4d (awesome Qubvel implementation), trained on full images with BCE + Jaccard loss; - Custom FPN on senet154 trained on crops (256x256) with simple BCE loss - Custom PSPNet on senet154 trained on crops (256x256) with simple BCE loss</p> Training <ul> <li>Pretrain on crops (256x256), batch size 32, BCE + Jaccard</li> <li>Fine-tune on full size with 0.1 * lr, batch size 4 with gradient accumulation up to 20 images</li> </ul> Augmentations: <ul> <li>Normalization</li> <li>CropNonEmptyMaskIfExists: crop the image around a defect area if one exists, else make a random crop (we contributed this transformation to <a href="https://github.com/albu/albumentations/pull/342">Albumentations</a> during the competition)</li> <li>h-flip, v-flip</li> <li>only h-flip on TTA</li> </ul> Details: <p>There was a trick in the custom FPN and PSPNet training. First of all, we trained a multilabel Senet154 classifier on the crops. After that we used this classifier as a backbone for the segmentation models. It sped up training significantly with the same quality.</p> Post processing: <p>First of all, we removed small objects and holes from the image. Then we assume that there are no objects if the sum of positive pixels is less than a minimum threshold (unique for each class). It boosted our score on the public leaderboard, but could cause overfit. At this stage we did no threshold optimization for the ensemble of models, since the default thresholds gave us a better result. 
The second stage allowed us to remove many of the false positive images.</p> <h3>Stage 3. Binary segmentation</h3> <p>Implemented by <a href="/pavel92">@pavel92</a> On Stage 3 we used binary segmentation models, trained on non-empty masks for each class: 2 Unet(seresnext50), 2 FPN(seresnext50), 1 Unet(SeNet154)</p> Training: <ul> <li>loss: SoftDice + BinaryFocal(gamma=2.)</li> <li>optimizer: RAdam</li> <li>scheduler: reduce on plateau</li> <li>sampling: non-empty</li> <li>crop: 256x768</li> </ul> Augmentations: <ul> <li>Normalization</li> <li>h-flip, v-flip</li> <li>rotate180</li> <li>random brightness/contrast</li> <li>jpeg compression</li> <li>random scale (limit=0.1)</li> </ul> <h3>Tips:</h3> <ul> <li>Pseudolabels. For the classification step we used the most confident predictions from the previous submission. The confidence score for each image: <code>np.mean(np.abs(np.subtract(prob_cls, 0.5)))</code>, where prob_cls is the probability of each class for the image</li> <li>Use <code>torch.jit</code> to serialize your models (it helps a lot to transfer models to Kaggle without pain)</li> <li>A trick that improves model quality by about 0.3-0.5 points for all models - best-checkpoint weights averaging (<strong>weights!, not predictions</strong>). During training the 5 best checkpoints were saved and then different combinations of them were evaluated to find the best candidates to average; a sketch is shown after this write-up</li> </ul> <h3>What did not work:</h3> <ul> <li>binary classification on Stage 1</li> <li>multilabel segmentation on Stage 3</li> <li>multiclass segmentation with an aux classification output</li> <li>gated multiclass segmentation with an aux classification output and <code>mask = mask_output * cls_output</code> (during training and inference)</li> <li>we also tried to fuse conv+batchnorm to reduce model inference time, but got only minor improvements; for <code>se</code> models it is just about a ~5% speedup :(</li> </ul> <h3>Useful libs you have to know:</h3> <ul> <li>[PyTorch] <a href="https://github.com/qubvel/segmentation_models.pytorch">https://github.com/qubvel/segmentation_models.pytorch</a></li> <li>[PyTorch] <a href="https://github.com/qubvel/ttach">https://github.com/qubvel/ttach</a></li> <li>[Keras] <a href="https://github.com/qubvel/segmentation_models">https://github.com/qubvel/segmentation_models</a></li> <li>[Keras] <a href="https://github.com/qubvel/efficientnet">https://github.com/qubvel/efficientnet</a></li> <li>[Keras] <a href="https://github.com/qubvel/tta_wrapper">https://github.com/qubvel/tta_wrapper</a></li> <li>[Everywhere] <a href="https://github.com/albu/albumentations">https://github.com/albu/albumentations</a></li> </ul> <h3>Useful papers:</h3> <ul> <li>Gradients accumulation: <a href="https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255">https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255</a></li> <li>SWA: <a href="https://towardsdatascience.com/stochastic-weight-averaging-a-new-way-to-get-state-of-the-art-results-in-deep-learning-c639ccf36a">https://towardsdatascience.com/stochastic-weight-averaging-a-new-way-to-get-state-of-the-art-results-in-deep-learning-c639ccf36a</a></li> <li>Pseudolabels implementation: <a href="https://arxiv.org/abs/1904.04445">https://arxiv.org/abs/1904.04445</a></li> </ul> <p>Special thanks to my awesome teammates!</p>
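<p>The best-checkpoint weights averaging tip referenced above can be implemented in a few lines of PyTorch. A minimal sketch, assuming each checkpoint is a plain state_dict saved with torch.save:</p> <pre><code>import torch

def average_checkpoints(paths):
    state_dicts = [torch.load(p, map_location='cpu') for p in paths]
    avg = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        # Average the weights themselves (not the predictions) and keep the original dtype.
        avg[key] = stacked.mean(dim=0).to(state_dicts[0][key].dtype)
    return avg

# model.load_state_dict(average_checkpoints(['best1.pth', 'best2.pth', 'best3.pth']))
</code></pre>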
IEEE-CIS Fraud Detection
The REAL UserID, different from CardID
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I do not agree with the many discussions calling the CardID a UserID. They are very different. A user can use multiple cards, and a fraudulent user will probably use more cards than a regular one.</p> <h3>Part 1, convincing you to follow me in this idea</h3> <p>If you are ready to assume that the following transactions have been made by a single user, despite using many different cards, then this discussion (and my notebook) will be of interest to you.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2Fe4cc0c2e2dd227970a1021c4068e2840%2FCapture%20dcran%20de%202019-10-04%2012-30-47.png?generation=1570214279323977&amp;alt=media" alt=""></p> <p>You can see that the transactions are grouped by color (same color for the same card). You can also see that these transactions have probably been made by the same user, because of the very specific amounts, the short period, the DeviceInfo, the "IP" (id19 id20) and the browser.</p> <p>The cardID column was created thanks to an algo that I share in a kernel (<a href="https://www.kaggle.com/tuttifrutti/isolating-a-cardid">https://www.kaggle.com/tuttifrutti/isolating-a-cardid</a>)</p> <p>After running the algo that I'm sharing in another notebook, we were able to group these transactions, using some transactions as "bridges" between groups. Here, the bridge between the green group and the yellow one is the amount/Device/Browser/IP of one of the green group's transactions, which is the same as that of the yellow one. The same applies to the pink group.</p> <p>We've managed to create an algo that reconstructs a UserId based on its transactions. See the results for this example:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2Fc18ecdd814b52527141530f778828814%2FCapture%20dcran%20de%202019-10-04%2012-50-34.png?generation=1570214327951966&amp;alt=media" alt=""></p> <h3>Part 2, some results</h3> <p><strong>1st example</strong> On this first screenshot we can see: - Card1 14276 was grouped by CardID with another card; we cannot see the second one on the screenshot, but the variable cardID_count indicates that this group contains 2 transactions (and 2 of them are fraudulent (cardID_sumFraud)) - Same for Card1 8755. - These two cards were used by the same User, and are grouped with other cards. (12 transactions in total by the User, with 12 frauds)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F413dc1fe22a09a630a7e30394de39b17%2FCapture%20dcran%20de%202019-10-04%2014-54-32.png?generation=1570214408842410&amp;alt=media" alt=""></p> <p><strong>2nd example</strong> Here we can see that a fraudulent card had 10 transactions (incl. 10 fraudulent), and two others had one. <br> The User grouping succeeded in grouping these cards together along with others that don't appear on this screenshot. 
</p> <p>However, we can see that this user has 2 non-fraudulent transactions out of 30 (an aggregation error?)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F0ea890306f3ba6bb5fbc1d8cea44c1f1%2FCapture%20dcran%20de%202019-10-04%2014-54-10.png?generation=1570214447409779&amp;alt=media" alt=""></p> <p><strong>3rd example</strong> This one is also nice: we can see that a card with 9 transactions has been grouped with a single-transaction card. It will obviously be easier to spot single-transaction frauds thanks to this grouping!</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F87232e396cd06955e7fb50ed42975474%2FCapture%20dcran%20de%202019-10-04%2014-54-18.png?generation=1570214547396949&amp;alt=media" alt=""></p> <p><strong>4th example: most spectacular</strong></p> <p>According to the grouping, the User made 275 transactions, out of which 231 were fraudulent. This high value is unique in the whole dataset.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F18c01c090b38fcb6454ae231b1bf93f6%2FCapture%20dcran%20de%202019-10-04%2014-54-14.png?generation=1570214677717746&amp;alt=media" alt=""></p> <p><strong>5th: most useful example</strong></p> <p>The UserID that was most represented in the test set, and very fraudulent in the train set, had 144 rows in the test set.</p> <p>This is the training data of a very big fraudulent group. You can see that some are misclassified (look at the isFraud column); you can easily spot them following the pattern in the C1/C2 columns.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F10d4018c64b80e3d051b444da6c84322%2FCapture%20dcran%20de%202019-10-01%2000-40-15.png?generation=1570214732899130&amp;alt=media" alt=""></p> <p>This user had around 150 transactions in the test set. Here there is no isFraud column, but I guess you can easily spot the not-necessarily-frauds by looking at the C columns.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2Fd883038b0fed483e4a1659b6bbf1d74e%2FCapture%20dcran%20de%202019-10-01%2000-41-23.png?generation=1570214753990242&amp;alt=media" alt=""></p> <p>I hope that you'll find some value in it! A kernel about how to compute these groupings is on its way. Imagine the feature engineering opportunities it can give you (counting the number of cards, creating features to see how hard a fraudster tried to stay hidden by changing devices, IPs, emails, etc.).</p> <p>Here is the kernel: <a href="https://www.kaggle.com/tuttifrutti/find-users-multiple-cardids-from-the-same-user">https://www.kaggle.com/tuttifrutti/find-users-multiple-cardids-from-the-same-user</a></p>
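<p>The "bridge" grouping idea can be prototyped with a small union-find structure. The sketch below is only illustrative (the bridge key and column names are assumptions, not the author's algorithm from the kernel): transactions sharing the same amount/device/"IP" link their cardIDs into one user group.</p> <pre><code>import pandas as pd

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def group_card_ids(df):
    uf = UnionFind()
    # Transactions with the same amount/device/"IP" act as bridges between cardIDs.
    bridge = (df['TransactionAmt'].astype(str) + '_' + df['DeviceInfo'].astype(str)
              + '_' + df['id_19'].astype(str) + '_' + df['id_20'].astype(str))
    for card_id, key in zip(df['cardID'], bridge):
        uf.union('card_' + str(card_id), 'key_' + key)
    # Every card in a connected component gets the same representative -> a userId.
    return df['cardID'].map(lambda c: uf.find('card_' + str(c)))
</code></pre>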
Open Images 2019 - Object Detection
1st place solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Please find our solutions <a href="https://arxiv.org/abs/2003.07557">here</a>: <a href="https://arxiv.org/abs/2003.07557">https://arxiv.org/abs/2003.07557</a> The full description of our core model, <a href="https://arxiv.org/abs/2003.07540">TSD</a> (improves the mAPs for all backbones by 3~4%) is also released today at <a href="https://arxiv.org/abs/2003.07540">https://arxiv.org/abs/2003.07540</a> Our code for the TSD has been released <a href="https://github.com/Sense-X/TSD">here</a></p>
IEEE-CIS Fraud Detection
19th Place Solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2><strong>Introduction</strong></h2> <p>First of all, I would like to thank Kaggle and the host for hosting such an interesting competition! I would also like to thank my awesome teammates, <a href="https://www.kaggle.com/masatomatsui">e-toppo</a>, <a href="https://www.kaggle.com/kurupical">kurupical</a>, <a href="https://www.kaggle.com/shogonagano">pondel</a>.</p> <p>Our team name ‘クソザコやねん’ means ‘We are very very very weak’ in Japanese. But at the end of the competition our team turned out to be a very strong team, and finally got a gold medal. That’s why we changed the team name to ‘クソザコちゃうねん’, which means ‘We are not weak’.</p> <p>By the way, we met each other in the on-site competition called <a href="https://atma-cup.atma.co.jp/">atmaCup</a>, and formed a team at a social gathering for this competition. The next atmaCup will be held in Osaka, Japan on November 23, 2019. Come and participate with us!</p> <h2><strong>Model pipeline</strong></h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F623370%2F1c5ab565e368ae60b9c59d9456d95b8e%2F2019-10-06%206.02.12.png?generation=1570338484162909&amp;alt=media" alt=""></p> <h2><strong>Feature Engineering</strong></h2> <ul> <li>UserID (combining card features with the date D1 days before the TransactionDT; a sketch is shown at the end of this write-up)</li> <li>Aggregation by UserID (count, mean, var)  - C1-C14  - D1-D15  - TransactionAmt etc.</li> <li>max/min values, max/min columns between similar C,V features  - example: &nbsp; &nbsp; &nbsp; &nbsp;V212, V274, V317 &nbsp; &nbsp; &nbsp; &nbsp;V213, V275, V318 &nbsp; &nbsp; &nbsp; &nbsp;V167~V171, C2</li> <li>decompose D, C, V features (using PCA)</li> <li>nan pattern (D, C, V, M)</li> <li>isDecember flag</li> </ul> <h2><strong>Validation Strategy</strong></h2> <p>We first used a stratified 5-fold with no shuffle for validation (<a href="https://www.kaggle.com/kurupical">kurupical</a> kfold instead of Stratified). Then we started using time series splits to avoid huge shakes. Our time series split has 7 folds. The | indicates the train/val split. &nbsp; 1) 0 1 2 3 | 4 &nbsp; 2) 0 1 2 3 4 | 5 &nbsp; 3) 0 1 2 3 4 5 | 6 &nbsp; 4) train on all data using 1.1x the number of rounds early stopped in 3. Blend: using weighted average 1) * 0.1 + 2) * 0.1 + 3) * 0.25 + 4) * 0.55.</p> <h2><strong>Models and ensemble</strong></h2> <p>In this competition, we decided to choose our final submission with the following 2 different strategies in order to prevent the risk of a shake (which didn’t occur). - submission1: High LB model (Full model in the slide.) - submission2: Conservative model (submission of the k2_samp model in the slide)</p> <p>Finally, we made 10 models (we count versions that only change the validation method or k2_samp as the same model), including only-ProductCD=W models, models without MAGIC features, and so on. Our best single model scored LB 0.9588 with LGBM.</p> <p>For the optimization of the blending weights, we used SciPy Nelder-Mead, which was used in the <a href="https://speakerdeck.com/hoxomaxwell/home-credit-default-risk-2nd-place-solutions?slide=55">2nd place solution of Home Credit Default Risk</a> . 
We decided not to optimize the weights between the time-series-based models and the stratified-based model, and chose the following weights for the final submission: 0.4*timeseries + 0.4*stratified + 0.2*timeseries(k2_samp).</p> <h2><strong>Post Process</strong></h2> <p>We made a unique user ID which is the concatenation of card1-card6 and the differences Day-D1 and Day-D10, and checked each user's record count and mean isFraud. If all of a user's transactions in the train dataset are “Fraud”, we replaced the submitted prediction with 1; if all of them are “not Fraud”, we replaced it with 0. This postprocess boosted the score by +0.0007 on Public and +0.0004 on Private. If you want to know the details, please refer to my kernel: <a href="https://www.kaggle.com/kurupical/postprocess-19th-place-solution-team">https://www.kaggle.com/kurupical/postprocess-19th-place-solution-team</a></p> <h2><strong>Conclusion</strong></h2> <p>In this post, we briefly summarised our model pipeline. Please feel free to ask us any questions you have about our solution.</p> <h2><strong>Appendix (Methods that didn’t work for us)</strong></h2> <ul> <li>Neural network</li> <li>Pseudo labeling</li> </ul>
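<p>The blending-weight search with Scipy's Nelder-Mead mentioned above could look roughly like the sketch below. This is a minimal illustration, not the team's actual code: the out-of-fold prediction list (<code>oof_preds</code>) and target <code>y</code> are placeholders you would supply yourself.</p> <pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def neg_auc(weights, oof_preds, y):
    # normalize the weights so they sum to 1, then blend the model predictions
    w = np.abs(weights) / np.abs(weights).sum()
    blend = sum(wi * p for wi, p in zip(w, oof_preds))
    return -roc_auc_score(y, blend)

def find_blend_weights(oof_preds, y):
    # oof_preds: list of out-of-fold prediction arrays, one per model
    init = np.ones(len(oof_preds)) / len(oof_preds)  # start from equal weights
    res = minimize(neg_auc, init, args=(oof_preds, y), method='Nelder-Mead')
    return np.abs(res.x) / np.abs(res.x).sum()
</code></pre>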
IEEE-CIS Fraud Detection
40th place solution, my part.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>Preamble</h3> <p>First of all I want to congratulate all the winners, as you will get paid for your hard work. Also a personal congratulation to <a href="/cdeotte">@cdeotte</a> on his 'promotion' to competitions master. 3 more gold medals and we will have one more triple GM. Congratulations to <a href="/fatihozturk">@fatihozturk</a> for becoming a competitions GM. And congratulations to all other participants, because the best prize you get out of every competition is what you have learned.</p> <h3>Solution</h3> <ul> <li>We found the same thing as, I suppose, any other team from the top 1-2% has found, which is a uid built from card1, addr1, email and TransactionDT - D1.</li> <li>Then came a lot of different groupby's on that uid (a small sketch of such uid features is given at the end of this post)</li> <li>Also my teammate <a href="/nvarganov">@nvarganov</a> engineered some fancy shifting features, which added around 0.0005 to my best single model when I used them</li> <li>All in all my best model had ~1000 features. It was a LightGBM on 10 KFolds with no shuffle and it scored 0.956 Public and 0.932 Private</li> <li>Our best submission was a blend of all of our best models, which contained LightGBM and XGBoost.</li> </ul> <h3>C features interaction</h3> <p>Here is the code for some C features interactions which I used in my best model and which improved its score by a little bit. I believe it was around 0.0002-0.0003.</p> <p>A list of C features:</p> <pre><code>import pandas as pd  # X/test are the train/test DataFrames, y is the target Series

c_cols = [col for col in X.columns if col.startswith('C') and len(col) &lt;= 3]
</code></pre> <p>Multiplication and subtraction interactions, each-to-each:</p> <pre><code>for i in range(len(c_cols)):
    for j in range(i + 1, len(c_cols)):
        X['{}_mul_{}'.format(c_cols[i], c_cols[j])] = X[c_cols[i]] * X[c_cols[j]]
        X['{}_sub1_{}'.format(c_cols[i], c_cols[j])] = X[c_cols[i]] - X[c_cols[j]]
        X['{}_sub2_{}'.format(c_cols[j], c_cols[i])] = X[c_cols[j]] - X[c_cols[i]]
        test['{}_mul_{}'.format(c_cols[i], c_cols[j])] = test[c_cols[i]] * test[c_cols[j]]
        test['{}_sub1_{}'.format(c_cols[i], c_cols[j])] = test[c_cols[i]] - test[c_cols[j]]
        test['{}_sub2_{}'.format(c_cols[j], c_cols[i])] = test[c_cols[j]] - test[c_cols[i]]
</code></pre> <p>Computing a correlation matrix for all newly generated features:</p> <pre><code># note: the columns created above are named with '_sub1_'/'_sub2_', so match them here
sum_cols = [col for col in X.columns
            if '_mul_' in col or '_sub1_' in col or '_sub2_' in col]
corr_matrix = X[sum_cols].corr()
</code></pre> <p>Then removing all the features with a high correlation, keeping those which correlate better with the target value:</p> <pre><code>to_drop = list()

# Iterating over rows starting from the second one,
# because position [0, 0] would be self-correlation, which is 1
for i in range(1, len(corr_matrix)):
    # Iterating over columns of the row. Only going under the diagonal.
    for j in range(i):
        # See if the correlation between the two features is more than a selected threshold
        if corr_matrix.iloc[i, j] &gt;= 0.98:
            # Then keep the one of those two which correlates with the target better
            if abs(pd.concat([X[corr_matrix.index[i]], y], axis=1).corr().iloc[0][1]) &gt; abs(pd.concat([X[corr_matrix.columns[j]], y], axis=1).corr().iloc[0][1]):
                to_drop.append(corr_matrix.columns[j])
            else:
                to_drop.append(corr_matrix.index[i])

to_drop = list(set(to_drop))
</code></pre>
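<p>The uid and its groupby aggregations mentioned in the solution bullets could look roughly like the sketch below. The column names are real IEEE-CIS columns, but the exact uid recipe, the choice of email column and the aggregated columns are assumptions for illustration, not the author's code.</p> <pre><code>import pandas as pd

def add_uid_features(df):
    # df: the transaction table with TransactionDT in seconds
    day = df['TransactionDT'] // (24 * 60 * 60)                    # transaction day index
    df['uid'] = (df['card1'].astype(str) + '_' +
                 df['addr1'].astype(str) + '_' +
                 df['P_emaildomain'].astype(str) + '_' +
                 (day - df['D1']).astype(str))                     # "account start day"
    for col in ['TransactionAmt', 'C1', 'D15']:                    # a few example columns
        agg = df.groupby('uid')[col].agg(['mean', 'std'])
        agg.columns = ['uid_{}_mean'.format(col), 'uid_{}_std'.format(col)]
        df = df.merge(agg.reset_index(), on='uid', how='left')
    return df
</code></pre>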
APTOS 2019 Blindness Detection
78th place solution and thoughts (first medal!)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>My experience</h2> <p>Congratulations to all the participants! I am very excited as this is my first medal!</p> <p>Before you read my post, please upvote my teammate's ( <a href="/agscin">@agscin</a> ) post over <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/107940#latest-620900">here</a></p> <p>Just a little bit of my background before I started this competition. I started on Kaggle 3 years ago. I started working on some competitions seriously, but I was a complete beginner, and basically was forking kernels and playing around with hyperparameters. I started taking Kaggle more seriously when I saw the TGS salt competition. I saw some of <a href="/iafoss">@iafoss</a>'s kernels using fastai 0.7 and it introduced me to the fast.ai course. I was not able to compete seriously in TGS salt, but I started taking the fast.ai course. Since I was very busy, I went to about 3 lessons before the end of the year.</p> <p>But then in January, fast.ai released the third iteration of the course. I was very excited, and started taking the course more seriously. To practice, I started turning to Kaggle. I practiced my image classification skills on Histopathological Cancer Detection competition. I got within the top 14%, which was the best I had done in a Kaggle competition. While studying for the fast.ai course, I tried to do small projects in the form of Kaggle Kernels and competitions. For example, to practice using fast.ai tabular and understanding the usage of NN for tabular data, I tried it out on the LANL competition dataset. I wasn't taking the competition seriously, and it turned out that submission would have gotten me a silver medal, which was sad. To put my skills to the use, I then joined on the Freesound Audio Tagging 2019 competition, working on it seriously. Although it was an audio classification competition, it very quickly became an image classification competition. I was doing well, at one point almost close to a silver medal. However, I reached a dead-end, and my position started dropping rapidly. Eventually, I was only 8 positions away from getting a bronze medal, which was really disappointing to be so close, yet miss the medal.</p> <p>While working on Kaggle and studying with the fast.ai course, I decided to work on a small practical project to help improve my skills. Coincidentally, I had chosen the previous competition diabetic retinopathy dataset and worked on it during my free time for a couple months before this competition started. When I saw this competition, I was really excited, as I already had some idea regarding the type of data, a little bit about the literature, etc. Because of this, I repurposed my exiting kernels to publish a starter kernel and was able to share the resized version of the previous competition dataset, which I already had. </p> <p>At this point, I would like to share my solution and a little bit about what I and my teammate did:</p> <h2>Solution</h2> <p>I had worked already with ResNet50 using this dataset so I decided to try that. 
I used a model I trained on the previous dataset before the competition started as the pretrained model for my experiments. The training for that model used oversampling and mixup.</p> <p>Apart from the models I submitted from my starter kernel, I did most of my experiments using a 5-fold CV setup. Later I will describe some of the EfficientNet models (which were unsuccessful), in which I submitted single models.</p> <p>My CV was <em>not</em> stratified. Typically, one would take the out-of-fold predictions, put them all together, and calculate the metric on that. Unfortunately, since I was training one fold per kernel, I took the out-of-fold predictions for each fold, calculated the metric, and averaged them. This is not the way people typically do a CV, but it was the best and fastest way for the constraints I had.</p> <p>Unless otherwise noted, all models are ResNet50: 1. Taking the classification model from my starter kernel, adding seed and adding 5-fold CV --&gt; public LB 0.749, private LB 0.893 2. Adding <a href="/ratthachat">@ratthachat</a> <code>crop_image_from_gray</code> function, I got public LB 0.784, private LB 0.903 3. Tried adding mixup, only got public LB 0.777, but private LB was 0.906. I did not pursue mixup further 4. Added label smoothing, got public LB of 0.786, private LB of 0.905. 5. Trained for more epochs, got public LB of 0.793, private LB of 0.903. <strong>This is very surprising.</strong> Training longer improved public LB, but did not improve private LB. In fact, it was the same!! I am unsure why this is the case!</p> <p>This was essentially my model for the competition. Further experiments did not succeed on the public LB. But I will review them anyway, as it is interesting to compare the private LB: 1. Trained label smoothing model longer, got public LB of 0.769, and private LB of 0.905. Once again, <strong>the private LB stayed the same!!</strong> 2. I tried regression, and the public LB was 0.765, and private LB was 0.903 3. I tried ResNet152, and I got public LB of 0.775, and private LB of 0.901 4. I tried oversampling with a single model, with poor results. This is interesting, as oversampling was quite useful for the previous dataset based on previous experiments. </p> <p>Couple other things I tried that failed: - I tried training an XResnet50 from scratch on the previous competition dataset using LSUV initialization and LAMB optimizer. - I tried the LAMB optimizer to train a ResNet50 (with Imagenet weights) on the previous competition dataset.</p> <p>At this point, I was stuck with 0.793 and wanted to see what the hype was about EfficientNets. However, as you can see <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/105563#latest-617654">here</a> and <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/106559#latest-614441">here</a>, I significantly struggled. I finally was able to get 0.780 with an EfficientNetB3 and 0.781 with EfficientNetB4. These did surprisingly well on the private LB, obtaining 0.910, and 0.912. At this time, I teamed up with <a href="/agscin">@agscin</a>. We used these models in a special ensemble we tried out that used Random Forests to take all the predictions from all our models, but surprisingly this ensemble did not do well.</p> <p>We tried many things that did not work: 1. Various optimizers: Adam+Lookahead, RAdam, RAdam+Lookahead, Yogi, SGD, SGD+Lookahead 2. Mish activation 3. Larger and more complex models like ResNext models, SE-ResNet models, etc. 4. 
Even smaller models like MobileNetV2 5. <code>circular_crop</code> 6. Probably more I cannot remember right now LOL</p> <p>My teammate tested some of these on CIFAR10 and obtained interesting results that did not transfer to the APTOS dataset, which was quite unfortunate.</p> <p>Our final model (all classification) is a simple average ensemble of:</p> <p>2 x EfficientNetB3 trained purely on 2019 train data (with polar unrolling) for classification. Image size: 300x768. Public Score: 0.788, 0.782 <a href="https://www.kaggle.com/agscin/polar-unrolling-preprocessing-84th-place">https://www.kaggle.com/agscin/polar-unrolling-preprocessing-84th-place</a></p> <p>1 x EfficientNetB3 pretrained on 2015 data and fine-tuned on 2019 (with polar unrolling) for classification. Public Score: 0.794 Image size: 300x768</p> <p>5-fold CV ResNet50 trained for classification. Image size: 256x256. Public Score: 0.793</p> <p>Score: Public LB - 0.822, Private LB - 0.923</p> <p>I want to thank my teammate, Artur. I had a great time working with him. I would also like to thank <a href="/ratthachat">@ratthachat</a> for his <code>crop_image_from_gray</code> function. And even though I wasn't able to use the EfficientNets, I would like to thanks <a href="/drhabib">@drhabib</a> for his contributions to the competition. I would finally like to thank the fast.ai team for developing an amazing library and course that helped me jumpstart my deep learning experience. I also would like to thank the fastai community for the support and guidance.</p> <h2>Thoughts</h2> <ul> <li>If you are hitting a dead-end, ask for help on the forums, and people will be glad to help you out. Also, teaming up and ensembling can also really help you if you are struggling. After all, two minds are better than one!</li> <li>In this competition, we had no idea what the private LB would be like. But eventually, it turned out to be like the training data (I was so <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/105763#latest-609621">wrong</a> and I am not sure why). But it meant that the age-old adage "Trust your CV" once again was proven true!</li> <li>Even though some of these new optimizers and activation functions did not turn out to be successful in this competition, some of them did well on CIFAR10 and are worth trying out in other competitions.</li> <li>Being honest, the ResNet50 models I developed probably have low generalization ability. Interestingly, the validation loss was slowly increasing while QWK also increased. It is possible that ResNet50 may have some problem generalizing. But I believe some of this may have to do with the optimizer and the hyperparameters used. A few hours before the deadline, I tried training a ResNet50 with SGD and I saw almost no difference between training and validation loss. Unfortunately, there was not enough time to try to submit or add to our ensemble so I had no idea how it did on the private LB. According, to <a href="https://www.fast.ai/2018/07/02/adam-weight-decay/">this article</a> Adam's generalization ability, is bad due to poorly chosen hyperparameters but I am unsure. This is something to look into a little further, but it may be important to use SGD optimizer as a baseline.</li> </ul> <p><strong>In conclusion, I learned a lot from this competition. I am happy to get my first medal, and I am excited to apply what I learned here to other competitions!</strong></p>
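<p>The fold-wise CV scoring described above (computing the quadratic weighted kappa on each fold's out-of-fold predictions separately and then averaging, rather than pooling all folds first) only takes a few lines. This is a minimal sketch with placeholder names, not code from the write-up; predictions are assumed to be already rounded to the 0-4 severity grades.</p> <pre><code>import numpy as np
from sklearn.metrics import cohen_kappa_score

# fold_true / fold_pred: lists with one (labels, predictions) pair per fold
def per_fold_qwk(fold_true, fold_pred):
    scores = [cohen_kappa_score(t, p, weights='quadratic')
              for t, p in zip(fold_true, fold_pred)]
    return np.mean(scores), scores
</code></pre>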
Severstal: Steel Defect Detection
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for sharing this so late. First of all, thanks to the organizer Severstal and to Kaggle for a very interesting and challenging competition. I hope all the solutions can bring some help and ideas for steel defect detection. Thanks to everyone who shared their ideas in this competition; without your help I couldn't have got this gold. I especially have to thank <a href="/hengck23">@hengck23</a>! After trying a lot of model pipelines, I found his solution to be the most concise one, and my final solution is based on it.</p> <h2>Score</h2> <p>Winning submission: Public LB 0.91863, Private LB 0.90765</p> <p>Best submission: Public LB 0.91824, Private LB 0.90934</p> <h2>Summary</h2> <ul> <li><strong>Basic Model:</strong> Unet, Feature Pyramid Network (FPN)</li> <li><strong>Encoder:</strong> efficientnet-b3, efficientnet-b4, efficientnet-b5, se-resnext50</li> <li><strong>Loss:</strong> Focal Loss</li> <li><strong>Optimizer:</strong> Adam, init lr = 0.0005</li> <li><strong>Learning Rate Scheduler:</strong> ReduceLROnPlateau (factor=0.5, patience=3, cooldown=3, min_lr=1e-8)</li> <li><strong>Image Size:</strong> 256x800 for training, 256x1600 for inference</li> <li><strong>Image Augmentation:</strong> horizontal flip, vertical flip</li> <li><strong>Sampler:</strong> Weighted Sampler</li> </ul> <p><strong>Ensemble Model:</strong> I simply average the output probabilities of 9 models to get the final mask probability, without TTA (a sketch of this averaging is given at the end of this post):</p> <ol> <li>FPN + efficientnet-b5 + concatenation of feature maps</li> <li>FPN + efficientnet-b4</li> <li>Unet + efficientnet-b4, with pseudo-labeled data added to the training data</li> <li>Unet + efficientnet-b4, trained with heavy image augmentation</li> <li>Unet + efficientnet-b4 + SCSE layer</li> <li>Unet + efficientnet-b4 + SCSE layer, with pseudo-labeled data added to the training data</li> <li>Unet + efficientnet-b4 + Mish layer</li> <li>Unet + efficientnet-b3</li> <li>Unet + se-resnext50</li> </ol> <p><strong>Threshold</strong>: Label Thresholds: 0.7, 0.7, 0.6, 0.6; Pixel Thresholds: 0.4, 0.4, 0.4, 0.4</p> <h2>Model Pipeline</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F263239%2Ff7bf0093fee5274cf1aa07efc29b0c7a%2Fmodel.png?generation=1573784178983559&amp;alt=media" alt=""></p> <p>You can check this <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/111457#latest-654845">Link</a> for more detail and code, thanks to HengCherKeng again!</p> <h2>The key points that I think helped me win a gold</h2> <ul> <li>use a code pipeline: <a href="https://github.com/PavelOstyakov/pipeline">https://github.com/PavelOstyakov/pipeline</a></li> <li>make sure of the diversity of models in the ensemble</li> <li>using a one-stage model pipeline saved me from training a classification model individually and from tuning too many hyper-parameters. Simple is best.</li> </ul>
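<p>The averaging plus label/pixel thresholding could be combined roughly as below. One point is an assumption: the write-up does not say how the "label threshold" is applied, so here it is read as a threshold on the maximum pixel probability of a class, which is only one common interpretation.</p> <pre><code>import numpy as np

# probs: list of arrays, one per model, each of shape (4, H, W) with sigmoid outputs
LABEL_THRESH = [0.7, 0.7, 0.6, 0.6]   # per-class "does this defect exist" threshold
PIXEL_THRESH = [0.4, 0.4, 0.4, 0.4]   # per-class pixel threshold

def ensemble_masks(probs):
    avg = np.mean(probs, axis=0)                          # simple average, no TTA
    masks = []
    for c in range(4):
        if avg[c].max() &lt; LABEL_THRESH[c]:                # class judged absent
            masks.append(np.zeros_like(avg[c], dtype=np.uint8))
        else:
            masks.append((avg[c] &gt; PIXEL_THRESH[c]).astype(np.uint8))
    return np.stack(masks)
</code></pre>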
Open Images 2019 - Object Detection
3rd place solution [0.67170|0.64214 Public|Private LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Learning an Efficient Network for Large-Scale Hierarchical Object Detection with Data Imbalance: 3rd Place Solution to Open Images Challenge 2019 <a href="https://arxiv.org/abs/1910.12044">https://arxiv.org/abs/1910.12044</a></p>
Severstal: Steel Defect Detection
7-th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our team member <a href="https://www.kaggle.com/xuyuan">xuyuan</a> and I had already participated in a segmentation challenge (TGS salt) and we thought this time would be a little easier, so we joined this competition really late, 4-5 weeks ago. But these few weeks were not enough to try out many things. We didn't even have time to properly train a classifier, so we stuck to a pure segmentation ensemble.</p> <p>Our final ensemble contained 3 efficientnet unet models:</p> <ul> <li>2 folds of efficientnet-b7 unet, trained on 256x384 with mixup and label smoothing. The best public score was 0.91288. We don't know the private score because only the csv file was submitted. In the final ensemble, these b7 models were fine-tuned on the full resolution.</li> <li>1 fold of efficientnet-b4 unet, trained on 256x384 and fine-tuned on the full resolution.</li> <li>2 folds of efficientnet-b0 unet, trained on 256x384 and fine-tuned on the full resolution. The best one got a public score of 0.91284 and a private score of 0.89719.</li> </ul> <p>For the ensemble we tried many thresholds. For the final submission we used [0.8, 0.99, 0.75, 0.75], which produced 87, 0, 602, 111 class images. Another submission which used [0.8, 0.99, 0.78, 0.75] produced 87, 0, 577, 111 class images. That one would have had a private score of 0.90819 and a public score of 0.92016, which would have been enough for second place. So predicting class 3 less often seems to be a trick.</p> <p>The min area was 100 and the pixel threshold 0.3.</p> <p>What worked for us so far:</p> <ul> <li>AdamW with the Noam scheduler (a small sketch follows below)</li> <li>gitlab pipeline to manage training experiments</li> <li>mixup and label smoothing</li> <li>fine-tuning on the full resolution</li> <li>random sampler</li> <li>cross entropy loss</li> </ul> <p>What didn't work:</p> <ul> <li>SGD</li> <li>SWA worked really well for the salt competition, but for this competition it didn't work at all</li> <li>pseudo labeling (trained on it in the last 2-3 days)</li> <li>training a classifier</li> <li>balanced sampler</li> </ul>
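<p>The "AdamW with the Noam scheduler" combination could be set up roughly as below. This is only a sketch: the warmup length and peak learning rate are placeholders, not the team's settings, and the schedule is the usual Noam-style warmup-then-inverse-square-root decay normalized so that the peak equals the base learning rate.</p> <pre><code>import torch

def make_optimizer(model, base_lr=1e-3, warmup=1000):
    optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr)

    def noam(step):
        step = max(step, 1)
        # increase linearly during warmup, then decay with 1/sqrt(step)
        return min(step ** -0.5, step * warmup ** -1.5) * (warmup ** 0.5)

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam)
    return optimizer, scheduler

# usage per training step:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
</code></pre>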
Severstal: Steel Defect Detection
24th place training framework using pytorch
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We got 24th place in Steel Defect Detection. While working on this competition, we found that there are few good semantic segmentation training frameworks in PyTorch, which made it hard to run experiments, so we developed a library that mimics <a href="https://github.com/open-mmlab/mmdetection">mmdetection</a>.</p> <p>Our implementation is named <a href="https://github.com/Media-Smart/vedaseg">vedaseg</a>; it contains fpn, unet, deeplabv3plus, deeplabv3, pspnet, etc. It is a modular semantic segmentation library which is flexible and extensible: we can put together a new semantic segmentation model like building a Lego toy.</p>
Severstal: Steel Defect Detection
Step by step (9th place solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It will be a long story (just like I did with <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44659#251185">Porto Seguro</a>) so grab yourself a coffee :)</p> <h2>Premature optimization is the root of all evil</h2> <p>What was the goal of this competition? In my opinion the goal was to predict mask for specific image and specific defect type. That's the way data is constructed - one row of data is combination of image and defect type. </p> <p>But for some reason (almost) everyone assumed that the goal was to build model which predicts 4 masks from 1 image. You may think it's not really a difference, I will explain why I think it matters.</p> <p><a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/106099#610085">Two months ago</a> I wanted to build 4 classification models and 4 segmentation models - one for each class. In this thread <a href="/phoenix9032">@phoenix9032</a> proposed three Pipeline Strategies. Please look at strategy 2. </p> <p>We agree we need some kind of classifier. Why we need it? What should it do? Classifier takes image as input and predicts probability that class has a defect. And what we should do with that information? </p> <p>We can use that information after segmentation to say that segmentation was pointless. But in that case what was the reason to start segmentation at all? We have only 1 hour to predict everything.</p> <p>So we need to use that information before segmentation. When classifier predicts that there is specific defect we can call segmentation, if not - we can skip segmentation and save time. But wait - if your segmentation model is predicting 4 masks at once you don't need information about defect type, because you will call segmentation anyway if at least one defect is possible.</p> <p>But what if you build single classifier with 4 outputs and then you use separate segmentation model for defect? Then you need to execute segmentation only for the defect type with high probability.</p> <p>How do you select which model is the best one? The one with best metrics, right? But how do you know which model has best metrics if you average 4 defects? What if I told you that predicting each mask is independent task? When you split competition to 4 different parts you can work on each of them independently - which makes solution much more stable - and then you can even survive the shakeup.</p> <p>And that's not all. How to handle class inbalance? I read many posts that people can't predict class 1 and class 2. Some people dropped class 2. In my case each class was separate training. On different training data.</p> <p>You can assume otherwise - that predicting defects is not independent task, there are no pixels with two different defects, so you may say model which predicts all defects at once is smarter. Like softmax with 5 class (I had no time to try that). But what are you doing after predicting mask and before saving submission? You are executing some kind of postprocessing - and then you can remove defect at all if it's too small. 
And again this is independent for each class.</p> <h2>metrics</h2> <p>Soon I realized that <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/107182">something is wrong with the metrics</a>. I wasn't sure, maybe there was some mistake in my thinking - thanks to <a href="/zavareh89">@zavareh89</a> for confirmation.</p> <p>"The leaderboard score is the mean of the Dice coefficients for each ImageId, ClassId pair in the test set." </p> <p>It means that score in this competiton was calculated separately for each image and for each defect type. So if ground truth is mask with zero pixels and you will predict zero pixels - you have score 1, but if you predict at least one pixel - your score is 0. It is crucial to filter out empty masks.</p> <p>Default metrics used in kernels (at least in keras kernels) takes whole batch of images, four masks together and flattens that. Then intersection and union is calculated in array of one dimension. In which case ground truth is mask with zero pixels if you flatten all images and all masks? Only when all 4 masks in all images in batch are empty - which is rare case. This metrics is totally wrong.</p> <h2>loss</h2> <p>If there is something I learned from TGS Salt competiton it is the lovasz loss.</p> <p>The problem with binary crossentropy is that it works in the beginning but then the loss is decreasing but the metrics is not increasing. Lovasz loss is much better for segmentation, but it needs good model to start, because the training is slower.</p> <p>I tested lovasz_hinge and lovasz_softmax. </p> <p>So the correct workflow is: - train with BCE - take best model (using metrics not loss!) - train with lovasz_hinge</p> <p>It always works. </p> <h2>segmentation models</h2> <p>Thanks to <a href="/cdeotte">@cdeotte</a> for <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/103367#latest-649639">information how to use segmentaton models</a></p> <p>Thanks to <a href="/bibek777">@bibek777</a> for <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/111110#640168">info about EfficientNet.</a></p> <p>And of course thanks to <a href="/pavel92">@pavel92</a> for creating segmentation models :)</p> <p>In the beggining I was using unet with resnet34. Then I was trying different networks - resnet50, densenet, etc... but performance was lower. Then I tried efficientnet and it finally it was better.</p> <p>I used resnet34 classifier to the end of the competition, but for segmentation I switched to b0, then b1, then b2.</p> <h2>PyTorch</h2> <p>Everything was good, I was doing my deep learning and then Kaggle announced new feature - now time of GPU usage is limited to 30 hours per week. You may think 30 hours is plenty of time, truth is I was able to use half of this time during one day. <a href="https://www.kaggle.com/general/108481#624663">So I wrote this.</a> </p> <p>I borrowed GTX 1080 and to my surprise I was quickly able to make everything work locally, so few days later I bought RTX 2070. From that time all my training was performed on my GPU. Funny thing: last competiton when I was using my home computer instead Kaggle kernels was Porto Seguro (the one with gold medal).</p> <p>I had no issues with speed of my local GPU - it was similar to Kaggle kernel. But <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/109185">there was another kind of problem.</a>. Cheap GPU has smaller memory than the one used by Kaggle. So I wasn't able to fit same batch. 
And my model started to overfit. Thanks to <a href="/bfishh">@bfishh</a> for confirmation. Thanks to <a href="/ostamand">@ostamand</a> for info about gradient accumulation in PyTorch. Also thanks to <a href="/sandeepsign">@sandeepsign</a> and <a href="/ashwan1">@ashwan1</a> for info about float16.</p> <p>I started to learn PyTorch. spoiler: I was never able to try float16 and gradient accumulation didn't work correctly. But I realized PyTorch is much much better than TensorFlow/Keras, I had much more control over everything. I know guys in Google are smart and that's why they designed TensorFlow 2.0 but I had no time to check it, so I use PyTorch.</p> <p>Funny thing: I could fit larger batch in PyTorch than in Keras for same architecture. Maybe it is related to framework itself, maybe it is related to segmentation models implementation.</p> <h2>classifier and class inbalance</h2> <p>On first October I trained 4 folds resnet34 classifier. As I wrote before classifier is used to skip segmentation for some images. But in my case it is also used to handle class inbalance.</p> <p>When classifier predicts that image may have defect this image is later processed by segmentation model. Let me rephrase this sentence - when classifier predicts that image doesn't have defect - it is not processed by segmentation model. We can conclude from that something about distribution of data used in segmentation model.</p> <p>For each class: - take all images with mask - add all images with high prediction of classifier - train segmentation only on this data</p> <p>I started from 0.5 threshold, final threshold was 0.01. This is my final trainset for each class: (targets means ground truth mask, outputs means classifier &gt; threshold, all is union of them)</p> <p><code> class 1 all 2405 / 12568 outputs 2377 / 12568 targets 897 / 12568 all-targets 1508 / 12568 class 2 all 949 / 12568 outputs 941 / 12568 targets 247 / 12568 all-targets 702 / 12568 class 3 all 8398 / 12568 outputs 8381 / 12568 targets 5150 / 12568 all-targets 3248 / 12568 class 4 all 1603 / 12568 outputs 1591 / 12568 targets 801 / 12568 all-targets 802 / 12568 </code></p> <h2>final training workflow</h2> <p>In the EfficientNet thread <a href="/hengck23">@hengck23</a> wrote following sentence:</p> <p>"My efficientnet b3 and b5 with fpn gives good results. Input size is crop of 256x400. You may want to use this as reference"</p> <p>But for my crop was always worse than full size. You can read it <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/111110#648254">here</a>. </p> <p>And then <a href="/phoenix9032">@phoenix9032</a> wrote following sentence:</p> <p>"Yes Chris . With qubvel models I can train with 256x256, and then predict with 256x1600. Or even run 256x256 for 30 epochs , load the model and finetune with 256x512 by passing that as input size and then finally finetune stage 3 with 256x1600 as well . "</p> <p>Theoreticaly there is no new information here. I was also using crop for training and full size for prediction. But finetuning on different sizes? Why not? So I tested it. And there was no improvement.</p> <p>The main problem is overfitting. First model generalizes but at some point it overfits to training data. By using augmentation we can push it more, but then it overfits to all possible augmentatons and there is no longer improvement on validation.</p> <p>But why when I train on 256x416 and then I train on 256x800 it needs time to overfit? If the model is already dumb it should stay dumb with 256x800 too. 
Instead it looks at larger input data and learn new stuff. Validation metrics was not increasing, but...</p> <p>What if I use different crop sizes not to finetune but as an augmentation?</p> <p>The problem was that augmentation was in DataGenerator which produces single image and for training I needed whole batch of same resolution. So I could look into PyTorch docs because it's probably very easy to implement custom DataLoader but it was October 17th and I really had no time to fix potential bugs in my code.</p> <p>I was already using two DataGenerators: one for the training and one for validation. Why not create 4 DataGenerators and 4 DataLoaders? (PyTorch FTW!)</p> <p>My final training workflow is: - train on 256x256 crops (all images from train dataset) batch_size = 12 - train on 416x256 crops (all images from train dataset) batch_size = 12 - train on 608x256 crops (all images from train dataset) batch_size = 8 - validate on full size (all images from valid dataset) batch_size = 4 - call it an epoch</p> <p>I verified that it works in each class - always result is better than full size training.</p> <p>It's important to decrease lr: <code> scheduler = ReduceLROnPlateau(optimizer, mode="min", patience=3, factor=0.5, eps=1e-09, verbose=True) </code></p> <p>For BCE stage: <code> optimizer = optim.Adam(model.parameters(), lr = 1e-3) </code></p> <p>And for lovasz stage: <code> optimizer = optim.Adam(model.parameters(), lr = 1e-4) </code></p> <h2>final days</h2> <p>After October 18th I used 24h per day only to train my final models. Last one was trained few hours before competition deadline. Without it I would be probably in the silver zone.</p> <p>My top two submissions from public are also my top two submission from private. I believe it is because my solution is stable.</p> <p>and now some details from my winning kernel:</p> <h3>classification - 4 folds resnet34 (BCE loss in name)</h3> <p><code> 20191003-classifier-class-2/model_class_a_56_0.072748.pth 20191003-classifier-class-2/model_class_a_46_0.074594.pth 20191003-classifier-class-2/model_class_a_51_0.065235.pth 20191003-classifier-class-2/model_class_a_37_0.063228.pth </code></p> <h3>segmentation - 5 folds FPN efficientnet-b2 (my metrics in name)</h3> <p>``` 20191019-class-3/model_lovasz_class_3_fold_0_c_25_0.787906.pth 20191020-class-3-fold-1/model_lovasz_class_3_fold_1_c_38_0.800861.pth 20191021-class-3-fold-2/model_lovasz_class_3_fold_2_c_30_0.805968.pth 20191022-class-3/model_lovasz_class_3_fold_3_c_28_0.809485.pth 20191024-class-3-fold-4/model_lovasz_class_3_fold_4_c_28_0.817969.pth</p> <p>20191024-class-1/model_lovasz_class_1_fold_0_d_1_0.829611.pth 20191024-class-1/model_lovasz_class_1_fold_1_d_1_0.831025.pth 20191024-class-1/model_lovasz_class_1_fold_2_e_1_0.836173.pth 20191024-class-1/model_lovasz_class_1_fold_3_c_30_0.867282.pth 20191024-class-1/model_lovasz_class_1_fold_4_c_20_0.833884.pth</p> <p>20191020-class-2/model_lovasz_class_2_fold_0_c_7_0.877366.pth</p> <p>20191023-class-4/model_lovasz_class_4_fold_1_d_21_0.869015.pth 20191023-class-4/model_lovasz_class_4_fold_0_e_3_0.879779.pth 20191023-class-4/model_lovasz_class_4_fold_3_d_1_0.884751.pth 20191023-class-4/model_lovasz_class_4_fold_4_d_2_0.901342.pth 20191023-class-4/model_lovasz_class_4_fold_2_e_9_0.849709.pth ```</p> <h3>classification thresholds</h3> <p><code> class 1: t = 0.3 class 2: t = 0.3 class 3: t = 0.25 class 4: t = 0.1 </code></p> <h3>classification results</h3> <p><code> class 1 idx_outputs_positive 191 class 2 idx_outputs_positive 71 class 3 
idx_outputs_positive 781 class 4 idx_outputs_positive 176 </code></p> <h3>segmentation thresholds</h3> <p>for each class: min mask size = 1000 pixel thresholds:</p> <p><code> class 1: t = 0.55 class 2: t = 0.7 class 3: t = 0.53 class 4: t = 0.5 </code></p> <h3>segmentation results</h3> <p><code> class 1: found masks: 92 class 2: found masks: 9 class 3: found masks: 586 class 4: found masks: 113 </code></p> <h2>summary</h2> <ul> <li>Kaggle is still the best place on the planet</li> <li>to me PyTorch is better than Tensorflow 1.x / Keras</li> <li>using segmentation models was very good idea</li> <li>reading everything on forum is extremely important</li> <li>I had 3 months for this competition and I trained everything in the last week</li> </ul> <p>Thanks Severstal for an awesome competition. Thanks everyone for great fun! See you in another competition, hopefuly soon, not in another 2 years :)</p>
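<p>To make the metric point from the 9th place write-up above concrete: the leaderboard averages one Dice score per (image, defect class) pair, and an empty prediction on an empty ground truth counts as a perfect 1, which is why flattening a whole batch across images and classes gives a misleading number. A minimal sketch (not the author's code):</p> <pre><code>import numpy as np

def dice_per_pair(pred_mask, true_mask, eps=1e-9):
    # one score per (image, class) pair, as the leaderboard computes it
    p = pred_mask.astype(bool)
    t = true_mask.astype(bool)
    if not t.any() and not p.any():
        return 1.0                                   # both empty: perfect score
    inter = np.logical_and(p, t).sum()
    return 2.0 * inter / (p.sum() + t.sum() + eps)

def mean_dice(preds, trues):
    # preds, trues: binary masks of shape (N_images, 4, H, W)
    scores = [dice_per_pair(preds[i, c], trues[i, c])
              for i in range(preds.shape[0]) for c in range(4)]
    return float(np.mean(scores))
</code></pre>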
APTOS 2019 Blindness Detection
32nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle for hosting this competition.</p> <p><strong>Overview</strong></p> <ul> <li>Use 2015 competition data.</li> <li>Ensemble Efficientnet B5 with different preprocessing methods.</li> </ul> <p><strong>Step1: pretrain</strong></p> <ul> <li>Use 2015 competition data as the train dataset and 2019 competition train data as the val dataset.</li> <li>Image size: 300x300.</li> <li>Model: Efficientnet B5 and B4.</li> <li>Preprocessing: no cropping, just resize.</li> <li>Loss: MSE, treated as a regression problem.</li> <li>Save the best model based on kappa score.</li> </ul> <p><strong>Step2: finetune</strong></p> <ul> <li>Split the 2019 train data into a 0.85:0.15 train/val split, and train all models on the same split.</li> <li>Image size: 300x300.</li> <li>Preprocessing: different preprocessing methods: resize, Cropping &amp; blur, Cropping &amp; Ben's Preprocessing.</li> <li>Heavy data augmentation: rotation, zoom, lighting, flip, shear, shift.</li> <li>Loss: MSE, treated as a regression problem.</li> <li>Save the best model based on kappa score.</li> </ul> <p><strong>Step3: ensemble</strong></p> <ul> <li><p>Use TTA: fliplr, flipud, rot90 (a small sketch is shown at the end of this post)</p></li> <li><p>The best single model (v1) is Efficientnet B5 with Cropping &amp; Ben's Preprocessing</p> <p>CV: 0.9294, LB: 0.8222, PB: 0.9266</p></li> <li><p>The best LB model (v2) is an ensemble of Efficientnet B5 with resize and Efficientnet B5 with Cropping &amp; blur</p> <p>CV: 0.9159/0.9271, LB: 0.8312, PB: 0.9274</p></li> <li><p>The best PB model (v3) is an ensemble of all three: Efficientnet B5 with resize / Efficientnet B5 with Cropping &amp; blur / Efficientnet B5 with Cropping &amp; Ben's Preprocessing</p> <p>CV: 0.9159/0.9271/0.9294, LB: 0.8264, PB: 0.9292</p></li> </ul> <p>I chose v1 and v2 as my final submissions.</p> <p><strong>Conclusion</strong></p> <ul> <li>Maybe you should trust your local CV (considering train/test accuracy); local CV was closer to the final PB score</li> <li>Different preprocessing methods are useful</li> <li>More data and heavy augmentation are useful</li> <li>Different image sizes may be helpful</li> </ul> <p>Thanks to everyone on this discussion board.</p>
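<p>The fliplr/flipud/rot90 TTA from Step 3 can be written generically as below. This is a sketch under assumptions: <code>predict_fn</code> is a placeholder for whatever model call you use, and channels-last (N, H, W, C) batches are assumed.</p> <pre><code>import numpy as np

def predict_with_tta(predict_fn, images):
    # images: (N, H, W, C) batch; predict_fn maps such a batch to per-image scores
    variants = [
        images,
        images[:, :, ::-1, :],                    # fliplr
        images[:, ::-1, :, :],                    # flipud
        np.rot90(images, k=1, axes=(1, 2)),       # rot90
    ]
    preds = [predict_fn(np.ascontiguousarray(v)) for v in variants]
    return np.mean(preds, axis=0)
</code></pre>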
IEEE-CIS Fraud Detection
Silver Medal in My First Kaggle Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Received a silver medal (54/6438) in my very first Kaggle competition! I know it's not the best (not as good as gold), but as someone who just started learning Machine Learning on Coursera not so long ago, and who only used logistic regression in real life, I am really happy where I am!</p> <p>Taking this chance, I have a few special thank you to say.</p> <p>Thank @kyakovlev for your wonderful baseline kernels. I have to admit that without your baseline, I would have had little idea where to start. People say your are the MVP of this competition and I couldn't agree more. </p> <p>Thank @cdeotte for your discussion thread on systematic feature engineering and all of your kind replies to other Kagglers' questions. Learned so much.</p> <p>And last but certainly not the least, thank you @raddar for your hint, and for showing me there was still time to build a good solution, when I was out of ideas and about to give up, by getting up to top 1% on leaderboard so fast although being late to the party. That was tremendous encouragement.</p> <p>And for others who kindly shared their kernels and insights, I made sure I upvoted them. </p> <p>Happy Kaggling!</p>
Understanding Clouds from Satellite Images
129th Place solution (Classifier Cascading + Segmentation)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Kaggle and Max Planck Institute for this interesting competition and congrats to all the winners! Here is a brief summary of my solution (Public 0.67698, Private 0.66713).</p> <ul> <li><p><strong>No Preprocessing</strong></p></li> <li><p><strong>Augmentations (by Albumentations)</strong> ShiftScaleRotate (scale_limit=0.1, rotate_limit=0, shift_limit=0.1, p=0.5, border_mode=0) horizontal flip (p=0.5) vertical flip (p=0.5)</p></li> <li><p><strong>Validation</strong> StratifiedKFold for the number of masks</p></li> <li><p><strong>Segmentation Model (ensemble of 7 models x 5folds)</strong> UNet-SeResNext50 UNet-ResNet50 UNet-ResNet18 UNet-ResNext101 LinkNet-EfficientNet-B5 FPN-EfficientNet-B4 (4 folds) FPN-EfficientNet-B5 FPN-EfficientNet-B6 UNet-EfficientNet-B6</p></li> <li><p><strong>Classification Model (ensemble of 7 models x 5folds)</strong> EfficientNet-B2 EfficientNet-B4 EfficientNet-B5</p></li> <li><p><strong>Loss</strong> BCE + Dice</p></li> <li><p><strong>Ensemble</strong></p></li> <li>simple average of the 14 segmentation models</li> <li><p>We also tried voting ensemble but it didn't work for us</p></li> <li><p><strong>Postprocessing</strong> TTA : None pixel threshold = 0.5 small mask threshold = 20000 Cascading of classifier result (Apply each classifier one after another) Convex Hull PostProcessing</p></li> <li><p><strong>Final submission</strong> Our final submission was selected based on best public scores. We believe main reason we scored less on private LB was because of weak segmentation models. Each of our models had a very average CV but our classifiers and postprocessing were really good.</p></li> </ul>
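<p>The pixel threshold, small-mask removal and convex hull post-processing listed above could be combined roughly as follows. The write-up only names the steps, so the order of operations and applying the hull per connected component are assumptions; the thresholds are the listed ones.</p> <pre><code>import cv2
import numpy as np

def postprocess(prob, pixel_thresh=0.5, min_size=20000):
    # prob: (H, W) sigmoid output for one class
    mask = (prob &gt; pixel_thresh).astype(np.uint8)
    if mask.sum() &lt; min_size:
        return np.zeros_like(mask)                  # drop small masks entirely
    # convex hull of each connected component (OpenCV 4.x findContours signature)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hull_mask = np.zeros_like(mask)
    for cnt in contours:
        hull = cv2.convexHull(cnt)
        cv2.drawContours(hull_mask, [hull], -1, 1, -1)   # fill the hull
    return hull_mask
</code></pre>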
Understanding Clouds from Satellite Images
Easy silver in last days [55th]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Easy silver in last days</h2> <p>I adapted my pipeline from Severstal Steel Defect Detection and was able to get a silver medal in the last two days with just 6 submissions. Here is a short description of the 55th place solution.</p> <p>2-step pipeline: 1) Multi-task network (classification + segmentation) used as a classifier to remove empty masks 2) Binary segmentation for each class</p> <h3>1st step.</h3> <p>I trained a 5-fold <code>FPN(resnet34) + aux classification output</code> on 480x640 images using <code>Flip</code> and <code>RandomBrightness</code> as augmentations. The model trained for just 6-7 epochs and then started to overfit; I did nothing about that and just saved the top 5 checkpoints according to the metric.</p> <p>Loss (segmentation head): bce+dice. Loss (classification head): bce. Optimizer: AdamW. Postprocessing: remove masks of less than 10000 pixels. Thresholds: [0.6, 0.6, 0.6, 0.6] <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F940556%2Faec538c6a536cc9dd070697b4e416870%2F2019-11-19%2010-58-58.png?generation=1574150366439784&amp;alt=media" alt=""></p> <h3>2nd step.</h3> <p>For each class I trained <code>2 x Unet(se_resnext50_32x4d)</code> only on images with masks of that class, with the same optimizer, image size and augmentations.</p> <p>Loss: bce+dice. Thresholds: [0.4, 0.4, 0.4, 0.4]</p> <h3>Ensemble</h3> <p>For all models I used checkpoint weight averaging (+0.005-0.01 on validation; a small sketch follows below). The models within each stage were then simply averaged, with Flip TTA.</p> <h3>Useful links</h3> <ul> <li>Segmentation Models: <a href="https://github.com/qubvel/segmentation_models.pytorch">https://github.com/qubvel/segmentation_models.pytorch</a></li> <li>Test Time Augmentation for PyTorch: <a href="https://github.com/qubvel/ttach">https://github.com/qubvel/ttach</a></li> </ul> <p><strong>And congratulations to winners!</strong></p>
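<p>Checkpoint weight averaging, as used in the ensemble step, amounts to averaging the state dicts of several checkpoints from the same training run. A minimal PyTorch sketch, assuming the checkpoints were saved as plain <code>state_dict</code> files (paths are placeholders):</p> <pre><code>import copy
import torch

def average_checkpoints(paths):
    # paths: checkpoints of the SAME model/run, saved as plain state_dicts
    states = [torch.load(p, map_location='cpu') for p in paths]
    avg = copy.deepcopy(states[0])
    for k, v in avg.items():
        if v.is_floating_point():
            avg[k] = sum(s[k] for s in states) / len(states)
        # integer buffers (e.g. BatchNorm num_batches_tracked) are kept from the first checkpoint
    return avg

# model.load_state_dict(average_checkpoints(['top1.pth', 'top2.pth', 'top3.pth']))
</code></pre>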
Understanding Clouds from Satellite Images
Finally GM & 7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all kagglers.</p> <h1>Small talk</h1> <p>After the failure in Steel, I had no time to feel sad, so I immediately moved to this competition and worked towards my GM title. One of my former teammates, <a href="https://www.kaggle.com/naivelamb">Xuan Cao</a>, chose to go solo. Of course, he made the right choice because he won a solo gold and became a GM. Congratulations to him. I then teamed up with my friends who, like me, had suffered in Steel: <a href="https://www.kaggle.com/mdlszhengli">Zheng Li</a>, <a href="https://www.kaggle.com/lanjunyelan">yelan</a>, <a href="https://www.kaggle.com/hesene">Jhui He</a> and <a href="https://www.kaggle.com/strideradu">Strideradu</a>.</p> <h1>Solution</h1> <p>Our solution is very simple: just an ensemble.</p> <h2>Segmentation v1:</h2> <p>Model: efficientnet e5/7-FPN, se101-FPN, se101-linknet. Loss: dice loss</p> <h2>Segmentation v2:</h2> <p>Model: efficientnet e5-FPN. Loss: SymmetricLovaszLoss + dice loss</p> <h2>Classification:</h2> <p>We tried some pure classifiers, but the dice improvement on the OOF predictions of our segmentation was limited, so we turned to multi-task learning: a segmentation model with an fc head.</p> <p>Model: efficientnet e5-fpn, se50-unet, se50-fpn. Loss: 0.1 * bce (classification) + (bce + lovasz + dice) (segmentation)</p> <h2>Ensemble:</h2> <h3>v1:</h3> <p>We averaged the probabilities from the classification models to remove false positives and used Segmentation v1 for the true positives. This gave around 0.670 OOF CV with classification thresholds of [0.65, 0.65, 0.65, 0.65]; after removing small masks (size thresholds [21000, 21000, 21000, 10000]) we got 0.6783 on the LB.</p> <h3>v2:</h3> <p>We averaged the classification probability and the max pixel probability from Segmentation v2 to remove false positives, but the LB was bad and the threshold was low (0.55; a low threshold was not good in Steel), so we abandoned this.</p> <h2>Post processing:</h2> <ol> <li>In the training set, each image has at least one label. So I took the samples from my 0.6783 sub above where all 4 channels were empty, extracted the classifier's maximum prediction probability on each of them, and restored the pixel mask of a category if its prediction probability was &gt; 0.55 (a small sketch of this idea follows below).</li> <li>I took the mask union of the samples that both Segmentation v2 and the 0.6783 sub predicted as positive. Combined with the above post-processing, we got 0.6800 on the LB.</li> </ol> <h1>Conclusion</h1> <ol> <li>Unfortunately, we did not select the best submission (0.67254), which was from ensemble v2; its public LB was 0.67360, so we did not select it, and of course we had a lot of submissions in the top 3. Fortunately, the submission we chose still put us in the gold zone.</li> <li>Thanks to my teammates for their efforts, and I congratulate myself on becoming a GM.</li> </ol>
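<p>The first post-processing step could look roughly like the sketch below. One detail is an assumption: the write-up says the pixel mask is "restored" but not which mask, so here the raw segmentation mask for that class is used as the fallback.</p> <pre><code>import numpy as np

def restore_empty_images(cls_probs, masks, seg_masks, prob_thresh=0.55):
    # cls_probs: (N, 4) classifier probabilities; masks: (N, 4, H, W) final binary masks;
    # seg_masks: (N, 4, H, W) raw segmentation masks used as the fallback.
    # Every training image has at least one label, so images with all 4 masks empty
    # get the most likely class restored if its probability is high enough.
    out = masks.copy()
    for i in range(masks.shape[0]):
        if out[i].sum() == 0:
            c = int(np.argmax(cls_probs[i]))
            if cls_probs[i, c] &gt; prob_thresh:
                out[i, c] = seg_masks[i, c]
    return out
</code></pre>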
Understanding Clouds from Satellite Images
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Kaggle and Max Planck Institute for this interesting competition and congrats to all the winners! Here is a brief summary of my solution (Public 0.67698, Private 0.66713).</p> <h2>Preprocessing</h2> <ul> <li>exclude bad images (removed 13 images)</li> <li>resize image size to (320, 512)</li> </ul> <h2>Augmentations (by Albumentations)</h2> <ul> <li>gamma (limit=(50,100), p=0.5)</li> <li>brightness (limit=0.2, p=0.5)</li> <li>shift (limit=0.2, border_mode=0, p=0.5)</li> <li>rotation (limit=30deg, border_mode=0, p=0.5)</li> <li>horizontal flip (p=0.5)</li> <li>vertical flip (p=0.5)</li> </ul> <h2>Validation</h2> <ul> <li>StratifiedKFold on the number of empty masks (a small sketch follows at the end of this post)</li> </ul> <h2>Model (ensemble of 7 models x 5folds)</h2> <ol> <li>UNet-ResNet34 + CBAM + Hypercolumns</li> <li>same as 1. but with a different seed</li> <li>UNet-ResNet18 + CBAM + Hypercolumns</li> <li>UNet-InceptionResNetV2 + CBAM + Hypercolumns</li> <li>UNet-SeResNext50 + CBAM + Hypercolumns</li> <li>UNet-ResNet34 + CBAM + FPA</li> <li>UNet-ResNet18 + CBAM + FPA</li> </ol> <p>I used the weights of the best validation-score epochs.</p> <h2>Loss</h2> <ul> <li>BCE + LovaszHinge</li> <li>on top of that I used deep supervision with a BCE+LovaszHinge loss (for non-empty masks only) multiplied by 0.1</li> </ul> <h2>Optimizer &amp; Scheduler</h2> <ul> <li>Adam &amp; CosineAnnealingWarmRestart (20-epoch cycle)</li> <li>learning rate: 1e-4 to 1e-6</li> </ul> <h2>Ensemble</h2> <ul> <li>simple average of the 7 models (x 5folds = 35 models in total)</li> </ul> <h2>Postprocessing</h2> <ul> <li>TTA: None + h-flip + v-flip + h- and v-flip</li> <li>pixel threshold = 0.45</li> <li>small mask threshold = 18000</li> <li>Both thresholds were determined by the 5-fold CV for model 1.</li> </ul> <h2>Final submission</h2> <ul> <li>I checked only the Public LB score for ensembles, so I needed some criteria to choose the final submission. I decided to choose two submissions which were good on the Public LB and stable against the small mask threshold, although these were not my best Public LB submissions. Luckily I survived the shake up and got a gold medal.</li> </ul>
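<p>Stratifying the folds by the number of empty masks per image, as mentioned in the Validation section, could be done as below. The dataframe layout (one row per image, one RLE column per class, NaN meaning an empty mask) is an assumption for illustration, not the author's code.</p> <pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold

def make_folds(df, mask_cols, n_splits=5, seed=42):
    n_empty = df[mask_cols].isna().sum(axis=1)          # number of empty masks per image
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    folds = np.zeros(len(df), dtype=int)
    for fold, (_, val_idx) in enumerate(skf.split(df, n_empty)):
        folds[val_idx] = fold
    return folds
</code></pre>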
Predicting Molecular Properties
12 Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We basically used this.</p> <p><a href="https://arxiv.org/abs/1906.04015">https://arxiv.org/abs/1906.04015</a></p> <p>We modified the architecture slightly so that the outputs would be on the connections of atoms, but that was pretty much it. For simplicity, we trained one net each on j_1, j_2, and j_3 couplings, and then split it into one net for each coupling type (e.g. 1JHC).</p> <p>The only slightly fancy thing we did was that we pretrained by initially using the Mulliken charges as features, which we removed further on in the training.</p> <p>That's it, no particularly clever tricks :-).</p> <p>P.S. If anyone has any questions about the paper, Brandon and I will be happy to address them to the best of our ability.</p>
Severstal: Steel Defect Detection
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners in this competition! This is my first competition on Kaggle and also the first competition for my teammates <a href="/zhuhongbo">@zhuhongbo</a> and <a href="/paffpaffyu">@paffpaffyu</a>. They were my former classmates at university. We are all very surprised that we won the competition.</p> <p>First of all, thanks to <a href="/hengck23">@hengck23</a> for sharing his insightful ideas. We followed some of his posts and got a huge improvement. Also thanks to <a href="/lightforever">@lightforever</a>; our best final solution used several models from him. And thanks to <a href="/pavel92">@pavel92</a> for his convenient segmentation library.</p> <p><strong>Classification</strong> Classification is an important part of this competition. Even though classifiers can only slightly improve your score beyond 0.915 on the public LB, they can work as a preliminary screening by filtering out around half of the images with no defects. This enabled us to ensemble more models in the segmentation part. We trained our classifiers on random crops of 224x1568 and ran inference on the full size. This random crop gave a slight improvement in accuracy. Augmentations: RandomCrop, Hflip, Vflip, RandomBrightnessContrast (from albumentations) and a customized defect blackout (a small sketch of this augmentation is given at the end of this post). Since this is a semantic segmentation task, we know exactly where the defects are. As a result, defect components can be randomly blacked out, and the label for the image also changes from 1 to 0 if all defects are blacked out. This augmentation indeed works on local CV and public LB. Here are some graphs of the training process of a ResNet34 classifier. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3601023%2Ffb6be779598553f5c9c1f02cecb252a4%2Floss_res22.png?generation=1571984109468690&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3601023%2F4c60680ddbb0c74b6d8d5ae6f86ae17f%2Faucv2.png?generation=1571984166024661&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3601023%2Fbe60f89444c0a3cea1a68c8be14283b6%2Faccv2.png?generation=1571984154002357&amp;alt=media" alt=""></p> <ul> <li>Batchsize: 8 for efficientnet-b1, 16 for resnet34 (both accumulate gradients over 32 samples)</li> <li>Optimizer: SGD</li> <li>Model Ensemble: 3 x efficientnet-b1 + 1 x resnet34</li> <li>TTA: None, Hflip, Vflip</li> <li>Threshold: 0.6, 0.6, 0.6, 0.6</li> </ul> <p><strong>Segmentation</strong> We have to admit that we used models from <a href="/lightforever">@lightforever</a>; these models improved our score from 0.907 private LB to our current score.</p>
Train data: 256x512 crop images Augmentations: Hflip, Vflip, RandomBrightnessContrast (from albumentations) Batchsize: 12 or 24 (both accumulate gradients for 24 samples) Optimizer: Rectified Adam Models: Unet (efficientnet-b3), FPN (efficientnet-b3) from <a href="/pavel92">@pavel92</a> segmentation_models_pytorch Loss: BCE (with pos_weight = (2.0,2.0,1.0,1.5)) 0.75*BCE+0.25*DICE (with pos_weight = (2.0,2.0,1.0,1.5)) Model Ensemble: 1 x Unet (BCE loss) + 3 x FPN (first trained with BCE loss, then finetuned with BCEDice loss) + 2 x FPN (BCE loss) + 3 x Unet from the mlcomp+catalyst infer kernel TTA: None, Hflip, Vflip Label Thresholds: 0.7, 0.7, 0.6, 0.6 Pixel Thresholds: 0.55, 0.55, 0.55, 0.55 Postprocessing: Remove the whole mask if total pixels &lt; threshold (600, 600, 900, 2000) + remove small components with size &lt; 150</p> <p><strong>Pseudo Label</strong> We did 2 rounds of pseudo labels in this competition. The first round was generated from a submission with 0.916 public LB; maybe that was too early? The second round was done several days before the end of the competition, generated from a submission with 0.91985 public LB. With pseudo labels and public models, we finally improved from 0.91985 to 0.92124 on public LB and from 0.90663 to 0.90883 on private LB. A pseudo label is chosen only if the classifiers and the segmentation networks make the same decision. We got this idea from Heng. An image is only chosen if the probabilities from the classifiers are all over 0.95 or below 0.05 and the segmentation part gives the same result. According to this rule, 1135 images were chosen and added to the train set.</p> <p><strong>Predictions on public LB:</strong> Defect 1: 97 (128) Defect 2: 2 (43) Defect 3: 611 (741) Defect 4: 110 (120) Sum Pos: 820 (1032)</p>
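<p>To make the defect blackout augmentation concrete, below is a minimal illustrative sketch of how such an augmentation could be written. This is not the team's actual code; the OpenCV connected-component extraction and the 50% per-component drop probability are assumptions for illustration.</p> <pre><code>import cv2
import numpy as np

def defect_blackout(image, mask, p_component=0.5, rng=np.random):
    """Randomly black out connected defect components of a steel image.

    image: (H, W, 3) uint8, mask: (H, W, C) binary masks, one channel per defect class.
    Returns the augmented image, mask and per-class classification labels.
    """
    image, mask = image.copy(), mask.copy()
    for c in range(mask.shape[2]):
        n, labels = cv2.connectedComponents(mask[:, :, c].astype(np.uint8))
        for comp in range(1, n):
            if rng.rand() &lt; p_component:
                comp_pixels = labels == comp
                image[comp_pixels] = 0      # black out the defect area in the image
                mask[comp_pixels, c] = 0    # and remove it from the mask
    # the image label flips from 1 to 0 for a class when all of its components are blacked out
    class_labels = (mask.sum(axis=(0, 1)) &gt; 0).astype(np.float32)
    return image, mask, class_labels
</code></pre> <p>Because the classification target is recomputed from the remaining mask, the classifier sees realistic "defect removed" negatives, which is why such an augmentation can help on local CV and the public LB.</p>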
Understanding Clouds from Satellite Images
143rd place (bronze) solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Code and more detail: <a href="https://github.com/btrotta/kaggle-clouds">https://github.com/btrotta/kaggle-clouds</a></h2> <h2>Pre-processing the images</h2> <p>I worked with grayscale images shrunk to 25% of the original size.</p> <p>I got a large boost in model accuracy from filtering out the over-exposed areas in the images. Below is a sample image before and after correction (I also changed the missing area to grey).</p> <p><img src="https://raw.githubusercontent.com/btrotta/kaggle-clouds/master/img/before_after.png" alt="Before and after correction"></p> <h2>Model</h2> <p>I used a blend of efficientnet-b4 and efficientnet-b5, both pre-trained, from this library: <a href="https://github.com/qubvel/segmentation_models">https://github.com/qubvel/segmentation_models</a>. I trained for 10 epochs with the encoder layers frozen, then fine-tuned the whole model for 10 epochs with a lower learning rate. I did horizontal and vertical flip augmentation; I tried others but found they didn't help.</p> <h2>Post-processing the model predictions</h2> <p>The key to post-processing is to observe that the dice metric is not continuous: if a class doesn't exist in an image, there is a huge difference between predicting 1 pixel (dice score 0) and predicting 0 pixels (dice score 1). So, to decide whether to make a non-zero prediction, we need to estimate two things: the probability that the class exists in the image, and the expected dice score given that the class does exist. Then we can calculate the expected dice score for a zero and a non-zero prediction, and choose between them accordingly. I built very simple models for these, all just using a single variable: the 95th percentile of the predicted class probabilities for each image. </p> <p>I didn't attempt to reshape the predicted areas into rectangles or polygons, as in some published kernels. I also didn't enforce a minimum predicted area. My hypothesis is that this information is already built into the neural network predictions, and that this is why augmentations that change the size or shape of the masks (e.g. skew, rotation, zoom) give poor results.</p>
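<p>The expected-dice argument above can be turned into a small decision rule. The sketch below assumes that, per image and class, you already have an estimated probability that the class is present (p_exists) and an estimated dice score that a non-empty prediction would achieve if it is present (d_pos); both can come from simple models on the 95th-percentile feature as described. The names are illustrative, not taken from the repository.</p> <pre><code>def choose_prediction(p_exists, d_pos):
    """Return True if a non-empty mask is expected to outscore an empty one.

    An empty prediction scores 1 when the class is absent and 0 when present.
    A non-empty prediction scores roughly d_pos when present and 0 when absent.
    """
    expected_empty = 1.0 - p_exists
    expected_nonempty = p_exists * d_pos
    return expected_nonempty &gt; expected_empty

# class probably present and segmentation usually decent: keep the mask
print(choose_prediction(p_exists=0.7, d_pos=0.6))  # True
print(choose_prediction(p_exists=0.3, d_pos=0.6))  # False
</code></pre>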
Open Images 2019 - Instance Segmentation
20th place solution - maskrcnn-benchmark baseline
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Open Images and Kaggle team for this great competition(s) and congrats to all (tentative) prize and medal winners!</p> <p>My result is not outstanding but the solution might be valuable to be shared because I used the famous maskrcnn-benchmark library 'as it is' and also used its outputs as it is without TTA or any post processing. Training two models requires only 14 hours (x2) using V100 8GPUs.</p> <p>All codes are available at: <a href="https://github.com/yu4u/kaggle-open-images-2019-instance-segmentation" target="_blank">https://github.com/yu4u/kaggle-open-images-2019-instance-segmentation</a></p> <p>There are mainly two issues to be solved in this competition and the Object Detection track: (1) class imbalance and (2) class hierarchy. I tackled these issues only on a dataset creation side. The former is easy to handle: use fixed number of training images for each class. In this post, I mainly describe how to handle class hierarchy.</p> <p>Firstly, I divided all classes into two groups: layer0 and layer1. From challenge-2019-label300-segmentable-hierarchy.json we can see that:</p> <ol> <li>Maximum depth of hierarchy is 2 (starting from 0)</li> <li>The number of depth 2 classes is only 5.</li> </ol> <pre><code>Carnivore └── Bear ├── Brown bear ├── Polar bear └── Teddy bear &lt;--- Are you serious? Reptile └── Turtle ├── Tortoise └── Sea turtle </code></pre> <p>Thus, I decided to group depth 0 classes as layer0 group and depth 1 and 2 classes together as layer1 group. The idea is to make different model for each of two groups.<br> In training each model, a dedicated dataset is used, which includes only the target group class instances. By doing so, there is no need to care about class hierarchy.<br> However, practically, it is impossible to make dataset from only training images that includes only target classes and does not include non-target classes. Therefore, I removed non-target class instances from training images.</p> <p>For layer0 group dataset:</p> <ol> <li>Remove non-target class annotations that occlude target class object 25% or more</li> <li>Convert non-target class to its parent class (Thus it becomes target class. Some classes need to be processed twice. 'Teddy bear' is converted only to 'Toy', not 'Carnivore')</li> </ol> <p>For layer1 group dataset:</p> <ol> <li>Remove non-target class annotations that occlude target class object 25% or more</li> <li>Remove non-target class annotations that do not have any child class (no impact to layer1 group classes because there is no relationship between them)</li> <li>Remove non-target class annotations that have some child classes, and fill their bbox with gray in the training image (removing only annotations is not good idea because these cause 'false false positive' signal (loss) to the model)</li> </ol> <p>That's all, and let's train!</p>
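<p>As a rough illustration of step 2 for the layer0 dataset (converting non-target classes to their parents), one could walk the hierarchy file and build a child-to-parent lookup. This is a hedged sketch that assumes the nested "Subcategory"/"LabelName" structure of challenge-2019-label300-segmentable-hierarchy.json; it is not the code from the shared repository, and special cases with two parents (like 'Teddy bear', which should map to 'Toy') would need manual handling.</p> <pre><code>import json

def build_child_to_parent(hierarchy_file):
    """Map each LabelName to one direct parent LabelName."""
    with open(hierarchy_file) as f:
        root = json.load(f)
    child_to_parent = {}

    def walk(node):
        for child in node.get("Subcategory", []):
            child_to_parent[child["LabelName"]] = node["LabelName"]
            walk(child)

    walk(root)
    return child_to_parent

def to_layer0(label, child_to_parent, layer0_classes):
    """Convert a deeper class to its ancestor in the layer0 group."""
    while label not in layer0_classes and label in child_to_parent:
        label = child_to_parent[label]
    return label
</code></pre>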
Open Images 2019 - Instance Segmentation
11th place solution [0.4796 private LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank the competition organizers and all the competitors! </p> <p>Here's my brief solution writeup:</p> <h2>1. Dataset</h2> <ul> <li>No external dataset. I only use FAIR's ImageNet pretrained weights for initialization, as I have described in the Official External Data Thread.</li> <li>Class balancing. For each class, images are sampled so that the probability of having at least one instance of the class is equal (1/300) across all 300 classes. One instance is randomly picked from an image to train the segmentation network described below.</li> </ul> <h2>2. Pipeline and Models</h2> <p>A two-stage pipeline with detection and single-instance segmentation networks is employed. - Detection Model. The detection baseline model is a Feature Pyramid Network with a ResNeXt152 backbone and modulated deformable convolution layers (see <a href="https://www.kaggle.com/c/open-images-2019-object-detection/discussion/110953">my post at the detection track</a>). </p> <ul> <li>Segmentation Model. The segmentation model is ResNet152-C4 with two upsampling layers and two U-net-like skip connections. </li> </ul> <p>Each instance is cropped from the image based on: 1) At training time: the ground truth bounding boxes. 2) At inference time: the bounding boxes detected by the (ensembled) detection model, including the parent classes. The cropped images are resized to (320, 320). The output mask resolution is (160, 160).</p> <p>The models and training pipeline are developed based on the maskrcnn-benchmark repo.</p> <h2>3. Training</h2> <p>The training conditions are optimized for a single GPU (V100).</p> <ul> <li><p>Detection Model. The detection model has been trained using 500-class box labels and eight models are ensembled (0.597 private LB on the object detection track).</p></li> <li><p>Segmentation Model. The segmentation model has been trained for 1.8 million iterations and cosine decay is scheduled for the last 0.2 million iterations. Batchsize is 8 and batchnorm layers are used.</p></li> </ul> <h2>4. Ensembling</h2> <ul> <li>Two-model ensembling. Two segmentation models with different image sampling seeds are ensembled with and without horizontal flip. The output heatmaps are averaged.</li> <li>Results. Model ensembling improved the private LB score from 0.4740 (single segmentation model) to 0.4796.</li> </ul>
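<p>The crop-and-resize step of this two-stage pipeline is easy to reproduce. Below is a minimal sketch (an assumed interface, not the author's code) of cutting each detected box out of the image and resizing it to the 320x320 input of the single-instance segmentation network; the predicted 160x160 mask is then resized back and pasted into the original box.</p> <pre><code>import cv2
import numpy as np

def crop_instance(image, box, crop_size=320):
    """Crop one detected instance (box is x1, y1, x2, y2 in pixels) and resize it."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    x1, y1 = max(x1, 0), max(y1, 0)
    x2, y2 = min(x2, image.shape[1]), min(y2, image.shape[0])
    crop = image[y1:y2, x1:x2]
    return cv2.resize(crop, (crop_size, crop_size), interpolation=cv2.INTER_LINEAR)

def paste_mask(mask_160, box, image_shape, threshold=0.5):
    """Resize the predicted low-resolution mask back into the original box."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    full = np.zeros(image_shape[:2], dtype=np.uint8)
    resized = cv2.resize(mask_160, (x2 - x1, y2 - y1))
    full[y1:y2, x1:x2] = (resized &gt; threshold).astype(np.uint8)
    return full
</code></pre>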
Open Images 2019 - Object Detection
TPU solution: RetinaNet + NAS-FPN + ResNet/EfficientNet, code shared [24th place]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>TL;DR: TPUs actually work. They are fast (10-100 times of 1080Ti). There are limitations that make porting difficult. My fixed code and models are at <a href="https://github.com/artyompal/tpu_models">https://github.com/artyompal/tpu_models</a>.</p> <hr> <h1>TPU</h1> <p>I started this competition quite late. I had 1 month, which is not enough to train a lot of models of GPU. I had some TPU credits, so I decided to give it a try.</p> <p>I forked TPU models repo and trained RetinaNet + FPN + ResNet50. I trained the model like in this tutorial: <a href="https://cloud.google.com/tpu/docs/tutorials/retinanet">https://cloud.google.com/tpu/docs/tutorials/retinanet</a> and it actually worked. To achieve this, I had to convert my data into TFRecords and also convert my validation data to a COCO JSON file. I realized that our metric (AP@50) is just an ordinary COCO/PASCAL VOC AP@50 metric, as long as there is no hierarchy. So training just worked out of the box.</p> <p>What's TPU? It's a matrix chip. It's a stand-alone device in Google's local network which connects to your cloud instance as TensorFlow master. It has a lot of HBM memory (I used batch 64 with images up to 1024). It has no HDD and it can't connect to your SSD, so it must use Google Cloud Storage for input (images and labels) and output (weights and TensorBoard logs). I had v2x8 and v3x8 TPUs; 3rd generation is ~50% faster.</p> <p>RetinaNet+ResNet50, batch size 64, images 640x640 processes 250 samples per second on TPUv2. On TPUv3, it processes 350 samples per second: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1859557%2F0373ce32c12fbd7aa0b29ba5ee9b288b%2Fperformance.png?generation=1570020130759200&amp;alt=media" alt=""></p> <p>One issue about TPUs is, they currently require a completely static TF graph. So you have to resize every image to a fixed resolution. It's a limitation of the current implementation (TensorFlow 1.14). Another issue is, image channels are swapped from NHWC to HWCN. Another thing, it doesn't support some TF operations. And you would need special pretrained weight for TPU, normal TF models will not work.</p> <p>Oh yeah, the main issue is, one has to use TPUEstimator to train models. tf.Estimators are weird pieces of software which by default erase all of your checkpoints but last five, completely ignoring your validation results. Every Deep Learning framework supports validation score-based ReduceLROnPlateau, but not tf.Estimators. If only we had PyTorch/XLA working... (I didn't try it)</p> <h1>Training</h1> <p>Naive training on 443 leaf classes resulted in 0.283 on the public leaderboard. Following last year solutions, I split dataset classes into 6 parts by frequency: 1. Classes 1-100 2. Classes 101-200 3. Classes 201-300 4. Classes 301-400 5. Classes 401-432 6. Human parts (11 classes), as they have a special dataset.</p> <p>I trained 6 models by parts and got a baseline which resulted in 0.48 on the public leaderboard. We teamed up with Dmitry, who had collected 5 pretrained models from different sources and got a script with Soft-NMS blending. 
This gave us 0.54 on the public LB. I also trained this model by 5 folds, which resulted in 0.51 LB by itself and 0.55 in the ensemble. I did it because I wanted to do pseudo-labeling for some data cleaning, but ran out of time to finish this.</p> <h1>Models</h1> <p>TPU models repo has out-of-the-box support of RetinaNet with NAS-FPN (<a href="https://arxiv.org/pdf/1904.07392.pdf">https://arxiv.org/pdf/1904.07392.pdf</a>) with AutoAugment augmentations (<a href="https://arxiv.org/pdf/1805.09501.pdf">https://arxiv.org/pdf/1805.09501.pdf</a>) and DropBlock regularization for both FPN part and backbone (<a href="https://arxiv.org/pdf/1810.12890.pdf">https://arxiv.org/pdf/1810.12890.pdf</a>). By default, RetinaNet only supports ResNet backbone, but I added support of EfficientNet. I also tried adding SE-ResNeXt backbone, but it worked too slow, maybe because I didn't transpose channels to HWCN. Also, I recall that I had a problem with TPUs not supporting REFLECT padding: <a href="https://cloud.google.com/tpu/docs/tensorflow-ops">https://cloud.google.com/tpu/docs/tensorflow-ops</a>.</p> <p>Of the ResNet family, I only had pretrained weights for ResNet50. Papers above claim that with some combinations of AutoAugment and DropBlock, RetinaNet trains even better without a pretrained backbone! This wasn't the case for me, though. Anyway, I had a lot of horsepowers, so I downloaded ImageNet and made my own pretrained models for ResNet101, ResNet152, and ResNet200.</p> <p>Long story short: NAS-FPN is great, DropBlock really helps, and AutoAugment augmentations do improve model performance.</p> <h1>Data labeling</h1> <p>Data labels are really noisy. I manually selected 5 labels per class for the validation set. I made a blacklist of labels which are clearly wrong: i.e. the label is "cello", but it's actually a violin.</p> <h1>Last day inference and final ensemble construction</h1> <p>We had seriously run out of time. I had to write and fix the distributed inference script in the last two days of the competition. This could not end well, and it didn't. I trained models with ResNet152/200 backbones, but we couldn't make a proper ensemble.</p> <p>Oh yeah, the last night before the deadline I got an error with inference with EfficientNet models: <a href="https://github.com/tensorflow/tensorflow/issues/29574">https://github.com/tensorflow/tensorflow/issues/29574</a>. So we could not use any of my EfficientNet models in the final ensemble. I probably could work around this issue by hacking TPUEstimator, but... time is money, you know :)</p> <h1>Conclusions</h1> <p>Well, it was fun! TPUs are fun. As long as someone else is paying. We spent about $1600 worth of GCP credits, and I paid $75 out of my pocket on top of that.</p> <p>I'm sharing my forked TPU repo here: <a href="https://github.com/artyompal/tpu_models">https://github.com/artyompal/tpu_models</a>. I'm also going to upload my pretrained weights for TPU models. I'm editing the repo right now, please let me know if something is missing in the README.md.</p> <hr> <p>Thanks to <a href="/dsvolkov">@dsvolkov</a> for being my teammate and for all that help with ensembling, debugging and diagnostics!</p>
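<p>To mirror the split of classes into six groups by frequency described above, one can count annotations per class and slice the sorted list. This is a minimal pandas sketch under assumed column names (LabelName for the class id); it is not the exact preprocessing code in the shared repository.</p> <pre><code>import pandas as pd

def split_classes_by_frequency(annotations_csv, human_part_labels):
    """Split classes into frequency-ranked groups plus a separate human-parts group."""
    ann = pd.read_csv(annotations_csv)
    counts = ann["LabelName"].value_counts()      # most frequent classes first
    classes = [c for c in counts.index if c not in human_part_labels]
    return [
        classes[0:100],      # classes 1-100
        classes[100:200],    # classes 101-200
        classes[200:300],    # classes 201-300
        classes[300:400],    # classes 301-400
        classes[400:],       # remaining classes
        list(human_part_labels),
    ]
</code></pre>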
Severstal: Steel Defect Detection
14th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This time, I was trying to be more methodological instead of randomly picking up ideas from papers. I spent more time thinking and digging into why my current model was doing badly before training a new model. That, plus luck, got me my first gold.</p> <h1>Data augmentation and pre-processing</h1> <p>I used <code>albumentations</code> to do data augmentation:</p> <pre><code>def aug_medium(prob=1):
    return aug.Compose([
        aug.Flip(),
        aug.OneOf([
            aug.CLAHE(clip_limit=2, p=.5),
            aug.IAASharpen(p=.25),
        ], p=0.35),
        aug.OneOf([
            aug.RandomContrast(),
            aug.RandomGamma(),
            aug.RandomBrightness(),
        ], p=0.3),
        aug.OneOf([
            aug.ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
            aug.GridDistortion(),
            aug.OpticalDistortion(distort_limit=2, shift_limit=0.5),
        ], p=0.3),
        aug.ShiftScaleRotate(rotate_limit=12),
        aug.OneOf([
            aug.GaussNoise(p=.35),
            SaltPepperNoise(level_limit=0.0002, p=.7),
            aug.ISONoise(p=.7),
        ], p=.5),
        aug.Cutout(num_holes=3, p=.25),
    ], p=prob)
</code></pre> <p>Then I took 256x512 crops from the augmented image. Note that many images contain a large black area, and there is always a steel part visible at one end of the image at least. Simple random crops would make training inefficient, so here is what I did: 1. Pick a random crop. 2. If the proportion of pixels with values &lt; 10 is greater than 85%, or the average pixel value of the crop is &lt; 15, pick the crop on either the left or the right end of the image, depending on which end satisfies the criterion. If nothing works, pick the crop on the right end.</p> <h1>Balanced Sampling</h1> <p>To fight against imbalanced classes, I used balanced sampling. I don't like the approach of randomly picking a defect type (or non-defect) with equal probability; it would make the concept of an epoch arbitrary. I prefer a deterministic approach, so in each epoch I did something like <code>0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 ...</code> until every single non-defect (class 0) image was sampled once.</p> <h1>Model</h1> <h2>Model 1:</h2> <p>End-to-end classification + segmentation asymmetric U-Net on the training set only. Backbone: se-resnext50 32x4d Decoder: CBAM attention and hyper-columns. I thought that the last decoder (and maybe the upsample in the second-to-last decoder as well) was kind of redundant, so I removed it. In this way, the output stride of my model became 2. This saved a lot of my GPU memory. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1605499%2F6ea80511606b3aeb6f597b7e012da580%2FSeverstal%20Model.png?generation=1572029254979627&amp;alt=media" alt=""></p> <p>I used a similar setup in the TGS salt competition too. Note that, unlike the other classes, I fed all images to the defect 3 branch instead of defect 3 images only.</p> <h2>Model 2:</h2> <p>Same as model 1, except using a deep stem and replacing every ReLU with Mish. Trained on the training set and pseudo-labels on the public test set.</p> <h1>Training</h1> <p>Loss: BCE loss for classification, symmetric Lovasz-Hinge for segmentation Optimizer: Adam Scheduler: Warmup+Flat+Cosine. 0.5 epoch linear warmup, 49.5 epochs flat at 1e-4, and 50 epochs cosine to 0 Batch size: 6 No fine-tuning on full images</p> <h1>Post-processing</h1> <p>Thresholds: since I was using hinge loss, I didn't tune segmentation thresholds Classification: [0.55, 0.99, 0.25, 0.5] Minimum pixels: [0, 0, 1200, 0] These values were determined by one fold of model 1. After the competition ended, I tried 0.5 for classification and 0 for minimum pixels for all four defects. It gave me 0.90625 on the private test set.</p> <p>TTA: original, hflip and vflip</p> <p>The masks are actually polygonal bounding boxes, which means there are no holes in any part of a defect. I randomly saw <a href="https://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html">this page</a> and decided to give the <strong>Edge-based segmentation</strong> from that page a try. This post-processing consistently gave me a 0.00005-0.0001 boost on local validation, public test and private test.</p> <h1>Pseudo-labelling</h1> <p>Obtained pseudo-labels from a 3-fold ensemble of model 1. Removed defect 1 with classification output &lt; 0.85 and defect 3 with classification output &lt; 0.75. I kept the amount of test images at around 35% of the train images for each epoch.</p> <h1>Ensemble</h1> <p>2 folds of model 1 and 1 fold of model 2. Simple average before thresholding. My best single fold of model 1 gave me 0.90310 on the private test set.</p> <h1>Things that didn't work</h1> <p>See <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/114199#657496">this post</a></p> <p>Congratulations to all winners! P.S. I am not releasing my code at this moment since the efficiency prize is still ongoing.</p>
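<p>The deterministic balanced sampling scheme (cycling 0 1 2 3 4 until the non-defect images are exhausted) can be expressed as a simple index generator. This is a sketch of the idea rather than the author's sampler; the per-class index lists and the modular oversampling of the rarer defect classes are assumptions.</p> <pre><code>import numpy as np

def balanced_epoch_indices(class_to_indices, rng=np.random):
    """Interleave samples as 0 1 2 3 4 0 1 2 3 4 ... until class 0 is used exactly once.

    class_to_indices maps class id (0 = no defect, 1-4 = defect types) to dataset indices.
    Smaller classes are oversampled by cycling through their shuffled index list.
    """
    for cls in class_to_indices:
        rng.shuffle(class_to_indices[cls])
    order = []
    n_rounds = len(class_to_indices[0])        # every non-defect image sampled once per epoch
    for i in range(n_rounds):
        for cls in sorted(class_to_indices):   # 0, 1, 2, 3, 4
            pool = class_to_indices[cls]
            order.append(pool[i % len(pool)])
    return order
</code></pre>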
APTOS 2019 Blindness Detection
113th place solution - some notes from the kaggle freshman
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Model and input size</strong> EfficientNet-B5(456 x 456) number of output: 5 (classification problem)</p> <p><strong>Preprocessing</strong> 1. crop from gray (thanks to <a href="/ahoukang">@ahoukang</a> from the <a href="https://www.kaggle.com/ahoukang/aptos-vote">post</a> 2. Replace both blue and red channel with green channel and apply CLAHE (thanks to <a href="/bibek777">@bibek777</a> from the <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/102613#latest-621678">post</a></p> <p><strong>Augmentation</strong> - crop resize (thanks to <a href="/jeffreydf">@jeffreydf</a> from the <a href="https://github.com/JeffreyDF/kaggle_diabetic_retinopathy">source code</a> for the fifth place for old DR competition - color jitter - horrizontal/vertical flip - rotation - zoom-in (thanks to <a href="/jeffreydf">@jeffreydf</a> from the <a href="https://github.com/JeffreyDF/kaggle_diabetic_retinopathy">source code</a> for the fifth place for old DR competition</p> <p><strong>Training Process</strong> I leveraged two datasets, 2015 (35 k) and 2019 (3.6 k) DR dataset for the training process. The first model was trained from scratch using 2019 DR dataset (~100 epochs), and the second model can be divided into two stages. In the first stage, I pre-trained the model on *full DR dataset(~100 epochs), setting 2019 DR dataset as validation set. In the second stage, I fine-tuned the model only on 2019 DR dataset(~20 epochs).</p> <p><strong>Testing Process</strong> Test Time Augmentation (five random combinations of flip, rotation, zoom-in ) was adapted for each model. For the first and the second model, we got LB 0.808 and LB 0.798 respectively. Finally, we ensemble two models by averaging the prediction and got LB 0.817.</p> <p><strong>Note:</strong> 1. Since I found that the first model was pretty accurate on class 0 (normal) than the other classes, so I only combined 2015 class 1-4 data with 2019 data as pre-trained dataset. It brought the benefit that costing less time for training the second model as well. 2. I have tried to use only the 2015 data as training data after deadline. In comparison with the full data(2015 + 2019), although the public LB is higher, the private LB is lower. 3. Special thanks to <a href="/kirayue">@kirayue</a> and <a href="/hyc1993">@hyc1993</a> for all the discussions and suggestions, you guys always helped me a lot:)</p> <p>Dealing with real world dataset is so challenging also exciting for me and I truly learnt a lot from the competition, especially resource like those awesome discussions and kernels from the great kagglers, I do very appreciate their sharing and hopefully I can share my strength one day. Thank you so much!!!</p>
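<p>A minimal OpenCV sketch of the second preprocessing step (replacing the blue and red channels with the green channel and applying CLAHE), assuming a BGR image as loaded by cv2. It illustrates the idea rather than reproducing the referenced kernel.</p> <pre><code>import cv2
import numpy as np

def green_channel_clahe(image_bgr, clip_limit=2.0, tile_grid_size=(8, 8)):
    """Keep only the green channel (most informative for fundus images) and equalize it."""
    green = image_bgr[:, :, 1]
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    green = clahe.apply(green)
    # replicate the processed green channel so the network still receives 3 channels
    return np.stack([green, green, green], axis=-1)
</code></pre>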
IEEE-CIS Fraud Detection
Solution and code from 21st
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The 20+ teams don't post solutions usually. But something happened make me feel it is necessary to do this. <br> I wanna say I don't know if the the shakeup is a coincidence or not. I post my track about this competition to help someone who wanna learn about this competition. People may have different opinions about whole this mess. You cound ask about the solution or code with comments. <strong>Plz just dont argue here.</strong></p> <p>All my track files are in this github:<a href="https://github.com/white-bird/kaggle-ieee">https://github.com/white-bird/kaggle-ieee</a> <br> if you wanna only run the best model, you should run: <br> f3_deepwalk.py, <br> fd_key.py, <br> fe_V307.py, <br> fi_yu.py, <br> model26.ipynb, <br> model32.ipynb, <br> <a href="https://www.kaggle.com/whitebird/ieee-internal-blend">https://www.kaggle.com/whitebird/ieee-internal-blend</a> </p> <p>LB 9500-9520 : I spent most of my time at here while I try to dig the count/mean/std features which didn't work. </p> <p>LB 9520-9580 : I realized the bad guys stole the cards and make transactions for money, but cards always have some protects, like the biggest amount for one transaction. So they need to have many similar transactions on one card in a lone period or many cards in a short time. That's the keypoint of this competition ----- the series samples make it fraud, not single sample. We need to find some "keys" to group the data: </p> <p>1) V307. There are too many V features. Some are int and some are floats. It's not hard to find out that int means the times this card have transactions with same website/seller, and float means the accumulated amount. Obviously, int + cardid may casue misjudge easily. If you have some baseline models, I recommend you the lib eli5 to find which feature is most important, which leads me to the V307. You can find these eda at model14.ipynb. I use the fe_V307.py to process the feature. </p> <p>2).deviceinfo &amp; id. Different cards have same amt in same addr with same device. is it strange? So I use fd_key.py to extract them. </p> <p>3). cardid + D. My teammates found this. All people knows the D features minus days mean a lot. We find the D2 and D15 run through the time best by max all the data, while D2 and D15 has the biggest value. fi_yu.py </p> <p>4). amt + days + addr1. It is simple but easy to misjudge. </p> <p>LB 9590-9600: So we all know the fraud sample is fraud because its similar samples is fraud. Why not let the infect of fraud more crazy? Making a two-stage models improve 0.001:model26.ipynb + model32.ipynb </p> <p>LB 9600-9630: This is caused by a bug. I grouped the keys above and get big improvments offline. However, there is only one key, cardid + D, behave badly online. I used 2~3 days to find out that I grouped them with train and test separately. It make improvments online when I grouped the key with all data. It means the key is not working as other keys to make group features but as a embedding key. Then I wrote some rules to process results with kernels. 
It's easy to understand but make huge boost:<a href="https://www.kaggle.com/whitebird/ieee-internal-blend?scriptVersionId=21198581">https://www.kaggle.com/whitebird/ieee-internal-blend?scriptVersionId=21198581</a> </p> <p>And there are other small improvments I don't mention. Post here if you have any question while reading/running my code. </p>
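<p>To make the "cardid + D" key idea concrete: the D columns count days since some card event, so subtracting the transaction day gives a value that stays constant for a card over time and can be combined with the card features into a user key. The sketch below is a common way to build such a key on this dataset and is an assumption about the details, not the exact code from the repository (card1, addr1, D2 and TransactionDT are columns of the competition data).</p> <pre><code>import pandas as pd

def add_uid(df):
    """Build a proxy user id from card/address features and a normalized D column."""
    day = df["TransactionDT"] // (24 * 60 * 60)   # TransactionDT is given in seconds
    d2_norm = day - df["D2"]                      # roughly constant per card if D2 counts days
    df["uid"] = (
        df["card1"].astype(str) + "_"
        + df["addr1"].fillna(-1).astype(int).astype(str) + "_"
        + d2_norm.fillna(-999).astype(int).astype(str)
    )
    return df

# as the write-up notes, the key must be built on train and test together,
# otherwise the groups do not line up across the two sets:
# full = add_uid(pd.concat([train, test], axis=0))
</code></pre>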
Severstal: Steel Defect Detection
Efficiency Prize Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Efficiency prize solution is EfficientNet. It is a single EfficientNet-B0 with FPN architecture. I am using <a href="https://github.com/qubvel/segmentation_models.pytorch">https://github.com/qubvel/segmentation_models.pytorch</a> implementation. Thanks to @pavel92 for the great library.</p> <p>The main points are:</p> <ul> <li><p>additional head having encoder features as input is used for classification. It predicts probabilities for each of the four defect classes. No separate classification network is used.</p></li> <li><p>training was performed on single 80 / 20 split in three stages:</p> <ol><li><p>train with loss combined of classification binary cross-entropy loss and segmentation cross-entropy loss</p></li> <li><p>added test images and predicted masks from my best submission to the train data. Added Dice loss as a component to the loss and continue training on the checkpoint from stage 1</p></li> <li><p>added validation images and masks to the train data and continue training for a few more epochs. Again the loss consists of 3 components: classification BCE loss, segmentation CE loss and Dice loss</p></li></ol></li> <li><p>for the final model averaged the weights of 5 checkpoints from stage 3</p></li> <li><p>training on 256x512 crops</p></li> <li><p>no test-time augmentation is applied</p></li> <li><p>if the classification head predicts no defect then no decoding is performed. No threshold tuning for classification has been done – just using 0.5</p></li> <li><p>TorchScript tracing is applied</p></li> </ul> <p>Speed improvements are coming from not using a separate classifier network and not running the decoder when the classification head predicts no defect.</p> <p>Many thanks to @lightforever for the Catalyst framework - it is awesome and helped me to experiment quickly in this competition.</p>
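<p>The two speed tricks (a classification head on the encoder instead of a separate classifier, and skipping mask decoding when that head predicts no defect) map naturally onto the aux_params option of segmentation_models.pytorch. The snippet below is a hedged sketch of that setup; the dropout value and the way rejected masks are zeroed out are assumptions, not the prize winner's exact configuration (in the real pipeline the decoder is simply not run for rejected images).</p> <pre><code>import torch
import segmentation_models_pytorch as smp

model = smp.FPN(
    "efficientnet-b0",
    classes=4,
    activation=None,
    aux_params=dict(classes=4, dropout=0.2, activation=None),  # classification head on encoder features
)

@torch.no_grad()
def predict(model, images, cls_threshold=0.5):
    mask_logits, cls_logits = model(images)   # with aux_params the model returns both outputs
    cls_prob = torch.sigmoid(cls_logits)
    masks = torch.sigmoid(mask_logits)
    keep = (cls_prob &gt; cls_threshold).float()[:, :, None, None]
    return masks * keep                       # rejected classes end up as empty masks
</code></pre>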
Severstal: Steel Defect Detection
8th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners, and thanks to Kaggle, the competition sponsor and the kernel contributors who shared their insights; it helped us a lot.</p> <p>Models We ensembled FPN-B0, B1, B2, B3, B4, Seresnext50; UNET-Seresnext50, Resnet34; Custom Attention Unet-B0, B1.</p> <p>Augmentations Only flipping and Random Brightness, Random Gamma, Random Contrast.</p> <p>Training Progressive learning was our primary approach, starting from 256x256 up to the complete size. The 256x256 models were trained with the encoder frozen for faster convergence - only the batchnorm layers of the encoder were unfrozen. In the end, some of the models were over-fitted on the complete train data (stopped early). </p> <p>Post-Training Used triple thresholding (the magic element) and only flipping as TTA.</p>
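<p>The "encoder frozen except batchnorm" trick used in the 256x256 stage takes only a few lines of PyTorch. A minimal sketch, assuming a model with an .encoder attribute as in segmentation_models.pytorch; it is not the team's actual training code.</p> <pre><code>import torch.nn as nn

def freeze_encoder_except_bn(model):
    """Freeze encoder weights for faster convergence but keep BatchNorm layers trainable."""
    for module in model.encoder.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            for p in module.parameters():
                p.requires_grad = True
        else:
            for p in module.parameters(recurse=False):
                p.requires_grad = False
</code></pre>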
IEEE-CIS Fraud Detection
6th place solution: The Zoo
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to Kaggle and IEEE to hosting this competition, I hope that we will have another tabular competition soon enough. Big shout out goes to all my teammates <a href="/dott1718">@dott1718</a>, <a href="/christofhenkel">@christofhenkel</a>, <a href="/ebouteillon">@ebouteillon</a>, and <a href="/maxjeblick">@maxjeblick</a> it was really a pleasure to working with them.</p> <p>From reading the solutions that have already been posted and from expecting what others did, I am pretty sure that most did (similar as in Santander) a very similar thing with some fine-grained tricks. Our solution has maybe 2-3 tricks which might be different to what others did. I am a bit tired, so sorry when not all things are super precise :)</p> <p><strong>User IDs</strong></p> <p>As everyone should know by now, identifying users is the key to achieving a good score. We did a lot of trial and error here, and <a href="/ebouteillon">@ebouteillon</a> and <a href="/dott1718">@dott1718</a> were monumental in our identification of users. When we merged we combined our two approaches and in the end <a href="/dott1718">@dott1718</a> crafted a very elegant and well-working solution for user identification. You can find the script of how to do it here (let me call those ids uid6 from now on): <a href="https://www.kaggle.com/dott1718/ieee-userid-proxy">https://www.kaggle.com/dott1718/ieee-userid-proxy</a></p> <p>Our models used different variations of the user id, with some simpler forms that have much less unique values ranging up to the ids posted before, which are very restrictive but also the most precise ones. </p> <p>However, now comes the bit more trick part of choosing the CV setup. There are different ways of doing it, and there are different types of leaks you then introduce in your CV setup. For example, one way is to do monthly time-split leak which is also what we chose to use in the end as imho this CV kind-of represents test data the closest. We chose to look mostly at average CV across folds. The important fact here is though that you have ID overlaps between folds and you have more overlaps the more folds you have around your validation fold. IDs overlap way more in closer time proximity. So for example, Fold0 (Christmas) and the last fold perform worse than those folds in the middle, because they have more IDs overlapping to the training folds.</p> <p>What is then also important is to understand how this overlap will behave in test:</p> <p><img src="https://i.imgur.com/WmSVsf9.png" alt=""></p> <p>Here you see different IDs (let’s focus on UID6) and on the x-axis the different months of test. The y-axis is the percentage of unique IDs in this month that we also observe in train. As the performance of the models is way better on overlapping ids compared to non-overlapping ones, we could expect private LB to have much lower score than public LB. 
This also means that it is more important on private LB to have a well working model for non-overlapping customers that you have never observed before in training.</p> <p>As a result of this, we decided to split our efforts into both developing strong models for overlapping IDs and non-overlappping IDs. We created for both our CV setup as well for test two indices which both mark the overlaps, and the non-overlaps. For example, in a validation fold those transactions where we observe the ID in the other folds would be marked as overlap, and all others as non-overlap.</p> <p><strong>Features</strong></p> <p>To be honest, this is probably our weak point as we did not spend too much time on this. Maybe we could further improve our results with better features, but I don’t know. We use a lot of count encoded features (also the uids), as well as other aggregated features. We also introduce aggregated features on UID level. Overall this should not be too different compared to some public kernels.</p> <p><strong>Models</strong></p> <p>We utilize three types of models: LGB, Catboost, and Neural Networks all fitted on monthly CV setup. All models use pretty much the same / similar features with some differences of course. In Catboost it was necessary to include the IDs as explicit categorical variables to make it work. Neural networks of course needed some feature transformations and other tricks. <a href="/christofhenkel">@christofhenkel</a> and <a href="/maxjeblick">@maxjeblick</a> did most of the work there, so they are probably the best to elaborate on this further. We additionally did a bunch of pseudo-tagged LGB models where we used predictions from only overlapping UIDs (because we are confident with them) as either soft or hard labels. We use them partly in the blend, but hard for me to say if they help much.</p> <p><strong>Blending</strong></p> <p>Our blending is now probably the interesting part. First of all, we decided to do hillclimbing blending, mostly using mean or gmean on raw predictions. But instead of optimizing for overall CV, we optimize separately for overlapping and non-overlapping CV. So first, we only optimize the CV for those indices which we have marked as non-overlapping, and then we do the same for the overlapping indices. The cool thing is now that different models come into the mix for those different indices. </p> <p>LGBs usually work well for both parts, so they are part of both overlapping and non-overlapping blends. Neural networks help significantly for the non-overlapping part, but not so much for the overlapping part because they don’t overfit heavily on the IDs and it is even hard to force them to. Catboost models work well on both parts, but have some issues on the non-overlaps leaking too much which doesn’t work well on test, so they are only part of overlaps. To summarize: Overlap Blend consists of LGBs + Catboosts and nonoverlap blend consists of LGBs + NNs.</p> <p>Now the next tricky thing is how to combine them? One way that surprisingly works well is to just fuse them by raw probability. Actually, most of our final models just do it like that. We had some other fancy methods for fusing, starting from simple mean adjustments, towards a tricky rank-based fusing approach that <a href="/christofhenkel">@christofhenkel</a> came up with. The idea is roughly to rank all predictions using only the coverage model, take the nocoverage part and rank again, build a mapping of those ranks to overall ranks and apply it to the nocoverage ranks. 
</p> <p><strong>Post-processing</strong></p> <p>What further helped us is the following trick. Take all predictions from a customer based on our UID6, and combine the predictions to a single value so that all transactions of a customer have the same value. This gave both a nice boost on CV as well as on LB. I think this is also something that helped us quite significantly on private LB because it makes the ranking much more precise by grouping customers together. One way is to just take the mean of the predictions, what we mostly did is to take 90% quantile of the predictions as this usually worked best on CV.</p> <p><strong>Final submission</strong></p> <p>Our blends had a certain random range on both CV as well as LB and as we could not fully decide which one to pick we made a blend of blends by taking rank-average of a bunch of our top blends we had. This was a good choice and it is nearly our top private LB solution we picked by doing so. </p> <p>In our second submission we tried to gamble a bit more by focusing even more on the non-overlaps. So we did a different CV setup splitting by UID instead of by month, so we had little to no UID leaking in the CV. Then we set the nonoverlap predictions based on a blend of that CV setup. It only scored a bit lower on private LB.</p>
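<p>The post-processing trick of giving every transaction of one customer the same score can be reproduced with a pandas groupby. The sketch below assumes a dataframe with the proxy user id in a uid column and model outputs in a prediction column; the 90% quantile follows the write-up, the rest is illustrative.</p> <pre><code>import pandas as pd

def smooth_by_uid(df, q=0.9):
    """Replace each customer's transaction scores with that customer's q-quantile score."""
    df = df.copy()
    df["prediction"] = df.groupby("uid")["prediction"].transform(lambda s: s.quantile(q))
    return df

# toy example: both transactions of customer "a" receive the same (high) score
df = pd.DataFrame({"uid": ["a", "a", "b"], "prediction": [0.2, 0.9, 0.1]})
print(smooth_by_uid(df))
</code></pre>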
Understanding Clouds from Satellite Images
95th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It has been great to work with <a href="/muhakabartay">@muhakabartay</a>, <a href="/carlolepelaars">@carlolepelaars</a>, <a href="/roydatascience">@roydatascience</a>, <a href="/lightnezzofbeing">@lightnezzofbeing</a> . We have open sourced the code in github @ <a href="https://github.com/LightnessOfBeing/Cl0uds">https://github.com/LightnessOfBeing/Cl0uds</a>. </p>
Understanding Clouds from Satellite Images
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners! Here is my solution (Public 0.67376, Private 0.66765).</p> <h2>Pre processing:</h2> <ul> <li>resize image size to (320, 512)</li> <li>exclude bad images (removed 21 images)</li> </ul> <h2>Augmentations:</h2> <p>I used <a href="https://github.com/albu/albumentations">albumentations</a>. - HorizontalFlip, VerticalFlip - ShiftScaleRotate, GridDistortion - Blur, MedianBlur, GaussianBlur - CLAHE, RandomBrightnessContrast, HueSaturationValue, IAASharpen</p> <h2>Model:</h2> <p>I used <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models.pytorch</a>, <a href="https://github.com/cadene/pretrained-models.pytorch">pretrained-models.pytorch</a>, <a href="https://github.com/lukemelas/EfficientNet-PyTorch">EfficientNet-PyTorch</a>.</p> Model-1: <ul> <li>densenet169 Unet with classification</li> <li>image size : 320x480</li> </ul> Model-2: <ul> <li>efficientnet-b4 FPN with classification</li> <li>image size : 320x480</li> </ul> <h2>Optimizer:</h2> <ul> <li><a href="https://github.com/LiyuanLucasLiu/RAdam">RAdam</a></li> </ul> <h2>Loss:</h2> Pre training: <ul> <li>segmentation: BCE + Dice</li> <li>classification: FocalLoss</li> </ul> Main training: <ul> <li>(classification)( FocalLoss * 0.5 + BCEWithLogits * 0.5 ) * 0.05 + (segmentation)( BCE + Dice ) * 0.95</li> </ul> <h2>Ensemble:</h2> <ul> <li>simple average of the 2 models(x 4 = total 8 models)</li> </ul> <h2>Post processing:</h2> <ul> <li>TTA : None, h-flip, v-flip, h-flip and v-flip</li> <li>Threshold : I use <a href="https://github.com/optuna/optuna">optuna</a> to find the optimal value from the cv score. </li> </ul> <h2>GPU:</h2> <ul> <li>RTX2080Ti x 1</li> </ul>
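<p>The main-training loss above mixes a classification term and a segmentation term with a 0.05/0.95 weighting. Below is a hedged PyTorch sketch of that combination; the Dice and Focal implementations are generic stand-ins, not the author's exact ones.</p> <pre><code>import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps=1.0):
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + targets.sum(dim=(2, 3))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def focal_loss(logits, targets, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    pt = torch.exp(-bce)
    return ((1 - pt) ** gamma * bce).mean()

def combined_loss(mask_logits, mask_targets, cls_logits, cls_targets):
    cls_part = 0.5 * focal_loss(cls_logits, cls_targets) \
             + 0.5 * F.binary_cross_entropy_with_logits(cls_logits, cls_targets)
    seg_part = F.binary_cross_entropy_with_logits(mask_logits, mask_targets) \
             + dice_loss(mask_logits, mask_targets)
    return 0.05 * cls_part + 0.95 * seg_part
</code></pre>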
Understanding Clouds from Satellite Images
Quick Silver - The Late Joiners' Journey
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello and congrats to all participants! Thanks, organizers for this incredible competition!</p> <p>We entered this competition after a good rest from Severstal competition. Actually, I made the repo for it 12 days before the final deadline.</p> <p>We used our pipeline from Severstal competition and changed only separate parts of it. At first, I tried the best scoring config from Severstal competition for this data, only hoping for "easy gold" but got the rank around ~900. This is my second competition, where the 'fit_predict' could not get into the bronze zone.</p> <p><strong>At first, here are the things that we tried, and that did not work at all:</strong></p> <ul> <li>Higher than 352x576 resolution,</li> <li>Adding heavy encoders (everything larger than Densenet169/SEResNeXt50 overfitted like crazy)</li> <li>Masking that black stripe with mean value or overlaying with a black rectangle,</li> <li>Training on crops,</li> <li>Separate classifier,</li> <li>Unet decoder,</li> <li>PSP decoder,</li> <li>Hard color augmentations,</li> <li>Decoder from <a href="/hengck23">@hengck23</a> (but was great to learn from!),</li> <li>BCE Loss, Focal Loss, Dice loss or Lovasz loss (alone),</li> <li>Symmetric CrossEntropy (<a href="https://arxiv.org/pdf/1908.06112.pdf">https://arxiv.org/pdf/1908.06112.pdf</a>)</li> </ul> <p><strong>Things that worked:</strong> <a href="/spsancti">@spsancti</a>: - Combination of Trimmed BCE and Dice loss, - Separate classification head, - EfficientNet-B0 and B1 encoders, - Pseudolabeling and Knowledge distillation</p> <p><a href="/hakuryuu">@hakuryuu</a>: - Small networks (Resnet34, EfficientNet-B0...B2), - Grid shuffle augmentation, - Sum of symmetric Lovasz, Trimmed Focal, Dice and BCE losses</p> <p><strong>We both used:</strong> - Cosine Annealing LR schedule, - FPN decoder, - Hard geometric augmentations, - Over9000 optimizer (better than RAdam on all runs) - Threshold selection on the validation</p> <p><strong>Libraries used:</strong> - Catalyst and Apex - Albumentations - <a href="/pavel92">@pavel92</a> 's awesome segmentation models library</p> <p>We trained single-fold models, with different split, to get more boost after merging them.</p> <p><strong>Pseudolabelling and Knowledge distillation</strong></p> <p>One day before the deadline, I decided to try to use pseudolabelling. I took our most successful submit for that time (0.6575 public) and used it as hard pseudo labels. I did not select any confident predictions assuming that the source markings are noisy enough to tolerate some more noise from pseudo labels. I pretrained the model on pseudo labels and continued to the regular train. Immediately there was an improvement on local validation while training on pseudo labels, which quickly disappeared when I switched to the regular train. So, I trained several models with this scheme only to confirm this result.</p> <p>This made me think that noisy corners in original markings confuse models too much, and I decided to relabel training data with the same ensemble that produced this submission. 
With relabeled training data mixed with pseudo labels, the single model could reach 0.68 on local validation. I submitted it and got the first of my single models to break the 0.66 barrier on public LB. </p> <p>These results looked too optimistic, so we thought that I created a selection bias, so we decided to select one ensemble with models trained with pseudo labels and KD and one without.</p> <p><strong>Ensembling</strong> In our final submission we had 2 ensembles: The first one was trained without pseudo labels and consisted of 10 models. The second one was the same as first, but with 5 more models trained on KD mixed with pseudo labels. </p> <p>Every model in ensemble was wrapped with 8xTTA: - HorizontalFlip - VerticalFip - BrightnessUp - BrightnessDown</p> <p>Probably, the assumption of selection bias was correct, as ensemble trained with KD and pseudo labels scored 0.65982 on private LB, which is 0.0034 less than an ensemble without them. </p> <p><strong>Hardware</strong> We used 2 servers with 1xV100 and 1 workstation with 2x1080Ti. We thank VITech Lab and 3DLook for providing computing resources for this competition.</p>
Understanding Clouds from Satellite Images
86-th(bronze) writeup - solution description and lessons learned.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>solution summary</strong> My final submission is a voting average of the 5 best submissions by public LB. Each of these 5 submissions is a 5-fold voting average of an efficientnet-b4 Unet or FPN, trained with BCE Dice / only BCE loss, with or without TTA, on image sizes 512x352 and 640x320. Voting average settings - minimum 4 of 5 non-empty masks to consider a mask non-empty, minimum 2 positive pixels to consider a pixel positive for a non-empty mask.</p> <p><strong>Pipeline</strong> The code is based on this great <a href="https://www.kaggle.com/artgor/segmentation-in-pytorch-using-convenient-tools">kernel </a> with a few important changes that allowed a single model to reach a public score of 0.66+: - efficientnet-b4 backbone - border mode cv2.BORDER_REFLECT_101 (default) in albumentations ShiftScaleRotate - RAdam optimizer - Custom learning rate scheduler (decoder starts at 1e-3 and decays to 1e-4 in 10 epochs, encoder starts at 0 and increases to 1e-4 in 10 epochs, then both decay in steps to 1e-5 over the next 30 epochs, every 3 epochs). Usually the best checkpoint is at epoch ~22.</p> <p>Side note: I switched from keras to pytorch in this competition and am quite happy with that. The most important reasons are: 1. Mixed precision training that can be enabled with 1 line of code in catalyst 2. Parameter groups in the optimizer that allow fine control over model learning.</p> <p><strong>Here are some highlights of the lessons learned.</strong> 1. Even in a competition like this, where the CV/LB discrepancy is small, a single-model score improvement after a hyperparameter change doesn't prove anything. N-fold validation should be used every time. 2. Too much parameter fitting on the validation set (threshold, min_size) easily leads to overfitting to validation. Using constant values eventually is better. 3. Models with TTA inference don't improve the public LB most of the time, but are consistently better on private.</p> <p><strong>Things that didn't work for me:</strong> 1. Pseudo labeling 2. Loss functions beyond BCEDice/BCE. 3. Classification (stopped adding value after LB reached 0.66)</p> <p><strong>What things I wish I'd tried:</strong> 1. Implement the final thresholded dice as a metric and use it for checkpointing 2. Triplet thresholding (<a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/107824">https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/107824</a>), or double flat thresholds like Heng did. 3. Implement a good k-fold validation scheme early and run a grid search over image size/loss/augmentations/tta 4. Pretraining - <a href="https://www.kaggle.com/c/understanding_cloud_organization/discussion/118017#latest-676633">https://www.kaggle.com/c/understanding_cloud_organization/discussion/118017#latest-676633</a></p> <p><strong>Conclusion</strong> I got my first kaggle medal, was quite close to the silver zone and didn't suffer from the major shakeup (actually enjoyed it, gaining +16 positions), so the result is quite positive for me. But I still have a lot to learn.</p>
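<p>The voting-average settings in the summary (a mask counts as non-empty only if at least 4 of the 5 folds predicted something, and a pixel is positive only if at least 2 folds voted for it) can be written as a small numpy routine. This is a sketch of the rule as stated, not the author's code.</p> <pre><code>import numpy as np

def voting_average(masks, min_nonempty=4, min_pixel_votes=2):
    """Combine binary fold masks of shape (n_folds, H, W) into one mask."""
    masks = np.asarray(masks)
    nonempty_votes = (masks.sum(axis=(1, 2)) &gt; 0).sum()
    if nonempty_votes &lt; min_nonempty:
        return np.zeros(masks.shape[1:], dtype=np.uint8)   # treat the whole mask as empty
    pixel_votes = masks.sum(axis=0)
    return (pixel_votes &gt;= min_pixel_votes).astype(np.uint8)
</code></pre>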
Understanding Clouds from Satellite Images
1st placed solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>UPDATE: code available on github</h2> <p><a href="https://github.com/pudae/kaggle-understanding-clouds">https://github.com/pudae/kaggle-understanding-clouds</a></p> <hr> <p>Congrats to all the winners and survivors of the shake-up. Thanks to Kaggle and the hosting team for the interesting competition.</p> <p>Except for some tricks, improvements almost have been made by using ensemble. So, in this post, I will briefly describe the track of scores in the last week. The details will be shared as codes.</p> <h3>Common Settings</h3> <p><strong>Types of networks</strong> - Model A: UNet with classification head - Model B: FPN or UNet, no classification head</p> <p><strong>Backbones</strong> - resnet34, efficientnet-b1, resnext101_32x8d_wsl, resnext101_32x16d_wsl</p> <p><strong>DataSet</strong> - split: train vs val = 9 vs 2 - Model A: All labels - Model B: non-empty labels</p> <p><strong>Loss</strong> - classification part: BCE - segmentation part: BCE * 0.75 + DICE * 0.25</p> <p><strong>Optimizer</strong> - AdamW, weight decay 0.01 - encoder learning rate 0.000025 - decoder learning rate 0.00025 - OneCycle scheduler, shallow models 30 epochs, deep models 15 epochs</p> <p><strong>Augmentation</strong> - Common: hflip, vflip, shift/scale/rotate, grid distortion, channel shuffle, invert, to gray - Model A: random crop, size 384 - Model B: full-size, size 384, 544, 576, 768</p> <h3>The track of scores</h3> <p><strong>train single model</strong> At first, I’d tried to train a good single network. I’d struggled to improve and stabilize the LB scores for 2 weeks, but I’d failed. - TTA3: CV 0.6517 / Public LB 0..66951 / Private LB 0.65828</p> <p><strong>add segmentation models</strong> I thought the reason for the unstable LB score was because of poor segmentation performance. If we can have a more powerful segmentation model, the effect of poor classification performance can be reduced.</p> <p>So, I began trying to train good segmentation only model. Because I could filter out negative predictions using the classification model, only positive labels were needed to train.</p> <p>From this time, CV and LB were correlated well. I trained several segmentation models with different backbone, image size, etc. - TTA4, 1 seg with cls + 1 seg: CV 0.6560, Public LB 0.67395, Private LB 0.66495 - TTA4, 1 seg with cls + 3 seg: CV 0.6582, Public LB 0.67482, Private LB 0.66501 - TTA4, 1 seg with cls + 4 seg: CV 0.6587, Public LB 0.67551, Private LB 0.66604 - TTA4, 1 seg with cls + 7 seg: CV 0.6594, Public LB 0.67596, Private LB 0.66663</p> <p><strong>add more models with classification head</strong> Now, the segmentation part became enough good. so, I added two more models with classification head. - TTA4, 3 seg with cls + 7 seg: CV 0.6625, Public LB 0.67678, Private LB 0.66746</p> <p><strong>use segmentation models as a classifier</strong> To take advantage of the performance of the segmentation models, I used a mean of top K pixel probabilities as a classification probability. 
<code> cls_probabilities = np.sort(mask_probabilities.reshape(4, -1), axis=1) cls_probabilities = np.mean(cls_probabilities[:,-17500:], axis=1) </code></p> <ul> <li>TTA4, 3 seg with cls + 7 seg: 0.6629, 0.67822, 0.67046</li> <li>TTA4, 3 seg with cls + 8 seg: 0.6635, 0.67906, 0.67117</li> </ul> <p><strong>use max probability as a positive prediction</strong> All images in the train set have at least one type of cloud, so I treated the label of max probability in each image as a positive prediction. <code> cls_probabilities[np.argmax(cls_probabilities)] = 1 </code> - TTA4, 3 seg with cls + 8 seg: CV 0.6640, Public LB 0.68031, Private LB 0.67170</p> <p><strong>use exponential moving average</strong> Finally, I changed the averaging weights method to the exponential moving average. Before that, the average of the last 5 weights was used. - TTA4, 3 seg with cls + 8 seg: CV 0.6636, Public LB 0.68130, Private LB 0.67126 - TTA4, 3 seg with cls + 9 seg: CV 0.6637, Public LB 0.68185, Private LB 0.67175 (<strong>Final Submission</strong>)</p>
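<p>Putting the two classification tricks above together, here is a minimal, self-contained sketch (the 17,500-pixel top-K comes from the snippet above; the function name and the dummy input are mine):</p> <pre><code>import numpy as np

def classify_from_mask(mask_probabilities, top_k=17500):
    # mask_probabilities: (4, H, W) sigmoid outputs of a segmentation model
    flat = np.sort(mask_probabilities.reshape(4, -1), axis=1)
    # mean of the top-K pixel probabilities serves as the per-class classification score
    cls_probabilities = np.mean(flat[:, -top_k:], axis=1)
    # every train image has at least one cloud type, so force the most likely class positive
    cls_probabilities[np.argmax(cls_probabilities)] = 1
    return cls_probabilities

# usage on a dummy prediction
print(classify_from_mask(np.random.rand(4, 350, 525)))
</code></pre>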
Severstal: Steel Defect Detection
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Severstal for hosting this competition. And congrats to the winners!</p> <p>I was very surprised by such a huge shake-up and I am very fortunate to get a solo gold medal. I only worked on this competition for about 10 days, so I haven't done many experiments and my solution is not so special. I was just very lucky.</p> <p>In short, my solution is just an ensemble of multiple models. From my experience so far, I think it is very important to ensemble various models to build a robust solution. A short sketch of how the classification and segmentation stages are combined at inference time is given at the end of this post.</p> <h3>Classification</h3> <ul> <li>models <ul><li>3x EfficientNet-b4 (first 3 of stratified 10 folds)</li></ul></li> <li>input <ul><li>full size (256 x 1600)</li></ul></li> <li>augmentations <ul><li>random crop rescale</li> <li>hflip, vflip</li> <li>random contrast, random gamma, random brightness</li></ul></li> <li>TTA <ul><li>none, hflip</li></ul></li> <li>threshold label <ul><li>[0.5, 0.5, 0.5, 0.5]</li></ul></li> </ul> <h3>Segmentation</h3> <ul> <li>models <ul><li>EfficientNet-b3 Unet stratified 4fold w/ full size image</li> <li>EfficientNet-b3 Unet 1fold w/ random crop 256 x 800</li> <li>3x Unet from <a href="https://www.kaggle.com/lightforever/severstal-mlcomp-catalyst-infer-0-90672">mlcomp + catalyst infer kernel</a> from @lightforever </li></ul></li> <li>augmentations <ul><li>same as classification</li></ul></li> <li>loss function <ul><li>BCEDice (bce weight=0.75, dice weight=0.25)</li></ul></li> <li>TTA <ul><li>none, hflip</li></ul></li> <li>threshold mask <ul><li>[0.5, 0.5, 0.5, 0.5]</li></ul></li> <li>postprocess</li> </ul> <h3>Predictions on public LB</h3> <p>defect 1 = 82 (128) defect 2 = 5 (43) defect 3 = 601 (741) defect 4 = 110 (120)</p>
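<p>As mentioned above, here is a minimal sketch of how such a classification + segmentation ensemble can be combined at inference time (function and variable names are illustrative, not the actual code):</p> <pre><code>import numpy as np

def two_stage_masks(cls_probs, seg_probs,
                    cls_thresholds=(0.5, 0.5, 0.5, 0.5),
                    mask_thresholds=(0.5, 0.5, 0.5, 0.5)):
    # cls_probs: (4,) classifier probabilities averaged over folds and {none, hflip} TTA
    # seg_probs: (4, 256, 1600) segmentation probabilities averaged the same way
    masks = np.zeros_like(seg_probs, dtype=np.uint8)
    for c in range(4):
        if cls_probs[c] >= cls_thresholds[c]:  # classifier says this defect class is present
            masks[c] = (seg_probs[c] >= mask_thresholds[c]).astype(np.uint8)
    return masks
</code></pre>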
Understanding Clouds from Satellite Images
3rd silver place key points
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First I would like to thank Max Planck Institute and Kaggle for hosting this interesting competition.</p> <p>I would like to share some of the key points of my 3rd place (silver) solution :-) It sounds cool right? (well I love to make that top silver medal the most out of it, forgive me :P)</p> <h3>1) Cutmix augmentation</h3> <p>Naturally thinking, cutmix is the best way to deal with this competition. We can cut a part of this image and paste to another image. This idea came off from my mind without knowing its existence academically, which I later found an official paper about it. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1829450%2Fb8053a93d2452d3417e3bfe0100ea953%2Fcutmix.png?generation=1574165517927124&amp;alt=media" alt=""></p> <p>How to do that in code? I search for some augmentation package, but find it hard to flexibly code it my way. So I decided to do it manually. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1829450%2Fbc80b1cb83404da2281afe27ec2f7f72%2Fcutmixcode.png?generation=1574165662592266&amp;alt=media" alt=""> The different of doing cutmix or not is just an extra section of <code>__getitem__</code> in data generator. Here, <code>indexes_augment</code> is the random indexes pick from the training data, <code>w_cutmix</code> and <code>h_cutmix</code> are width and height of the crop. So I just get the random starting position of width and height in the original drawn image (<code>X</code>) and insert part of other image (<code>Xc</code>) into it. </p> <p><strong>Cutmix boosted both LB and CV by 0.004.</strong></p> <h3>2) Pseudo-label</h3> <p>Pseudo-label only works if we correctly select good samples, as well as the correct number of samples. I did this by assessing the <code>quality</code> of each predicted validation image by calculating: <code>quality = (number of pixels with probability &amp;gt; top) + w*(number of pixels with probability &amp;lt; bot)</code>. Here, <code>top</code> can take values from [.7, .75, .8, .85, .9], <code>bot</code> can take values from [.1, .15, .2, .25, .3], and <code>w</code> is the weight of low-value pixels as compared to high-value pixels, which can be taken from, say, [.1, .5, 1, 2, 10]. </p> <p>I get the <code>quality</code> of all validation data, rank it, and select <code>nb_samples</code> most confident samples from it, and see the score. I search through a full set of validation data and had a result something like this <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1829450%2Fbdb7de2de3781797e891e7eb8ec93508%2Fpseudo.png?generation=1574166277986726&amp;alt=media" alt=""></p> <p>So, I can manually decide <code>bot</code>, <code>top</code>, <code>w</code>, and <code>nb_samples</code> as long as <code>nb_samples</code> are reasonable with the corresponding score. 
For example, <code>bot=.1</code>, <code>top=.7</code>, <code>w=1</code>, and <code>nb_samples</code>=1000 (with corresponding <code>dice=0.77xx</code>), which means the most 1000 confident predictions out of 5546 train images can have that good dice. Then I can pick up the same ratio of images from test predictions, which is (1000/5546*3698).</p> <p><strong>Pseudo labelling boosted around 0.003 on both CV and LB.</strong></p> <h3>3) Estimating private LB distribution and decide to trust CV</h3> <p>First, I did a test on private LB, based on <a href="https://www.kaggle.com/c/understanding_cloud_organization/discussion/109793#latest-631950">this topic</a>. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1528571%2F43771b2c3fd35e39047f55b552604852%2FLB_probing.png?generation=1569149948996763&amp;alt=media" alt=""></p> <p>From that probing result, we need to make 1 assumption: "Train set and full test set should have the same distribution of classes."</p> <p>Then, by submitting each class as empty (others as 1-pixel masks), we can know the percentage of each class in the public LB. Then the assumption I make will allow us to know the percentage of each class in the private LB. The result is:</p> <p>Train data: Fish 49.85%, Flower 57.35%, Gravel 47.00%, Sugar 32.36%. Private test data: Fish 49.94%, Flower 56.76%, Gravel 47.20%, Sugar 31.30%.</p> <p>As you can see, the distributions of private test and train very similar, allowing me to completely trust CV. Therefore during the whole competition, I never probed LB by submissions, but only stick with full k-fold to search for post-processing parameters. <strong>This is important, as it guides the way we do everyday in the competition</strong>. And you can see that I jumped on private LB, and I also selected my possibly best submission.</p> <p>Finally, I still would like to emphasize again that late sharing should not be encouraged. I have a bad thought that whenever I see the excessive sharers around in future competitions, I would be very disappointed, and discouraged from competing. In other words I am somehow "scared" of their existence. </p> <p>Thanks for reading!</p>
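<p>As an appendix, here is a minimal sketch of the cutmix step from section 1, written as the extra block inside <code>__getitem__</code> (names such as <code>w_cutmix</code> and the donor images drawn via <code>indexes_augment</code> follow the write-up; the crop size and probability are assumptions):</p> <pre><code>import numpy as np

def cutmix(X, Y, Xc, Yc, h_cutmix=180, w_cutmix=180, p=0.5):
    # X, Y:   batch of images / masks being drawn, shapes (B, H, W, 3) and (B, H, W, 4)
    # Xc, Yc: images / masks drawn from random other indexes (indexes_augment)
    B, H, W = X.shape[:3]
    for b in range(B):
        if np.random.rand() < p:
            y0 = np.random.randint(0, H - h_cutmix)
            x0 = np.random.randint(0, W - w_cutmix)
            # paste the same rectangle from the donor into both image and mask
            X[b, y0:y0 + h_cutmix, x0:x0 + w_cutmix] = Xc[b, y0:y0 + h_cutmix, x0:x0 + w_cutmix]
            Y[b, y0:y0 + h_cutmix, x0:x0 + w_cutmix] = Yc[b, y0:y0 + h_cutmix, x0:x0 + w_cutmix]
    return X, Y
</code></pre>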
Severstal: Steel Defect Detection
Silver frankenstein's monster (57th place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This solution is actually a small modification of great kernel by @lightforever (<a href="https://www.kaggle.com/lightforever/severstal-mlcomp-catalyst-infer-0-90672">https://www.kaggle.com/lightforever/severstal-mlcomp-catalyst-infer-0-90672</a>) which used mlcomp/catalyst 🤘.</p> <p>Things i added: - Heng's resnet34 classifier (found here: <a href="https://www.kaggle.com/bibek777/heng-s-model-inference-kernel">https://www.kaggle.com/bibek777/heng-s-model-inference-kernel</a>) ;sorry, didn't notice appropriate model in mlcomp kernel : / - One more unet with resnext50_32x4d encoder (fold number 3)</p> <p>What could work (it gave higher private score, but i didn't select that solutions): - Efficientnet-b5/vgg16 encoders - More resnext50_32x4d encoders trained on different folds </p> <p>What didn't work: - Additional augmentations for classification (vertical and horizontal flip) and segmentation (vertical flip) - Thresholds higher than 0.5</p> <p>Here is my code: <a href="https://www.kaggle.com/shiron8bit/silver-frankenstein">https://www.kaggle.com/shiron8bit/silver-frankenstein</a></p> <p>Kudos to @hengck23 and @lightforever!</p>
Lyft 3D Object Detection for Autonomous Vehicles
19th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft 3D Object Detection for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey guys,</p> <p>First of all, I'd like to thank the organizers for this awesome challenge. This was not a typical computer vision competition that's for sure and I spent a good amount of time just to understand the problem and the data. I went through multiple papers related to 3d object detection. I chose to go with <a href="https://github.com/traveller59/second.pytorch">SECOND</a>:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F761152%2Fadcfa4df972dfe71bde7404745460bb1%2Fpic-selected-191130-2044-46.png?generation=1575126996055584&amp;alt=media" alt=""></p> <p>I tweaked the SECOND's original source code to get it working for lyft dataset. Added the evaluation metric of the competition for val set evaluation. Modified the default config files according to lyft object dimensions (<code>anchor_ranges</code>, <code>sizes</code> etc). Splitted train/val based on scenes (80:20), <a href="https://github.com/pyaf/second.pytorch/blob/master/second/configs/nuscenes/all.pp.lowa.config">configs/nuscenes/all.pp.lowa.config</a> with VoxelNet and PillarFeatureNet worked best for me. </p> <p>check out my source code: <a href="https://github.com/pyaf/second.pytorch">https://github.com/pyaf/second.pytorch</a> </p>
Lyft 3D Object Detection for Autonomous Vehicles
26th Place Solution Summary [0.078 Private LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft 3D Object Detection for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Congrats + Thanks</h2> <p>Congrats to all the winners! Thanks to the competition hosts for a very interesting and challenging competition. Big thanks to <a href="/rishabhiitbhu">@rishabhiitbhu</a> for all his sharing in the forums and kernels throughout the competition and everyone else who contributed their insights. </p> <h2>Model</h2> <p>The model was based on the PointPillars [1] implementation from second.pytorch [2]. This repo contains SECOND [3], VoxelNet [4], and PointPillars implementations and supports Kitti [5] and NuScenes [6] data formats. Most of my time was spent updating the code to work with the Lyft dataset and removing the Spconv [7] dependency used by SECOND architecture which kept causing errors while trying to compile it. Many of the changes to remove the Spconv dependency were taken from Nutomoy’s fork [8] of second.pytorch but this fork does not support the NuScenes format. </p> <h2>Configuration</h2> <p>| Point Cloud Range | Voxel Size | Max Num Points Per Voxel | |--|--|--| | [-100, -100, -10, 100, 100, 10] | [0.25, 0.25, 20] | 60</p> <h2>Validation</h2> <p>First 80% of data was used for training, last 20% was used for validation. Data was split by scene. </p> <h2>Training, Evaluation &amp; Test Set Predictions</h2> <p>On Saturday 9th before the competition end I was able to make my first submission using Point Pillars model. This first attempt got the following results:</p> <p>| GPU | Epochs | Train Time | Val mAP | Public LB mAP | Private LB mAP | |--|--|--|--|--|--| | GTX1080 | 20 | ~18 | 0.049 | 0.066 | 0.065 |</p> <p>On analyzing predictions from both eval set and test set I noticed there were no predictions outside of +-50 meters along X and Y. The reason for this was the point cloud range was set to [-50, -50, -10, 50, 50, 10], many of the object anchor ranges were set to the same range or less, there was also some post processing that filter out any detections outside of 50 meters from the center of the point cloud. These were defaults used for working with NuScenes and not well suited for the Lyft dataset. 
</p> <p>After extending the point cloud range to [-100, -100, -10, 100, 100, 10] , extending the object anchor ranges, increasing the post processing detection filtering radius to 100 meters, and retraining I got the following results:</p> <p>| GPU | Epochs | Train Time | Val mAP | Public LB mAP | Private LB mAP | |--|--|--|--|--|--| | GTX1080 | 19 | ~36 | 0.072 | 0.079 | 0.078 |</p> <h2>References</h2> <ol> <li>PointPillars: Fast Encoders for Object Detection from Point Clouds <a href="https://arxiv.org/pdf/1812.05784.pdf">[paper]</a></li> <li>second.pytorch <a href="https://github.com/traveller59/second.pytorch">[code]</a></li> <li>SECOND: Sparsely Embedded Convolutional Detection <a href="https://pdfs.semanticscholar.org/5125/a16039cabc6320c908a4764f32596e018ad3.pdf">[paper]</a></li> <li>VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection <a href="https://arxiv.org/pdf/1711.06396.pdf">[paper]</a></li> <li>Kitti dataset <a href="http://www.cvlibs.net/datasets/kitti/">[data]</a></li> <li>NuScenes dataset <a href="https://www.nuscenes.org/">[data]</a></li> <li>Spconv <a href="https://github.com/traveller59/spconv">[code]</a></li> <li>Nutonomy’s fork of second.pytorch <a href="https://github.com/nutonomy/second.pytorch">[code]</a></li> </ol>
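<p>A rough sketch of the configuration changes described above, written as a plain dictionary rather than the actual second.pytorch protobuf config (the key names are illustrative; the values come from the configuration table above):</p> <pre><code># Defaults inherited from NuScenes capped detections at +-50 m; the Lyft scenes contain
# objects well beyond that, so the ranges are stretched to +-100 m.
lyft_pointpillars_overrides = {
    "point_cloud_range": [-100, -100, -10, 100, 100, 10],
    "voxel_size": [0.25, 0.25, 20],          # a single vertical voxel per pillar
    "max_num_points_per_voxel": 60,
    # anchor ranges per class must also be stretched to cover the new area
    "car_anchor_range": [-100, -100, -1.0, 100, 100, -1.0],   # illustrative z values
    # post-processing: keep detections within 100 m of the ego vehicle instead of 50 m
    "post_center_limit_radius": 100.0,
}
</code></pre>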
Lyft 3D Object Detection for Autonomous Vehicles
2nd Place Solution Summary (Private: 0.202 / Public: 0.205)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft 3D Object Detection for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Before I describe&nbsp;my solution (which I will only summarize verbally), I'd like to thank the organizers and hosts for this interesting, well-organized competition which came with a rich, multi-data dataset that provided multiple solution avenues.&nbsp; And of course, I'd also like to thank the competitors for contributing to this competition and congratulations to all the winners and medal holders too.</p> <p>Also, apologies for not having written this sooner since I got caught up with multiple commitments after the competition - but better late than never.&nbsp; &nbsp;Particularly, my solution takes a slightly different approach wherein I actually did end up using 2D images effectively (compared to the other approaches which use only LIDAR), so it may be of interest. </p> <h1>Methods</h1> <p>These are my primary methods:</p> <p><strong>Method 1.</strong>&nbsp; As with most of high scoring solutions which have been shared one method I used is based on Voxelnet with PointPillars (<a href="https://github.com/traveller59/second.pytorch">https://github.com/traveller59/second.pytorch</a>).&nbsp; I started with this since it seemed to have the most fastest path to getting a first cut solution - NuScenes format, multi-class, similar classes, etc..&nbsp; In particular I re-used the multi-head version of the config (all.mhead.pp.config), and simply remapped existing NuScenes classes to Lyft classes (with the exception of some new classes which I just did a closest match, e.g. other_vehicle to trailer).&nbsp; Here I found that the following parameters made the most difference</p> <ul> <li><code>point_cloud_range</code> - [-100, -100, -5, 100, 100, 3]</li> <li><code>voxel_size</code> - in the final implementation I used an ensemble of models ranging from 0.1x0.1 to 0.25x0.25, with the best performing single model at 0.2x0.2.&nbsp;&nbsp;</li> <li><code>feature_map</code> - this has to be changed to match <code>voxel_size</code> above</li> <li><code>max_number_of_voxels</code> - I found that this was best set to as large as possible, e.g. 
&gt; 200k (depending on the amount of GPU memory available).</li> </ul> <p>Other than flip augmentations I did not find any additional advantage in additional augmentations, nor did I spend any time varying architecture parameters.&nbsp; I left the single model NMS parameters at the default and did not tune the thresholds, even though the downstream ensemble used rotated soft-NMS.</p> <p>In terms of external data, I did start my first model by training on the NuScenes dataset from scratch, which helped to (1) validate that the repository results were acceptable on NuScenes scoring (on average yes, but per class scores varied somewhat from what was reported) (2) accelerate subsequent training cycles as a pretrained model for Lyft dataset.&nbsp; &nbsp;Post-competition analysis showed that using the NuScenes dataset as an initial pretrained model might not have been necessary, but it was still helpful for me to validate the integrity of the repository.</p> <p>My best model had a private/public of 0.170/0.173 using a voxel size of 0.2x0.2.&nbsp; Including test time augmentation for this model.&nbsp; &nbsp;With test time augmentation (described below) this contributed to a private/public of 0.185/0.188.&nbsp;&nbsp;</p> <p>For the final ensemble I used a combination of 8 models of varying voxel sizes (and a few of them either used full data rather than train/val and one of them at some additional augs e.g. scaling).&nbsp; This LIDAR only ensemble gave a private/public of 0.188/0.191, which is not that much better from the best model TTA.</p> <p><strong>Method 2.</strong>&nbsp; Additionally, I used a 2D-&gt;3D approach that leveraged Frustum ConvNet (<a href="https://github.com/zhixinwang/frustum-convnet">https://github.com/zhixinwang/frustum-convnet</a>) by using 2D boxes/proposals from conventional object detectors.&nbsp; Since this repository worked on KITTI format, this required a few corrections on the KITTI converter:</p> <ol> <li>The rotation had to be pi instead of pi/2 (this was corrected by the Kaggle community)</li> <li>Ego pose differences between LIDAR and cameras had to be taken into account (see PR:&nbsp;<a href="https://github.com/lyft/nuscenes-devkit/pull/75">https://github.com/lyft/nuscenes-devkit/pull/75</a>)</li> </ol> <p>For the object detectors I used Detectron 2 and Tensorflow Object Detection API and trained the labels after converting from KITTI to COCO format.&nbsp; Pretrained models used from these object detectors include&nbsp;Faster-RCNN with X101-FPN for Detectron 2 and Faster-RCNN with Inception ResNet V2 (Atrous) for TF object detection API. 
</p> <p>I also wrote a de-converter from KITTI back to the Lyft submission format, and ignore overlapping boxes across camera views since they would be rectified by the ensemble process (which is rotated soft-NMS) described later.</p> <p>For the Frustum-ConvNet training process, I maintained 3 classes as per the original repository (instead of expanding this to all classes in a single training cycle, to minimize risk of modifications).&nbsp; I then mapped bicycle and motorcycle classes to 'Cyclist', pedestrian and animal classes to 'Pedestrian', and all other vehicle classes to 'Car', but modified their average sizes whenever the class was used.&nbsp; This required training each class mostly independently.</p> <p>Surprisingly, this approach by itself gave around private/public of 0.169/0.171 (ensemble of the 2 models from the frameworks x 2 checkpoints), which was pretty respectable - more importantly, combining this with approach (using SECOND) gave a final private/public of 0.202/0.205, which was would not have been achievable just using LIDAR alone since my LIDAR ensemble had already saturated.&nbsp;&nbsp;</p> <p>Further investigation showed that the best model for Frustum-ConvNet when compared to the SECOND approach has an improvement on pedestrian and bicycle classes, as shown below.&nbsp; Additionally, my guess is that this approach was able to detect some missed false negatives from the LIDAR approach for most of the classes which even though the overall score was lower.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F495818%2F6bb7a3583aee98d1efe199da3c8fb2e6%2F3d_vs_2d.png?generation=1577161276113708&amp;alt=media" alt=""></p> <h1>Ensembles</h1> <ol> <li><p>I used test time augmentation for the SECOND/LIDAR approach, which effectively is just orig, x flip, y flip, and x-y flip - preprocessing the point cloud by these flips, and then postprocessing the detected object boxes by the inverse.&nbsp; I did not apply TTA for the Frustum-ConvNet approach.</p></li> <li><p>I employed the same rotated soft-NMS for all ensembles, which include combining TTA predictions&nbsp;+ merging all LIDAR models and all Frustum-ConvNet models.&nbsp; In other words this acted as somewhat of a "late fusion" approach to both 3D and 2D-&gt;3D (via Frustum-ConvNet) models.&nbsp; A little bit of validation checking showed that using Gaussian function at a sigma of 0.85 gave the best validation score, so I stuck to this throughout the competition.&nbsp; I did not have success with 3D (i.e. 
including height intersection) but only IOU considering the rotated bounding boxes of the objects (it scored worse), and only considered soft-NMS within the same class.</p></li> </ol> <h1>What Didn't Work</h1> <p>I would be very interested if anyone enabled the following to work properly.&nbsp; It is also food for thought:</p> <ol> <li><p>Using high resolution maps to improve LIDAR/SECOND training - I had the idea of augmenting the point clouds with non road vs road points, but wasn't confident of this approach.&nbsp; I also tried weighing predicted objects directly using the map information to down-weigh non-road objects for certain classes, but I only got extremely marginal improvement.</p></li> <li><p>Using samples which were adjacent in time - I tried weighing predictions across pre and post adjacent frame samples for both non-road and road objects, and boost the predictions of objects which were seen across multiple adjacent frames.&nbsp; This did not help.&nbsp; Perhaps a tracking approach would have improved scores here.</p></li> <li><p>Animals - the 2D (Frustum-ConvNet) approach actually detected some animals (dogs) correctly, but the IOU evaluation was too sensitive for this to work well, so my score was 0 for both local and LB.</p></li> <li><p>PointRCNN - a last minute attempt to use this did not work well and I only got about slightly better than half the score for the car class (if I remember ~0.15 instead of 0.3+ for cars).</p></li> </ol>
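<p>As a supplement to the Ensembles section, here is a minimal sketch of the flip-based test time augmentation (the box layout <code>x, y, z, w, l, h, yaw</code> and the <code>detect</code> callable are assumptions; the pooled boxes are then merged with rotated soft-NMS as described above):</p> <pre><code>import numpy as np

FLIPS = [(1, 1), (-1, 1), (1, -1), (-1, -1)]  # (sx, sy): orig, x flip, y flip, x-y flip

def flip_points(points, sx, sy):
    out = points.copy()
    out[:, 0] *= sx
    out[:, 1] *= sy
    return out

def unflip_boxes(boxes, sx, sy):
    # boxes: (N, 7) = x, y, z, w, l, h, yaw; flips are involutions, so applying
    # the same flip again maps the boxes back to the original frame
    out = boxes.copy()
    out[:, 0] *= sx
    out[:, 1] *= sy
    if sx < 0:
        out[:, 6] = np.pi - out[:, 6]
    if sy < 0:
        out[:, 6] = -out[:, 6]
    return out

def tta_detect(points, detect):
    # `detect` is a placeholder for the SECOND/PointPillars inference call
    all_boxes = []
    for sx, sy in FLIPS:
        boxes = detect(flip_points(points, sx, sy))
        all_boxes.append(unflip_boxes(boxes, sx, sy))
    return np.concatenate(all_boxes, axis=0)  # later merged with rotated soft-NMS
</code></pre>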
Lyft 3D Object Detection for Autonomous Vehicles
22th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft 3D Object Detection for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone,</p> <p>Now that the final validation has ended, I would like to share my solution. I converted the point clouds into RGB images and applied a simple SSD (Single Shot Multibox Detector). The brightness corresponds to the local density of points, and the hue corresponds to the weighted center of their z-positions.</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-3d-object-detection-for-autonomous-vehicles">https://github.com/toshi-k/kaggle-3d-object-detection-for-autonomous-vehicles</a></p> <p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-3d-object-detection-for-autonomous-vehicles/master/img/concept.png" alt="conceptual diagram"></p>
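<p>For readers who want a concrete starting point, here is a minimal sketch of rasterising a point cloud into such a top-down image, with the value (brightness) channel encoding point density and the hue encoding the mean z of the points in each cell (grid size, extent and normalisation constants are assumptions, not the repository's exact values):</p> <pre><code>import numpy as np
from matplotlib.colors import hsv_to_rgb

def bev_rgb(points, grid=0.2, extent=100.0, z_min=-3.0, z_max=3.0):
    # points: (N, 3) lidar points in the ego frame
    size = int(2 * extent / grid)
    ix = ((points[:, 0] + extent) / grid).astype(int)
    iy = ((points[:, 1] + extent) / grid).astype(int)
    keep = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    ix, iy, z = ix[keep], iy[keep], points[keep, 2]

    density = np.zeros((size, size), np.float32)
    z_sum = np.zeros((size, size), np.float32)
    np.add.at(density, (iy, ix), 1.0)
    np.add.at(z_sum, (iy, ix), z)

    value = np.clip(np.log1p(density) / np.log(64), 0, 1)          # brightness = point density
    z_mean = np.where(density > 0, z_sum / np.maximum(density, 1), z_min)
    hue = np.clip((z_mean - z_min) / (z_max - z_min), 0, 1)        # hue = weighted center of z
    hsv = np.stack([hue, np.ones_like(hue), value], axis=-1)
    return (hsv_to_rgb(hsv) * 255).astype(np.uint8)
</code></pre>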
RSNA Intracranial Hemorrhage Detection
3rd place solution & become GM!! (updated with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>update</h2> <p>code is <a href="https://github.com/okotaku/kaggle_rsna2019_3rd_solution">here</a>.</p> <p>Hi, dear kagglers. First of all, thank you very much RSNA and kaggle for hosting such a fantastic competition. And congrats winners and all kagglers:) I finally became kaggle Grandmaster. It was a super tough road but all experience made me stronger. I am very proud of it😆</p> <p>Here is my solution. I will write details in later parts and will share my github repo after I clean up it.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F841938%2Fbba0a58dc68950ba36d51fb210b8329c%2Fimage1.png?generation=1573695201895407&amp;alt=media" alt=""></p> <p>Final model: private 0.045 User stacking model only: private 0.043 (I couldn't select it qq)</p> <h2>Special Preprocessing</h2> <h3>windowing</h3> <p>I used 2 types of windowing. - <a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/110728#latest-659011">subdural window</a> - <a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/src/cnn/dataset/custom_dataset.py#L16">Appian’s 3 types windows</a></p> <p>For me subdural was a little better.</p> <h3>concat user slice</h3> <p>This method gave me much improvement. There are some images (about20 - 40) in one SeriesInstanceUID. And when sorted by ImagePositionPatient2, you can see that targets are continuous. I will call those images, s1, s2, s3, ..., st, st+1, … in my post. Here is the example. You can see more details in <a href="https://www.kaggle.com/takuok/eda-of-rsna">my kernel</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F841938%2Fc11506e3c3a5c81bf6277b2963f5df83%2Fimage3.png?generation=1573695232477620&amp;alt=media" alt=""></p> <p>So I decided to concat some images from the same SeriesInstanceUID. - st-1, st, st+1 - st, st+1, st+2 - st-2, st-1, st - st-2, st, st+2 - np.mean (st-3, st-2, st-1), st, np.mean(st+1, st+2, st+3) - np.mean (st-5, st-4, st-3, st-2, st-1), st, np.mean(st+1, st+2, st+3, st+4, st+5) - np.mean (st-X for X in all values), st, np.mean(st+X for X in all values)</p> <p>Then predicted st’s target.</p> <p>And I tried multi task training. - st-1, st, st+1 then predict targets of st-1, st, st+1 - st-2, st, st+2 then predict targets of st-2, st, st+2</p> <p>This model got 0.060~0.062(sry I forgot) on stage1 Public. It was my best single model and those 2 models improved my ensemble score from 0.057 to 0.056 on stage1.</p> <h2>User Stacking</h2> <p>I used “concat user slice” method to show models multiple slices of the user. And I used this method on ensemble parts. 
I call it User Stacking.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F841938%2F8599bb8cc52e1ebac3c14203f98e5427%2Fimage2.png?generation=1573695252564269&amp;alt=media" alt=""></p> <h2>Other things I used</h2> <ul> <li>These didn’t give much improvement, but I list them anyway.</li> <li>Appian’s 0.066 models</li> <li>predict the 5 subtype classes and fill “any” with the max of those predictions.</li> <li><a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/113339#latest-664918">CQ500 External Data</a></li> <li>crop black area</li> <li>retrain on stage 2 data</li> <li><a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/108065#latest-636669">generalized mean pooling</a></li> </ul> <h2>What didn’t work</h2> <ul> <li>EfficientNet</li> <li>CBAM ResNet</li> </ul> <p>*Slide design: Japanese Autumn leaves (紅葉: koyo)</p>
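<p>A minimal sketch of the “concat user slice” idea described above (the variants using st-2/st+2 or means of neighbours follow the same pattern; the helper name is mine):</p> <pre><code>import numpy as np

def concat_user_slices(slices, t):
    # slices: list of single-channel windowed images of one SeriesInstanceUID,
    #         already sorted by ImagePositionPatient2
    # returns a 3-channel image (s_{t-1}, s_t, s_{t+1}) used to predict s_t's labels;
    # edge slices are clamped to the first/last slice of the series
    prev_img = slices[max(t - 1, 0)]
    next_img = slices[min(t + 1, len(slices) - 1)]
    return np.stack([prev_img, slices[t], next_img], axis=-1)
</code></pre>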
RSNA Intracranial Hemorrhage Detection
10th place solution (+ github code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all participants and the winners ! And, I became KaggleMaster on this competition ! We will go to Japanese BBQ (Yakiniku) by prize money of this competition, haha.</p> <p>Following is a summary of our solutions.</p> <p>Code : <a href="https://github.com/shimacos37/kaggle_rsna_2019_10th_solution">https://github.com/shimacos37/kaggle_rsna_2019_10th_solution</a> We mostly used <a href="/appian">@appian</a> code. Thank you very much <a href="/appian">@appian</a> !!</p> <h1>Pipeline</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1227363%2F393c14ac48f011e5cab030c1493eb47e%2FRSNA_pipeline%20(4" alt="">.png?generation=1574649511734185&amp;alt=media)</p> <h1>Summary</h1> <h2>Stage 1</h2> <h3>Preprocess</h3> <ul> <li>As most of people did, we applied three window (brain, blood/subdural, bone).</li> <li>Delete some noisy image (image which has small brain area).</li> <li>PatientID based 5-fold split.</li> </ul> <h3>Train</h3> <ul> <li>We trained simply changed backbone in <a href="/appian">@appian</a> code and applied some ideas.</li> <li>We usually used 512x512 img_size and applied simple augmentations (flip, resize, etc...)</li> <li>Finally, We constructed eleven models. Consequently, I think that it doesn't need to construct so many models...</li> </ul> Simple CNN models <ol> <li>SeResNext-50</li> <li>SeResNext-50 (Resize 410x410)</li> <li>SeResNext-101 (Mixup used)</li> <li>Efficientnetb3</li> <li>InceptionV4</li> <li>InceptionResNetV2</li> <li>Xception</li> </ol> Some Ideas <ul> <li><p>We predicted label without 'any' and 'any' by other label probability (1 - (1-p_1)*(1-p_2)...)</p> <ul><li>This is not so high score, but should be have some contribution when stacking.</li></ul></li> <li><p>We used adjacent images for input, and predict center label. Please see following figure.</p></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1227363%2Ff3dc21d1aa487195426175d6aa5f4401%2FUntitled%20Diagram%20(3" alt="">.png?generation=1574650665165165&amp;alt=media)</p> <ul> <li>We applied label smoothing by moving average or interpolation of the sandwiched label area. <ul><li>Because, we noticed the the boundary of label tends to have high log_loss by our EDA.</li></ul></li> </ul> <h2>Stage 2</h2> <h3>Preprocess</h3> <ul> <li>First, we predicted the probabilities of labels per an image.</li> <li>Second, we sorted the probabilities by Position2 per StudyInstanceUID.</li> <li>We extracted below features. 
<ul><li>Aggregate feature (min, max, mean, std), pred-pred_mean, pred / pred_mean, etc</li> <li>Moving average feature (3, 5, 7, 9 adjacent prediction), pred - moving_average_pred, pred / moving_average_pred, etc</li></ul></li> </ul> <h3>Stacking</h3> <ul> <li>We simply trained LightGBM and MLP by above features.</li> <li>And we constructed CNN stacking model like below figure.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1227363%2Fce8d6f1abe7fcd102dcdea7a1b93fb72%2Fcnn_stacking%20(3" alt="">.png?generation=1574658076805462&amp;alt=media)</p> <ul> <li>We treated above features as images. <ul><li>height : features from different models</li> <li>width : feature dimension</li> <li>channel : adjacent features sorted by Position2</li></ul></li> </ul> <h2>Stage 3</h2> <h3>Preprocess</h3> <ul> <li>We used the same method of Stage 2.</li> </ul> <h3>Stacking</h3> <ul> <li>We simply trained LightGBM</li> <li>We clipped prediction values by [1e-6 1 - 1e-6] and made submissions.</li> </ul>
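<p>A small sketch of the Stage 2 feature extraction described above, assuming a dataframe with one row per image and columns <code>StudyInstanceUID</code>, <code>Position2</code> and per-class prediction columns prefixed with <code>pred_</code> (the column names are assumptions):</p> <pre><code>import pandas as pd

def make_stage2_features(df):
    df = df.sort_values(["StudyInstanceUID", "Position2"]).copy()
    pred_cols = [c for c in df.columns if c.startswith("pred_")]
    g = df.groupby("StudyInstanceUID")

    # aggregate features per study: min, max, mean, std, plus differences/ratios to the mean
    for agg in ["min", "max", "mean", "std"]:
        agg_df = g[pred_cols].transform(agg)
        for c in pred_cols:
            df[c + "_" + agg] = agg_df[c]
    for c in pred_cols:
        df[c + "_minus_mean"] = df[c] - df[c + "_mean"]
        df[c + "_div_mean"] = df[c] / (df[c + "_mean"] + 1e-6)
        # moving-average features over adjacent slices sorted by Position2
        for w in [3, 5, 7, 9]:
            ma = g[c].transform(lambda s: s.rolling(w, center=True, min_periods=1).mean())
            df[c + "_ma%d" % w] = ma
            df[c + "_minus_ma%d" % w] = df[c] - ma
    return df
</code></pre>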
RSNA Intracranial Hemorrhage Detection
2nd Place Solution - Sequential model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Code &amp; Val/LB scores : <a href="https://github.com/darraghdog/rsna">https://github.com/darraghdog/rsna</a> <br> Congrats all winners, looking forward to go through your solutions. Big shout out to competition hosts RSNA, kaggle community, pytorch community, albumentations and FB's work on resnext -very cool how they trained this. We were very sad not to get a top3 in Recursion competition, now we are very happy 😄 </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F213493%2Fc76202951ffb4afcc5d56acb579ac552%2Frsna_nobrainer.png?generation=1573698154700507&amp;alt=media" alt=""></p> <p><strong>Overview</strong> In general we just have a single image classifier, data split on 5 folds, we only trained on 3 of them, and then extracted pre-logit layer from the classifier and fed into an LSTM. Classifier trained on 5 epochs each fold, 480 images with below pre-processing. Each epoch, each fold, we extract embedding layer (use TTA and avg embeddings) train a separate LSTM for 12 epochs on each of those - so 15 LSTMs (3 fold image models X 5 epochs), and average the predictions. Was a bit concerned the preprocessing filter may lose information, so trained the above again without the preprocessing filter and it did worse; but averaging both pipelines did ever so slightly better. The pipeline from first paragraph above would, for all intensive purposes be just as good as final solution, but as we needed to fix docu pre-stage 2 the two pipelines are in github and final solution. </p> <p><strong>Preprocessing:</strong> - Used Appian’s windowing from dicom images. <a href="https://github.com/darraghdog/rsna/blob/master/eda/window_v1_test.py#L66">Linky</a> - Cut any black space. There were then headrest or machine artifacts in the image making the head much smaller than it could be - see visual above. These were generally thin lines, so used scipy.ndimage minimum_filter to try to wipe those thin lines. <a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainorig.py#L159">Linky</a> - Albumentations as mentioned in visual above. </p> <p><strong>Image classifier</strong> - Resnext101 - did not spend a whole lot of time here as it ran so long. But tested SeResenext and Efficitentnetv0 and they did not work as well. - Extract pre logit layer (GAP layer) at inference time <a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainorig.py#L387">Linky</a> </p> <p><strong>Create Sequences</strong> - Extract metadata from dicoms : <a href="https://github.com/darraghdog/rsna/blob/master/eda/meta_eda_v1.py">Linky</a> - Sequence images on Patient, Study and Series - most sequences were between 24 and 60 images in length. 
<a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainlstmdeltasum.py#L200">Linky</a> </p> <p><strong>LSTM</strong> - Feed in the embeddings in sequence on the above key - Patient, Study and Series - and also concat on the deltas between the current and previous/next embeddings (<code>current-previous embedding</code> and <code>current-next embedding</code>) to give the model knowledge of changes around the image (a condensed sketch of this is given at the end of this post). <a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainlstmdeltasum.py#L133">Linky</a> - LSTM architecture lifted from the winners of the first Toxic competition. This is a beast - the only improvements came from making the hidden layers larger. Oh, we also added the embeddings onto the LSTM output and this helped a bit. <a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainlstmdeltasum.py#L352">Linky</a> - For sequences of different lengths, we padded them to the same length with dummy embeddings of zeros, and then threw those positions away before calculating the loss and saving the predictions. </p> <p><strong>What did not help...</strong> <br> Too long to do justice... mixup on images, mixup on embeddings, augmentations on sequences (partial sequences, reversed sequences), 1d convolutions for sequences (although SeuTao got it working)</p> <p><strong>Given more time</strong> <br> Make the classifier and the LSTM a single end-to-end model. Train everything on stage-2 data; we only got to train two folds of the image model on stage-2 data.</p>
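<p>As referenced above, a condensed sketch of the sequence model (hidden sizes are illustrative and the residual connection is approximated with a linear projection; the real architecture lives in <code>trainlstmdeltasum.py</code>):</p> <pre><code>import torch
import torch.nn as nn

class SequenceHead(nn.Module):
    # per-slice embeddings plus deltas to the previous/next slice go through a bi-LSTM;
    # the embeddings are added back onto the LSTM output before the final classifier
    def __init__(self, emb_dim=2048, hidden=512, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim * 3, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(emb_dim * 3, hidden * 2)
        self.fc = nn.Linear(hidden * 2, n_classes)

    def forward(self, emb):               # emb: (B, seq_len, emb_dim), padded sequences
        prev = torch.roll(emb, 1, dims=1)
        nxt = torch.roll(emb, -1, dims=1)
        x = torch.cat([emb, emb - prev, emb - nxt], dim=-1)
        h, _ = self.lstm(x)
        h = h + self.proj(x)              # residual connection with the (projected) inputs
        return self.fc(h)                 # per-slice logits; padded positions masked in the loss
</code></pre>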
RSNA Intracranial Hemorrhage Detection
11th place solution (with updated code on github)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all. Thank you kaggle and the host team for organizing this interesting competition.</p> <p>The updated source code is available at <a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage">https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage</a> I will probably upload all trained models later.</p> <h3>Windowing</h3> <p>For this challenge, windowing is important to focus on the matter, in this case the brain and the blood. There are good kernels explaining how windowing works.</p> <ul> <li><a href="https://www.kaggle.com/dcstang/see-like-a-radiologist-with-systematic-windowing">See like a Radiologist with Systematic Windowing</a> by <a href="https://www.kaggle.com/dcstang">David Tang</a></li> <li><a href="https://www.kaggle.com/allunia/rsna-ih-detection-eda">RSNA IH Detection - EDA</a> by <a href="https://www.kaggle.com/allunia">Allunia</a></li> </ul> <p>We used three types of windows to focus and assigned them to each of the chennel to construct images on the fly for training.</p> <p>| Channel | Matter | Window Center | Window Width | ----------|--------|---------------|--------------- | 0 | Brain | 40 | 80 | | 1 | Blood/Subdural | 80 | 200 | | 2 | Soft tissues | 40 | 380 |</p> <p>Here is an example before and after applying the windowing. This image is labeled as <code>any intraparenchymal</code> and you can see that windowing helps focusing on the matter. Please check <a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/dicom_windowing.ipynb">windowing.ipynb</a> for the detail.</p> <p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/windowing.png?raw=true" alt="windowing.png"></p> <h3>Classification</h3> <p>This step focuses on pixel data contained in DICOM file not meta data. But still four kind of meta data is used to apply windowing properly. <code>RescaleSlope</code> and <code>RescaleIntercept</code> are used for windowing. <code>BitsStored</code> and <code>PixelRepresentation</code> are used for fixing wrong intercept values which is mentioned in <a href="https://www.kaggle.com/jhoward/cleaning-the-data-for-rapid-prototyping-fastai">Cleaning the data for rapid prototyping</a> written by <a href="https://www.kaggle.com/jhoward">Jeremy Howard</a>. </p> <ul> <li>Two architectures are used. <code>se_resnext50_32x4d</code> and <code>se_resnext101_32x4d</code>. </li> <li>Imagenet pretrained weights from <a href="https://github.com/Cadene/pretrained-models.pytorch">https://github.com/Cadene/pretrained-models.pytorch</a></li> <li>8 folds each. </li> <li>Adding a random number to windowed pixel data as augmentation led to a little better generalization performace. This idea is based on a hunch that CT scanners are probably not perfectly calibrated. </li> <li>Test time augmentations(n=5) are used for predictions.</li> <li>Checkpoints from 2nd and 3rd epochs are used for predictions and then averaged.</li> <li>Final predictions are obtained from simple average of <code>se_resnext50_32x4d</code> and <code>se_resnext101_32x4d</code>. 
</li> </ul> <p><strong>The training result of 0th fold of se_resnext50_32x4d (<a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/conf/model100.py">model100.py</a>)</strong></p> <p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/image/model100_fold0.png?raw=true" alt="model100_fold0.png"></p> <p><strong>The training result of 0th fold of se_resnext101_32x4d (<a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/conf/model110.py">model110.py</a>)</strong></p> <p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/image/model110_fold0.png?raw=true" alt="model110_fold0.png"></p> <p><strong>Logloss for each of the Hemorrhage Types after emsembling (oof)</strong></p> <p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/image/ensembled.png?raw=true" alt="ensembled.png"></p> <p>This ensembled score (0.0642) is similar to the score (0.065) we got on public LB in the first stage before introducing second level model.</p> <h3>Second Level Model</h3> <p>The second level model focuses on a series of CT scan unlike the classification model which focuses on a given image(slice). The main idea is that other slices of a certain slice within the same series can be useful to enhance the predictions of that slice. For example, if both of the adjacent slices of a certain slice are inferred as <code>epidural</code>, the middle of the slice is most likey <code>epidural</code>. This kind of relationships can trained using something like LightGBM. The train data can be constructed as follows,</p> <p>For example, in case of training <code>epidural</code> based on oof predictions, you can construct a record like this,</p> <p><code> prediction of the given slice, left1, right1, left2, right2, left3, right3, ..., </code></p> <ul> <li><code>left1</code> indicates the prediction of the first slice to the left from the given slice.</li> <li><code>right2</code> indicates the prediction of the second slice to the right from the given slice.</li> </ul> <p>We included <code>left1</code> to <code>left9</code> and <code>right1</code> to <code>right9</code> for each of the slice. <code>left1</code> and <code>right1</code> are unsurprisingly the most useful features among all other slices except the given slice based on feature importance(lightgbm gain). Some distant slices such as <code>left9</code> or <code>right9</code> are not as important as closer slices to the given slice but still somewhat useful.</p> <p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/image/secondlevel.png?raw=true" alt="secondlevel.png"></p> <ul> <li>Final predictions are obtained by simply averaging predictions from LightGBM, Catboost and XGB.</li> <li>The 1st stage public LB score was improved from 0.65 to 0.57 by this.</li> </ul> <p>Thank you for reading!</p>
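<p>For concreteness, here is a small sketch of building the <code>left1</code>..<code>left9</code> / <code>right1</code>..<code>right9</code> features from the out-of-fold predictions before training LightGBM/CatBoost/XGB (the column names are assumptions):</p> <pre><code>import pandas as pd

def neighbour_features(df, label="epidural", k=9):
    # df columns (assumed): SeriesInstanceUID, ImagePositionPatient2 and an oof
    # prediction column named after the label
    df = df.sort_values(["SeriesInstanceUID", "ImagePositionPatient2"]).copy()
    g = df.groupby("SeriesInstanceUID")[label]
    for i in range(1, k + 1):
        df["left%d" % i] = g.shift(i)      # i-th slice to the left (NaN at series edges)
        df["right%d" % i] = g.shift(-i)    # i-th slice to the right
    return df
</code></pre>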
RSNA Intracranial Hemorrhage Detection
5th place solution (with code).
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all winners, </p> <p>Congrats to my teammate <a href="/tarobxl">@tarobxl</a> achieving GM tier and <a href="/anjum48">@anjum48</a> for his master tier. </p> <p>On behalf of the team, I would like to make the writeup. </p> <h1>Image Preprocessing</h1> <p>We have three types of preprocessing data. <br> 1. Imaging with multiple windows. <br> We use three windows to construct RGB image. Each channel is corresponded to a window. <code> 'brain': [40, 80], 'bone': [600, 2800], 'subdual': [75, 215] </code> </p> <ol> <li><p>Imaging with multiple windows then crop. <br> Same as (1), we crop and keep the only informative part.</p></li> <li><p>Imaging with spatially adjacent. <br> We use only one window [40, 80] for preprocessing. To construct RGB images, we use metadata to know the spaitally adjacent. Let say to construct RGB of slice St, we take: <br> R = St-1, G = St, B = St+1. </p> <p>Finally, we crop and keep only informative parts as same as (2). Please refer this kernel for more detail: <a href="https://www.kaggle.com/anjum48/preprocessing-adjacent-images-and-cropping">https://www.kaggle.com/anjum48/preprocessing-adjacent-images-and-cropping</a></p></li> </ol> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1938879%2F4b1c04d72e5d1359e7d41a3a03bac540%2Fdata_preprocessing.png?generation=1573699900426447&amp;alt=media" alt=""></p> <h1>Data Preprocessing</h1> <p>First, we remove the overlapped patients between train and test. This part may be the reason for the shakeup since we estimate that the shakeup score is in a range of 0.001 - 0.002.</p> <p>In each fold, we do random sampling such that the number of positive patients ar balanced to the number of negative patients. This step helps to have the correlation between CV and LB, and stable as well. </p> <h1>Modeling</h1> <p>We train 5 Folds splitted by patients. The models and their performance on stage 2: </p> <p>|No. | Model | Data | Before PP | After PP| |---|---------- |:-------------:|-------------------------|---:| |1. | Resnet18 | (1) | 0.060 | 0.054 | |2. | Resnet34 | (1) | - | - | |3. | Resnet50 | (1) | 0.058 | 0.052 | |4. | Resnet50 | (3) | 0.054 | 0.051 | |5. | Densenet169 | (2) | 0.055 | 0.049 | |6. | InceptionV3 + Deepsupervision | (1) | 0.060 | 0.053 | |7. | EfficientNet-B0 | (3) | 0.054 | 0.051 | |8. | EfficientNet-B3 | (2) | 0.055 | 0.050 | |9. | EfficientNet-B5 | (3) | 0.048 | 0.048 | </p> <p>PP =Post-processing. </p> <p>We have three pipelines, the following is mine which is used to train the model <code>No. 1,2,3,4,5</code>. </p> <ul> <li>Optimizer: AdamW </li> <li>Image size: 512x512 </li> <li><p>Stages: </p> <ul><li><p>Warmup: Freeze the backbone, train the FC only. </p> <ul><li>LR: 0.001</li> <li>num_epochs: 3 </li></ul></li> <li><p>Warmup: Unfreeze the backbone, train all the model. </p> <ul><li>LR: 0.0001</li> <li>num_epochs: 20 </li> <li>scheduler: ReduceLROnPlateau, patience = 0. </li> <li>EarlyStoppingCallback: patience = 3. 
</li></ul></li></ul></li> <li><p>Augmentations: <code>Resize(*image_size),
HorizontalFlip(),
OneOf([
    ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
    GridDistortion(),
    OpticalDistortion(distort_limit=2, shift_limit=0.5),
], p=0.3),
ShiftScaleRotate(shift_limit=0.05, scale_limit=0.1, rotate_limit=10),
</code> </p></li> <li><p>TTA: Normal + HFlip. </p></li> </ul> <p>With this pipeline, training finishes at around 8-10 epochs. The deep models (SEResNeXt50, ResNet101, etc.) do not work well. Training for more epochs (up to 25) leads to overfitting. </p> <h1>Post-processing</h1> <p>We leverage metadata and use H2O to build a model for post-processing. More details will be shared by <a href="/tarobxl">@tarobxl</a>. </p> <h1>Stacking</h1> <p>First, we do post-processing on each model's predictions. <br> Second, we use the stacking pipeline designed by magician <a href="/mathormad">@mathormad</a>. Please upvote this topic: <a href="https://www.kaggle.com/c/imaterialist-challenge-fashion-2018/discussion/57934">https://www.kaggle.com/c/imaterialist-challenge-fashion-2018/discussion/57934</a> </p> <p>Update: <br> Stacking pipeline is shared at: <br> <a href="https://www.kaggle.com/mathormad/5th-place-solution-stacking-pipeline">https://www.kaggle.com/mathormad/5th-place-solution-stacking-pipeline</a> <br> Don't hesitate to upvote it.</p> <h1>Code</h1> <p>My pipeline code is published at: <br> <a href="https://github.com/ngxbac/Kaggle-RSNA">https://github.com/ngxbac/Kaggle-RSNA</a> <br> The model checkpoints, graphs, and training processes are recorded with wandb. <a href="https://app.wandb.ai/ngxbac/Kaggle-RSNA">https://app.wandb.ai/ngxbac/Kaggle-RSNA</a> </p> <p><a href="/anjum48">@anjum48</a> 's pipeline: <a href="https://github.com/Anjum48/rsna-ich">https://github.com/Anjum48/rsna-ich</a> </p> <p><a href="/mathormad">@mathormad</a>'s pipeline to train InceptionV3 + Deepsupervision: <br> <a href="https://github.com/triducnguyentang/RSNA">https://github.com/triducnguyentang/RSNA</a></p> <p><a href="/tarobxl">@tarobxl</a> 's post-processing code: <a href="https://github.com/tiendzung-le/Kaggle-RSNA-5th-place-Solution">https://github.com/tiendzung-le/Kaggle-RSNA-5th-place-Solution</a></p>
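<p>As a reference for preprocessing type (1) above, a minimal sketch of turning one DICOM slice into a 3-channel image with one window per channel (helper names are mine; the window values are the ones listed above):</p> <pre><code>import numpy as np

WINDOWS = {"brain": (40, 80), "bone": (600, 2800), "subdural": (75, 215)}  # (center, width)

def apply_window(hu, center, width):
    low, high = center - width / 2, center + width / 2
    return np.clip((hu - low) / (high - low), 0, 1)

def three_window_rgb(dicom):
    # `dicom` is a pydicom dataset; convert raw pixels to Hounsfield units first
    hu = dicom.pixel_array * float(dicom.RescaleSlope) + float(dicom.RescaleIntercept)
    channels = [apply_window(hu, c, w) for c, w in WINDOWS.values()]
    return (np.stack(channels, axis=-1) * 255).astype(np.uint8)
</code></pre>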
RSNA Intracranial Hemorrhage Detection
13th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and RSNA for hosting such an interesting competition. Thanks to the whole team @andy2709 @moewie94 @lego1st @nguyenbadung for the great collaboration.</p> <p>The code is publicly available at <a href="https://github.com/dattran2346/rsna-2019">https://github.com/dattran2346/rsna-2019</a></p> <h2>1. Preprocessing</h2> <h3>Windowing</h3> <ul> <li>We use various combinations of the brain, subdural, bony, and default windows and stack them to create a normal 3-channel image.</li> </ul> <p><code>ct_windows = {
    'brain': {'L': 40, 'W': 80},
    'subdural': {'L': 75, 'W': 215},
    'bony': {'L': 600, 'W': 2800},
    'default': {
        # from metadata
    }
}
</code></p> <h3>Window setting optimization</h3> <ul> <li>The idea is to use a 1x1 convolution and a sigmoid activation to learn relevant windows, with the weights initialized to the default brain, subdural and bony windows (a sketch is given at the end of this post). @andy2709 tried this method and noticed that the final learned windows are very close to the default windows. </li> </ul> <h3>Data split</h3> <ul> <li>We split the dataset by both patient id and study id. I, @andy2709 and @lego1st trained the models on the patient split, while @nguyenbadung and @moewie94 trained on the study split.</li> </ul> <h2>2. Model</h2> <h3>2D Model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3342872%2F1f61df62cbecc1d1777aeeb6069c19ae%2F2dmodel.png?generation=1574771438363883&amp;alt=media" alt=""></p> <p>We applied 2-stage training here: - In the 1st stage, just normal CNN training; backbones are EfficientNetB2-B5, SEResNeXt50, SEResNeXt101. - In the 2nd stage, we use 5 consecutive slices’ outputs and apply a simple CNN to predict the center slice.</p> <h3>3D Model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3342872%2F45cd1a578fa954caba1551680f907b69%2F3dmodels.png?generation=1574771686608008&amp;alt=media" alt=""></p> <ul> <li>We use a normal backbone as the encoder and a bi-directional LSTM with a FC layer as the decoder; the model was trained end-to-end.</li> <li>For each study, we select 10 random slices (contiguous and not) in order and put them through the network during training. For inference, all slices are considered.</li> </ul> <h2>3. Stacking</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3342872%2F3e4aba9020f00b35b66a03f103057fc0%2Fstacking.png?generation=1574771786432271&amp;alt=media" alt=""></p> <ul> <li>Concatenate predictions from all models (split by both study-id and patient-id) and build a simple neural network model:</li> </ul> <p><code>model = nn.Sequential(
    nn.Linear(input_dim, 1024),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(1024, 1024),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(1024, 6),)
</code></p> <ul> <li>Average predictions from the 2 types of models: study id split and patient id split.</li> </ul> <h2>4. Summary</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3342872%2F9d5b6d677fa3626ac13f6299e2b6d094%2FScreenshot%20from%202019-11-26%2019-46-08.png?generation=1574772402149756&amp;alt=media" alt=""></p>
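<p>As referenced in the window setting optimization section, here is a sketch of that idea: a 1x1 convolution plus sigmoid whose output channels are initialised to approximate the brain, subdural and bony windows (the steepness constant and the exact parameterisation are assumptions):</p> <pre><code>import torch
import torch.nn as nn

class WindowOptimizer(nn.Module):
    # each output channel learns one window; initialised so that the sigmoid
    # saturates roughly at the edges of the default window
    def __init__(self, windows=((40, 80), (75, 215), (600, 2800)), steepness=8.0):
        super().__init__()
        self.conv = nn.Conv2d(1, len(windows), kernel_size=1)
        with torch.no_grad():
            for i, (center, width) in enumerate(windows):
                w = steepness / width            # slope of the sigmoid over the window width
                self.conv.weight[i] = w
                self.conv.bias[i] = -w * center  # centre the sigmoid on the window level

    def forward(self, hu):                       # hu: (B, 1, H, W) in Hounsfield units
        return torch.sigmoid(self.conv(hu))
</code></pre>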
RSNA Intracranial Hemorrhage Detection
#9 Solution with CODE - Team BIG HEAD: Training model bonanza, TTA and L2 stacking.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>CODE UPDATE</strong></p> <p>Code is here: <a href="http://github.com/antorsae/rsna-intracranial-hemorrhage-detection-team-bighead">http://github.com/antorsae/rsna-intracranial-hemorrhage-detection-team-bighead</a></p> <p><strong>SOLUTION OVERVIEW</strong></p> <p>Our solution consists of pretty weak models (CV 0.07x in most of them) using L2 stacking (5 folds) trained with both <em>xgboost</em> and <em>catboost</em> and ensembled via averaging. </p> <p>We trained ~50 models (10 architectures/losses * 5 folds) in total.</p> <p>The following table summarizes the architectures, folds, and GPUs used to train each model:</p> <p><img src="https://i.imgur.com/pNaJwFP.png" alt=""></p> <p><strong>Fastai v1 3-slice networks: standard window and loss</strong></p> <p>Architectures not highlighted (first five) were implemented using fastai v1, taking 3 consecutive slices (512x512) of a study and feeding them to the vanilla architecture with a fully connected head that outputs 6*3 = 18 logits. Training is done for 15 epochs using the 1-cycle policy. Batch size is allocated dynamically, maximizing GPU memory usage. We use random rotations and flips as augmentation.</p> <p>The loss function is the weighted average of the 3 slices, giving more importance to the center slice:</p> <pre><code>W_LOSS = 0.1
GENERAL_WEIGHTS = FloatTensor([2., 1., 1., 1., 1., 1.])
general_weights_3slices = torch.cat([GENERAL_WEIGHTS * W_LOSS, GENERAL_WEIGHTS, GENERAL_WEIGHTS * W_LOSS])

def weighted_loss(pred:Tensor,targ:Tensor)-&gt;Tensor:
    return F.binary_cross_entropy_with_logits(pred, targ.float(), general_weights_3slices.to(device=pred.device))
</code></pre> <p><strong>Fastai v2 3-slice networks: subdural window and subdural focused loss</strong></p> <p>We decided to use fastai v2 primarily because augmentations are done on the GPU, and a few of the computers we have had CPU bottlenecks doing augmentations, which is no longer the case with fastai v2.</p> <p>Architectures highlighted in red were implemented as above with the following differences: - Fastai v2 was used: it was a learning process and still has rough edges (some of them we realized after stage 1 finished and we could NOT change code). - Window centered at 100 and width of 254 (to take advantage of the range of <code>uint8</code>) - Loss weighted on subdural more (10x) than other types:</p> <pre><code>SUBDURAL_WEIGHTS = FloatTensor([.8, .4, .4, .4, .4, 4.])
subdural_weights_3slices = torch.cat([SUBDURAL_WEIGHTS * W_LOSS, SUBDURAL_WEIGHTS, SUBDURAL_WEIGHTS * W_LOSS])

def subdural_loss(pred:Tensor,targ:Tensor)-&gt;Tensor:
    return F.binary_cross_entropy_with_logits(pred, targ.float(), subdural_weights_3slices.to(device=pred.device))
</code></pre> <p><strong>Input to L2 Models</strong></p> <p>Once the models are trained we run OOF predictions using TTA with 10 repetitions, and we use the mean and std of those 10 TTA predictions for each architecture as input to both <em>xgboost</em> and <em>catboost</em>, both for the central and surrounding slices (a small sketch of this input is given at the end of this post).</p> <p>Two L2 models are trained: <em>xgboost</em> and <em>catboost</em>, and then simply averaged. 
One submission we did with the fastai v1 models only (they finished sooner) and the other using both.</p> <p><strong>Things we would have done differently</strong></p> <ul> <li>Class-aware sampling (balance dataset)</li> <li>Pseudo-label training</li> <li>Fastai v2 head is different than v1 for vision models, the v1 head works better.</li> <li>We used a pretty high <em>eps</em> for <em>Adam</em> optimizer in v2, defaults (in v1) work better.</li> <li>Learnable window</li> <li>TTA with zoom, crops and cut-out</li> <li>Add extra channel with distance to center (similar to coord-conv but just radius to center) to make network location-aware.</li> <li>L2 model using lightgbm too and averaging 5 folds of L2 (we trained with 4 folds and hence we did not use all training set for L2)</li> </ul> <p>...and the mandatory meme as a tribute to our team name: <img src="https://i.imgur.com/1bz50NK.png" alt=""></p>
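<p>To close out the technical part, here is a minimal, illustrative sketch of the L2 stacking step described under "Input to L2 Models": out-of-fold TTA statistics (mean and std of the repeated predictions) become features for per-class gradient-boosted second-level models. All shapes, feature layout and hyperparameters below are assumptions for illustration only, and the random arrays stand in for real OOF features and labels - this is not the team's actual configuration.</p> <pre><code># Minimal sketch of the L2 stacking step (illustrative assumptions only).
import numpy as np
from xgboost import XGBClassifier

CLASSES = ["any", "epidural", "intraparenchymal",
           "intraventricular", "subarachnoid", "subdural"]

n_slices, n_archs = 10000, 10
n_feats = n_archs * 6 * 2 * 3          # archs x classes x (mean, std) x 3 slices (assumed layout)

rng = np.random.default_rng(0)
X_train = rng.random((n_slices, n_feats))      # stand-in for real OOF TTA statistics
y_train = rng.integers(0, 2, (n_slices, 6))    # stand-in for real labels
X_test = rng.random((2000, n_feats))

# One binary booster per class; the write-up averages xgboost with catboost,
# which would follow the same pattern with catboost.CatBoostClassifier.
test_pred = np.zeros((len(X_test), 6))
for i, name in enumerate(CLASSES):
    clf = XGBClassifier(n_estimators=300, max_depth=4,
                        learning_rate=0.05, n_jobs=-1)
    clf.fit(X_train, y_train[:, i])
    test_pred[:, i] = clf.predict_proba(X_test)[:, 1]
</code></pre> <p>In the actual pipeline the OOF features are built per fold, so the L2 model never sees predictions from a network that was trained on the same rows; the two L2 models are then simply averaged as described above.</p>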
RSNA Intracranial Hemorrhage Detection
8th place solution + code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I'd like to thank the whole team for the great collaboration: @meanshift, @tgilewicz, @nordberdt, @dmytropoplavskiy, it was a pleasure working with you on this competition. Following is a summary of our solution and experiments.</p> <p>Code: <a href="https://github.com/ambrzeski/kaggle-rsna-2019">https://github.com/ambrzeski/kaggle-rsna-2019</a></p> <h1>Preprocessing</h1> <h2>Gantry tilt correction</h2> <p>In order to reduce the impact of radiation on patient’s eyes during head CT, sometimes scans are performed with a tilted gantry. Such studies have slices not aligned properly in the y axis, which causes the distortion in the 3D volume, visible on the sagittal view. To correct the distortion, slices must be shifted in the y axis with a shear transform. The angle of the shear can be determined from “Image Orientation (Patient)” DICOM field.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2F4d53a9260f20571c9869308cff6ff522%2Ftmp.png?generation=1574347411118888&amp;alt=media" alt=""> <em>Saggital view before and after gantry tilt correction.</em></p> <p>We observed that gantry tilt is quite common in this dataset, occurring in more than 50% of studies in the train set and 90% in stage1 test set. We haven’t measured the impact of gantry tilt correction on models’ performance. Supposedly it doesn’t matter too much when inputting slices as channels to 2D model, but when using wider 3D context it could make a difference, especially considering the fact that the angle of the shear varies among studies.</p> <h2>Windowing</h2> <p>First, HU values were clipped to [-400, 1000] range. Then, instead of using fixed size windows, we apply a non-linear transform, which stretches out most valuable HU ranges and compresses less interesting ranges. The exact shape of the transform function is manually designed by us. We’ve tried modeling this function as a cumulative distribution function of histogram of pixel values in the dataset, but setting the values by hand allowed us to enhance specific ranges containing hemorrhages, and consequently worked slightly better.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2F73aee48d3ffbd4e946fddb9ffed245f6%2Fcdf.png?generation=1574347839604990&amp;alt=media" alt=""> <em>Mapping from HU values to normalized values</em></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2Fbfd8f13a2fad296ce7b15bc421a23b0a%2Ftmp2.png?generation=1574347948527157&amp;alt=media" alt=""> <em>From left to right: image clipped to [-400, 1000] range, brain window (L:40, W:80), non-linear transform</em></p> <p>We’ve also tried fixed and learnable windows, but our non-linear transform gave the best results, while also being the most convenient to use.</p> <h1>Models</h1> <p>All of our models are various variants of 2D CNNs with consequent slices being fed to the network as image channels, predicting classes for the middle slice only. Models were trained on 5 folds with no patient overlap between folds. As an optimizer, we used RAdam. 
For data augmentation we used standard spatial transformations. Any transforms messing with pixel values resulted in a decrease in performance.</p> <p>For some of the models, slices (or groups of slices) were forwarded separately through the network backbone and their feature maps concatenated and combined by a convolutional layer. Since such an approach with 5 slices would increase training time 5x, for the first 4 epochs the model was trained on the current slice only, with the weights of the combining 3D convolution that relate to the other slices set to 0. For two more epochs, the model was trained on all 5 input slices. Combining model outputs improved results more (by around 0.006-0.01) than feeding multiple inputs to a single model.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2F408f11eb7f3aed5077c61e42b9352bde%2Fd1.png?generation=1574348143538768&amp;alt=media" alt=""></p> <p>A summary of all the models in the ensemble is presented below. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2Fec45c41cb7847d4f71d7291a026948fd%2FScreenshot_2019-11-21_16-06-37.png?generation=1574348844202508&amp;alt=media" alt=""> *in stage 2 we had a bug in validation, which may cause the results to be slightly inaccurate</p> <p>As many have noticed, scores on the final leaderboard are significantly lower than on the stage 1 leaderboard. We also observed a large divergence between model ranking on local cross-validation and the private leaderboard. Specifically, simpler models, like resnet18, seem to perform better on the private leaderboard compared to more complex ones.</p> <h1>Segmentation masks</h1> <p>We hand-labeled 196 studies with segmentation masks. For the labeling we picked studies with the highest log loss on out-of-fold predictions. We’ve tried multi-task learning, and using pre-computed masks as attention masks or as additional features for a classifier, but all of these approaches failed to provide any significant boost to the log loss metric. Anyway, we included one of the models trained with masks in the final ensemble, hoping to at least increase the variety within the ensemble. The segmentation-based model performed slightly better on other metrics like F1.</p> <p>The model was trained to do both classification and segmentation (for labeled samples), with segmentation samples oversampled for the initial few epochs and then switched back to the original sample distribution.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2F3e2561e87c2d3b3c4f9b9ccdc6408b0f%2Fd2.png?generation=1574349217941420&amp;alt=media" alt=""></p> <h1>TTA, ensembling and second-level model</h1> <p>For each model we perform 5-fold ensembling and test-time augmentations. Then, final predictions are obtained using an L2 linear model, which takes as input the 1st-level predictions for 5 consecutive slices from each model and returns the final predictions for the middle slice. The L2 model gives a very slight improvement (0.0002 difference from simple averaging on the LB).</p> <h1>What didn’t work</h1> <ul> <li>We’ve tried transfer learning with 3D CNNs from <a href="https://github.com/Tencent/MedicalNet">https://github.com/Tencent/MedicalNet</a>, but training took too much time and we had to abandon this idea; it still might be worth trying given more time</li> <li>Experiments with segmentation masks</li> <li>Class balancing</li> <li>Multiple trainable windows</li> </ul>
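<p>As a rough illustration of the non-linear windowing idea in this write-up, the sketch below clips HU values to [-400, 1000] and maps them through a hand-designed monotonic piecewise-linear curve with <code>np.interp</code>. The control points here are invented for illustration; the team's actual curve was tuned by hand and is not given in the post.</p> <pre><code># Sketch of a hand-designed non-linear windowing transform (illustrative control points).
import numpy as np

# x: HU values after clipping, y: normalized output in [0, 1]. Values are placeholders.
HU_POINTS   = np.array([-400, -100,   0,  40,  80, 300, 1000], dtype=np.float32)
NORM_POINTS = np.array([ 0.0,  0.05, 0.1, 0.5, 0.8, 0.9,  1.0], dtype=np.float32)

def window_nonlinear(hu_image):
    """Clip HU to [-400, 1000] and map through a monotonic piecewise-linear curve."""
    clipped = np.clip(hu_image, -400, 1000)
    return np.interp(clipped, HU_POINTS, NORM_POINTS).astype(np.float32)

# Example: a fake 512x512 slice in Hounsfield units.
slice_hu = np.random.randint(-1000, 2000, size=(512, 512))
slice_norm = window_nonlinear(slice_hu)   # values in [0, 1], ready for the CNN
</code></pre> <p>Because the mapping is fixed and monotonic, it can be applied once during preprocessing and cached, just like a conventional window.</p>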
RSNA Intracranial Hemorrhage Detection
7th place outline
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'll keep this short because there is almost no novelty in my solution.</p> <p><strong>Overall Strategy:</strong></p> <ol> <li>Train an image-level CNN and save its GAP features to disk.</li> <li>Recover the original CT sequence by sorting on the z-position in the metadata, and feed the saved GAP features into a scan (study)-level RNN model.</li> </ol> <p>This strategy is inspired by <a href="https://rd.springer.com/content/pdf/10.1007%2Fs00330-019-06163-2.pdf">https://rd.springer.com/content/pdf/10.1007%2Fs00330-019-06163-2.pdf</a></p> <p><strong>Preprocessing for CNN:</strong> I used Appian's windowing. I spent some effort tweaking it, but the results were all similar.</p> <p><strong>Augmentation for CNN:</strong> Heavy augmentation including crop and resize back, affine (360 degree rotation), contrast and brightness, gamma correction, blurring and sharpening, mirroring, optical distortion, grid distortion, elastic transform ...</p> <p><strong>CNN models:</strong></p> <ul> <li>efficientnet_b5</li> <li>efficientnet_b6</li> <li>inception_resnet_v2</li> <li>inception_v4</li> <li>senet154</li> <li>seresnext50</li> <li>seresnext101</li> </ul> <p>In total 7 models, all trained on a different 80-20 training-validation split. The input resolution varied between 384x384 and 512x512 depending on the size of the model.</p> <p><strong>RNN models:</strong> Two bidirectional GRU layers. Length of sequence fixed to 72. Padding and loss masking used.</p>
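<p>A rough PyTorch sketch of the scan-level stage is shown below, assuming 2048-dimensional GAP features, a hidden size of 256, and a simple mask-weighted BCE. The write-up only specifies two bidirectional GRU layers, a fixed sequence length of 72, and padding with loss masking, so everything else here is an assumption.</p> <pre><code># Sketch of the scan-level sequence model (stage 2), under the assumptions noted above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScanGRU(nn.Module):
    def __init__(self, feat_dim=2048, hidden=256, n_classes=6):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):            # x: (batch, 72, feat_dim) GAP features sorted by z-position
        out, _ = self.gru(x)
        return self.head(out)        # (batch, 72, 6): one prediction per slice

def masked_bce(logits, targets, mask):
    """BCE averaged only over real (non-padded) slices."""
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * mask.unsqueeze(-1)).sum() / (mask.sum() * logits.size(-1))

# Toy batch: 4 scans padded/truncated to 72 slices of 2048-d CNN GAP features.
feats  = torch.randn(4, 72, 2048)
labels = torch.randint(0, 2, (4, 72, 6)).float()
mask   = torch.ones(4, 72)           # 0 where a slice is padding
model  = ScanGRU()
loss   = masked_bce(model(feats), labels, mask)
loss.backward()
</code></pre>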
RSNA Intracranial Hemorrhage Detection
A note from a 46th place finisher (The Purpose)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This post is <strong>non-technical</strong>. This post is <strong>in memoriam</strong> of a friend (Π.Δ) I lost 2 summers ago to an intracranial aneurysm at the age of 28. This is a reminder of why we build cutting edge technology, why we invest so much time here on Kaggle or in our day jobs, to create great algorithms, to learn more and more, to improve skills. All these hours spent have to have a <strong>purpose</strong> to stand in time. And the greatest of them is to <strong>serve human lives</strong>. Doctors fight for it using medical knowledge and experience, radiologists do as well, and <strong>we fight with keyboards</strong> and <strong>ML</strong> :) !</p> <p>Thanks for reading my post, neongen</p>
RSNA Intracranial Hemorrhage Detection
16th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners and thanks to Kaggle and the organizers for opening this learning space.</p> <p>We started relatively late, but we had a good starting point with the code that <a href="/appian">@appian</a> shared. Great thanks to <a href="/appian">@appian</a> </p> <p>Our overall procedure is as follows.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2276152%2F770038c6cb65c7774c82ff8b1e1c5877%2F17th_solution_figure2.png?generation=1573814036534212&amp;alt=media" alt=""></p> <h3>In step 1</h3> <ul> <li>Basic training is performed by considering each image as an independent input. <ul><li>input shape : (batch_size, 512, 512, 3) <ul><li>the 4th axis (3) means 3 channels with multiple windowing parameters</li></ul></li> <li>output shape : (batch_size, 6)</li> <li>CNN Architectures : SE-ResNeXt-101 and EfficientNet-B6</li> <li>loss : weighted log loss (weights = [2/7, 1/7, 1/7, 1/7, 1/7, 1/7])</li> <li>optimizer : Adam (with learning rate from 1e-4 to 1e-5)</li> <li>sampling : random sampling or location-based sampling (sampling middle slices more from the image series at patient level)</li> <li>5-fold or 7-fold training</li></ul></li> </ul> <h3>In step 2</h3> <ul> <li>We wanted to calibrate the output distributions considering the relations between labels and between adjacent image slices, so we treated the outputs for a patient's images as a signal and trained a model on it.</li> <li>Output distributions are extracted from the validation set. (For example, 5 models from 5 folds can make up the full training dataset.)</li> <li>If about 640,000 images are used in step 1, about 19,500 output signals (the number of patients) are used in step 2. <ul><li>input shape : (batch_size, None, 6, 1) <ul><li>axis 1 (None) is the length of the signal (the number of slices)</li> <li>axis 2 (6) is the number of labels</li></ul></li> <li>output shape : (batch_size, None, 6, 1)</li> <li>CNN Architecture : a simple CNN model with 4 convolution layers using a 5x6 kernel</li> <li>loss : weighted log loss (weights = [2/7, 1/7, 1/7, 1/7, 1/7, 1/7])</li> <li>optimizer : Adam (with learning rate 1e-5)</li> <li>5-fold training</li></ul></li> </ul> <pre><code>_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, None, 6, 1)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, None, 6, 64)       1984
_________________________________________________________________
conv2d_2 (Conv2D)            (None, None, 6, 64)       122944
_________________________________________________________________
conv2d_3 (Conv2D)            (None, None, 6, 64)       122944
_________________________________________________________________
conv2d_4 (Conv2D)            (None, None, 6, 64)       122944
_________________________________________________________________
conv2d_5 (Conv2D)            (None, None, 6, 1)        65
=================================================================
</code></pre> <p>We also thought about handling sequential information at the image level, but the deadline was short, so the process was split into two steps and output signals with relatively small dimensions were used as the next best thing.</p> <p><strong>The results are as follows.</strong></p> <p><strong>step 1 result : 0.05425 (private score)</strong> <br> <strong>step 2 result : 0.04793 (private score)</strong></p> <p><strong>We think that the core processing of our team, like other teams, was also to reflect sequential information.</strong></p>
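<p>For reference, the layer summary above can be reproduced (same output shapes and parameter counts: 1,984 / 122,944 / 65) with a small Keras model like the sketch below. The (5, 6) kernel with 'same' padding follows from the parameter counts; the ReLU activations, the sigmoid output and the plain binary cross-entropy used here (instead of the weighted log loss) are simplifying assumptions.</p> <pre><code># Reconstruction of the step-2 "signal" CNN from the parameter counts above (assumptions noted).
from tensorflow.keras import layers, models

def build_signal_cnn():
    inp = layers.Input(shape=(None, 6, 1))        # (num_slices, 6 labels, 1)
    x = layers.Conv2D(64, (5, 6), padding="same", activation="relu")(inp)       # 1,984 params
    for _ in range(3):
        x = layers.Conv2D(64, (5, 6), padding="same", activation="relu")(x)     # 122,944 each
    out = layers.Conv2D(1, (1, 1), activation="sigmoid")(x)                     # 65 params
    return models.Model(inp, out)

model = build_signal_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()   # matches the layer/parameter table in the write-up
</code></pre>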
RSNA Intracranial Hemorrhage Detection
15th place solution (0.047) --- Close but no Cigar!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>I see that many of the top 10 have already posted their solutions, but we would still like to share ours!</strong></p> <p><strong>One reason is</strong> that the starting point of our [best] solution really was the <a href="https://www.kaggle.com/akensert/inceptionv3-prev-resnet50-keras-baseline-model">public kernel</a>. What was added, thanks to my team mates, was albumentations augmentation (which would be put in <code>_read()</code>), as well as a bigger input size and more epochs of training (these are pretty obvious). There were also some alterations to the learning rate schedules, but that's pretty much it!</p> <p>Then, as the "next level", we would use [about] a dozen models (including B2-B6, Xception and InceptionResnetV2 in addition to InceptionV3) to ensemble. ResNe(X)ts didn't work for us (or rather, not for me), so they weren't included in this particular submission.</p> <p>Below is the method to squeeze out as much as we could from the individually trained models that we had (scores of ~0.068-0.074), which would eventually get us to 0.058 (public LB).</p> <p>Four (or three) levels of averages (wisdom of the crowds at work!):</p> <p><em>We had 3 separate ensembles: (I) mix of architectures (90/10 train/val split), (II) 6-CV B4, and (III) 6-CV InceptionResNetV2.</em></p> <p><strong>For each ensemble in ensembles:</strong> <strong>(1).</strong> All models' individual predictions are a weighted average of each epoch's prediction (a.k.a. snapshot predictions). The weights are optimized via scipy.optimize.minimize, resulting in 1 prediction matrix (M x N_classes) per model. <strong>(2).</strong> Now each model's predictions are also averaged (but without optimized weights, because we don't have a validation set here). Now we are at ~0.065 on the public LB. <strong>(3).</strong> Every possible validation prediction data point (from <strong>(1)</strong>) is used to train two sequence models (2-layered LSTM and 2-layered GRU; see figure (credits to my teammate <a href="/ratthachat">@ratthachat</a> :-))), which are then used to "correct"/predict the test set predictions. Target Y would be the true labels (N_classes=6) and the input would be the sequence of predictions for each StudyID's slices (sorted from low ImagePosition3 to high ImagePosition3) and including the floating point value of ImagePosition3 (N_slices=60, N_features=6+1). This step really boosts our log loss: we now advanced from 0.065 -&gt; 0.059 just like that!
Each Sequence model (RNN) was trained for 35 epochs, and like <strong>(1)</strong>, we used scipy.optimize.minimize to compute a weighted average of all 35 epochs' predictions (so yet again using the "average snapshot prediction method"; seems to work really well for us) <strong>end For :-)</strong> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2183023%2F6368f8a9ed88a7bbd1dcc92469d4889a%2Fgru_2.jpg?generation=1573727710325475&amp;alt=media" alt=""></p> <p>We now had 3 ensemble predictions (3 submission files), which were simply averaged and submitted to Kaggle!</p> <p><strong>So to summarize</strong>, we used pretty standard models with lots of averages and good post processing. </p> <p>Unfortunately (for us) it didn't go all the way to the top 10, but we're still extremely grateful for this competition! You know, creating friendships and all that!</p> <p>Also, good job everyone! :-)</p>
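<p>The "average snapshot prediction" trick used in (1) and in the RNN stage boils down to finding a convex combination of per-epoch predictions that minimizes validation log loss. Below is a minimal sketch with scipy.optimize.minimize; the shapes, the random placeholder data and the softmax re-parameterization of the weights are illustrative assumptions, not the team's exact code.</p> <pre><code># Sketch of optimizing snapshot-ensemble weights with scipy.optimize.minimize (assumed shapes).
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import log_loss

n_epochs, n_samples, n_classes = 5, 1000, 6
rng = np.random.default_rng(42)
val_preds = rng.random((n_epochs, n_samples, n_classes))   # stand-in per-epoch val predictions
val_true = rng.integers(0, 2, (n_samples, n_classes))      # stand-in true labels

def loss_for_weights(raw):
    w = np.exp(raw) / np.exp(raw).sum()           # softmax keeps weights positive, summing to 1
    blended = np.tensordot(w, val_preds, axes=1)  # weighted average over epochs -> (samples, classes)
    # mean column-wise binary log loss
    return np.mean([log_loss(val_true[:, c], blended[:, c], labels=[0, 1])
                    for c in range(n_classes)])

res = minimize(loss_for_weights, x0=np.zeros(n_epochs), method="Nelder-Mead")
weights = np.exp(res.x) / np.exp(res.x).sum()
print("optimized snapshot weights:", weights.round(3))
</code></pre> <p>The same routine can be reused for the 35 RNN epochs mentioned above, with the per-epoch predictions stacked along the first axis.</p>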
Lyft 3D Object Detection for Autonomous Vehicles
3rd place solution [0.182 Private LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft 3D Object Detection for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and Kaggle for this competition!</p> <p>Congratulations to all winners!</p> <p>Here is my solution:</p> <h3>Dataset &amp; Pre-processing</h3> <ul> <li>No external data; only this competition's point clouds were used.</li> <li>The animal and emergency vehicle classes were not used.</li> <li>Objects with fewer than 5 points were ignored.</li> <li>The detection area was 100m x 75m for the vehicle classes (car, other_vehicle, truck and bus) and 100m x 50m for the small-object classes (pedestrian, bicycle and motorcycle).</li> <li>The dataset was split by StratifiedKFold using the number of objects of each class in a scene.</li> </ul> <h3>Model</h3> <p>My network was a combination of VoxelNet [1] and PointPillars [2], and the implementation was based on <a href="https://github.com/traveller59/second.pytorch">traveller59's second.pytorch</a>. The network utilized only FC and Conv2d layers, with no Sparse Convolution or Deformable Convolution.</p> <p>Base model pipeline:</p> <ol> <li>The point cloud was split into voxels [0.25m x 0.25m x 0.75m].</li> <li>The same network as PointPillars' Pillar Feature Net was applied to each voxel; an output channel size of 16 worked best for me.</li> <li>Voxel-representation features (C x D x H x W) were reshaped into pseudo-image features (C * D x H x W).</li> <li>Almost the same network as the RPN in VoxelNet was used for the vehicle classes, based on <a href="https://github.com/traveller59/second.pytorch/blob/master/second/configs/nuscenes/all.pp.mhead.config">this configuration file</a>. The first DeConv2d was replaced with Conv2d and other parameters were adjusted. <br> For the small-object classes, three Conv2d layers were applied to a feature map cropped from the reshaped feature map. The cropped feature map corresponds to the detection area.</li> <li>The prediction outputs were localization, classification and direction.</li> </ol> <p>Optional models:</p> <ol> <li>A pre-activation ResNet was used in the RPN; this idea was from [3].</li> <li>The RPN for the small-object classes was changed: Conv2d x 1 (1st output) and Conv2d x 3 (2nd output) were applied to the cropped feature map, then these outputs were concatenated. This was based on the original RPN in VoxelNet.</li> </ol> <h3>Post-processing</h3> <ul> <li>NMS was used to suppress the overlaps for each model.</li> <li>Three models were ensembled using Soft NMS.</li> <li>No score threshold.</li> </ul> <h3>Data Augmentation</h3> <ul> <li>global translation (x, y, z)</li> <li>global scaling</li> <li>rotation around the z-axis</li> <li>mixup augmentation <br> Ground-truth objects were cropped and pasted into other samples (more explanation is in [4]). This augmentation raised the scores a lot for all classes.</li> </ul> <h3>No Improvement for me</h3> <ul> <li>I extended Weighted-Boxes-Fusion (<a href="https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/64633#latest-590283">ZFTurbo's solution on the Open Images competition last year</a> [5]) to apply to 3D bounding boxes in order to ensemble different models, but the score was lower than Soft NMS. It was better than NMS.</li> <li>I tried to use a semantic map image (original and filtered) to filter the predictions, but both correct and false predictions were dropped.</li> <li>Point-cloud coloring using raw images or feature maps extracted from a 2D detection model did not work as an additional point-cloud feature in my architectures.</li> </ul> <h3>Reference</h3> <ol> <li><p>VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection [<a href="https://arxiv.org/abs/1711.06396">arXiv</a>]</p></li> <li><p>PointPillars: Fast Encoders for Object Detection from Point Clouds [<a href="https://arxiv.org/abs/1812.05784">arXiv</a>]</p></li> <li><p>End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds [<a href="https://arxiv.org/abs/1910.06528">arXiv</a>]</p></li> <li><p>Fast Point R-CNN [<a href="https://arxiv.org/abs/1908.02990">arXiv</a>]</p></li> <li><p>Weighted Boxes Fusion: ensembling boxes for object detection models [<a href="https://arxiv.org/abs/1910.13302">arXiv</a>] [<a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion">github</a>]</p></li> </ol>
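<p>As a rough illustration of step 1 of the pipeline above (point cloud to voxel grid), here is a minimal numpy sketch. The 0.25m x 0.25m x 0.75m voxel size and the 100m x 75m detection area come from the write-up; the grid origin, the z-range and the naive Python grouping loop are illustrative assumptions.</p> <pre><code># Sketch of grouping LiDAR points into voxels (illustrative assumptions noted above).
import numpy as np

VOXEL_SIZE = np.array([0.25, 0.25, 0.75])      # x, y, z voxel size in metres (from the post)
RANGE_MIN  = np.array([-50.0, -37.5, -3.0])    # assumed origin of the 100m x 75m area
RANGE_MAX  = np.array([ 50.0,  37.5,  3.0])    # assumed z-range

def voxelize(points):
    """points: (N, 3) xyz array. Returns a dict mapping voxel index to an (M, 3) point array."""
    inside = np.all((points &gt;= RANGE_MIN) &amp; (points &lt; RANGE_MAX), axis=1)
    pts = points[inside]
    idx = np.floor((pts - RANGE_MIN) / VOXEL_SIZE).astype(np.int32)
    voxels = {}
    for key, p in zip(map(tuple, idx), pts):
        voxels.setdefault(key, []).append(p)
    return {k: np.stack(v) for k, v in voxels.items()}

cloud = np.random.uniform(-60, 60, size=(20000, 3)).astype(np.float32)  # fake point cloud
voxels = voxelize(cloud)
print(len(voxels), "non-empty voxels")
</code></pre> <p>Each non-empty voxel's points would then go through the PointPillars-style Pillar Feature Net to produce a small (e.g. 16-channel) feature vector before the reshape into a pseudo-image.</p>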
Severstal: Steel Defect Detection
Private LB 0.91 solution + code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,</p> <p>Congrats to all the winners! I'd like to thank <a href="/hengck23">@hengck23</a> for all his public discussion and code - it was really useful for me to validate my EfficientNet + Unet implementation. Also to <a href="/lightforever">@lightforever</a> for his kernel - I got a lot of ideas from looking at his training configuration.</p> <p>Write-up for my solution below.</p> <p><a href="https://github.com/khornlund/severstal-steel-defect-detection">Github</a> <a href="https://www.kaggle.com/khornlund/sever-ensemble-classification?scriptVersionId=22207424">0.91 Scoring Kernel</a></p> <h1>Results</h1> <p>Winning submission:</p> <p>| Public LB | Private LB | | --- | --- | | 0.92124 | <strong>0.90883</strong> |</p> <p>My best submission:</p> <p>| Public LB | Private LB | | --- | --- | | 0.91817 | <strong>0.91023</strong> |</p> <p>My chosen submission:</p> <p>| Public LB | Private LB | | --- | --- | | 0.91844 | 0.90274 |</p> <p>I chose my submission according to public LB score, and ended up rank 55/2436. Silly me!</p> <h1>Models</h1> <p>I used <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models.pytorch</a> (SMP) as a framework for all of my models. It's a really nice package and easy to extend, so I implemented a few of my own encoder and decoder modules.</p> <p>I used an ensemble of models for my submissions, covered below.</p> <h2>Encoders</h2> <p>I ported <a href="https://github.com/lukemelas/EfficientNet-PyTorch">EfficientNet</a> to the above framework and had great results. I was hoping this would be a competitive advantage, but during the competition someone added an EfficientNet encoder to SMP and many others started using it. I used the <code>b5</code> model for most of the competition, and found the smaller models didn't work as well.</p> <p>I also ported <code>InceptionV4</code> late in the competition and had pretty good results.</p> <p>I ported a few others that didn't yield good results:</p> <ul> <li><a href="https://github.com/gasvn/Res2Net">Res2Net</a></li> <li><a href="https://github.com/wuhuikai/FastFCN/blob/master/encoding/dilated/resnet.py">Dilated ResNet</a></li> </ul> <p>I had good results using <code>se_resnext50_32x4d</code> too. I found that because it didn't consume as much memory as the <code>efficientnet-b5</code>, I could use larger batch and image sizes which led to improvements.</p> <h2>Decoders</h2> <p>I used Unet + FPN from SMP. 
I added Dropout to the Unet implementation.</p> <p>I implemented <a href="https://github.com/bigmb/Unet-Segmentation-Pytorch-Nest-of-Unets/blob/master/Models.py">Nested Unet</a> such that it could use pretrained encoders, but it didn't yield good results.</p> <h2>Other</h2> <p>I ported <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/segmentation/deeplabv3.py">DeepLabV3</a> to SMP but didn't get good results.</p> <h2>Scores</h2> <p>These are the highest (private) scoring single models of each architecture.</p> <p>| Encoder | Decoder | Public LB | Private LB | | --- | --- | --- | --- | | efficientnet-b5 | FPN | 0.91631 | 0.90110 | | efficientnet-b5 | Unet | 0.91665 | 0.89769 | | se_resnext50_32x4d | FPN | 0.91744 | 0.90038 | | se_resnext50_32x4d | Unet | 0.91685 | 0.89647 | | inceptionv4 | FPN | 0.91667 | 0.89149 |</p> <h1>Training</h1> <h2>GPU</h2> <p>Early on I used a 2080Ti at home. For the final stretch I rented some Tesla V100's in the cloud. I found being able to increase the batch size using the V100 (16GB) gave a significant improvement over the 2080Ti (11GB).</p> <h2>Loss</h2> <p>I used <code>(0.6 * BCE) + (0.4 * (1 - Dice))</code>.</p> <h2>Targets</h2> <p>I treated this as 4-class classification (no background class). If a pixel was predicted to have two kinds of detects, the lower confidence predictions were removed in post-processing.</p> <h2>Optimizer</h2> <pre><code>- RAdam - Encoder - learning rate 7e-5 - weight decay: 3e-5 (not applied to bias) - Decoders - learning rate 3e-3 - weight decay: 3e-4 (not applied to bias) </code></pre> <h2>LR Schedule</h2> <p>Flat for 30 epochs, then cosine anneal over 220 epochs. Typically I stopped training around 150-200 epochs.</p> <h2>Image Sizes</h2> <p>256x384, 256x416, 256x448, 256x480</p> <p>Larger image sizes gave better results, but so did larger batch sizes. The <code>se_resnext50_32x4d</code> encoders could use a batch size of 32-36, while the <code>efficientnet-b5</code> encoders typically used a batch size of 16-20.</p> <h2>Grayscale Input</h2> <p>The images were provided as 3-channel duplicated grayscale. I modified the models to accept 1 channel input, by recycling pretrained weights. I did a bunch of testing around this as I was worried it might hurt convergence, but using 3-channel input didn't give better results.</p> <p>I parameterised the recycling of the weights so I could train models using the R, G, or B pretrained weights for the first conv layer. My hope was that this would produce a more diverse model ensemble.</p> <h2>Augmentation</h2> <p>I used the following <a href="https://github.com/albu/albumentations">Albumentations</a></p> <p><code> Compose([ OneOf([ CropNonEmptyMaskIfExists(self.height, self.width), RandomCrop(self.height, self.width) ], p=1), OneOf([ CLAHE(p=0.5), # modified source to get this to work with grayscale GaussianBlur(3, p=0.3), IAASharpen(alpha=(0.2, 0.3), p=0.3), ], p=1), Flip(p=0.5), Normalize(mean=[0.3439], std=[0.0383]), ToTensor(), ]) </code></p> <p>I found the <code>mean</code> and <code>std</code> from the training images.</p> <p>It would have been nice to experiment with more of these, but it took so long to train the models it was difficult. I found these augs worked better than simple crops/flips and stuck with them.</p> <h2>Validation</h2> <p>I used a random 20% of the training data for validation with each run.</p> <p>Models were largely selected based on their Mean Dice Coefficient. 
Where a few models had similar performance I would look at the Dice Coefficient for the most common class and the loss.</p> <p>High scoring models I trained had a Mean Dice Coefficient around 0.951 - 0.952. Here's an example validation score:</p> <p><code> val_dice_0 : 0.9680132865905762 val_dice_1 : 0.9881579875946045 val_dice_2 : 0.8649587631225586 val_dice_3 : 0.9835753440856934 val_dice_mean : 0.9511765241622925 </code></p> <h2>Pseudo Labels</h2> <p>I used the ensemble outputs of models as pseudo labels, which gave a huge performance boost. I used a custom <a href="https://github.com/khornlund/pytorch-balanced-sampler">BatchSampler</a> to undersample (sample rate ~60%) from the pseudo-labelled data, and fix the number of pseudo-labelled samples per batch (each batch would contain 12% pseudo-labelled samples).</p> <p>Some other people had poor results with pseudo-labels. Perhaps the technique above helped mitigate whatever downsides they faced.</p> <h2><a href="https://github.com/NVIDIA/apex">Apex Mixed Precision</a></h2> <p>I tried to get this to work for so long in order to take advantage of the larger batch sizes it enables. However, now matter what I tried, I had worse convergence using it. Eventually I gave up.</p> <p>It's possible I was doing something wrong - but I invested a lot of time into trying this, and from talking to others at work it seems like they've had similar issues.</p> <h1>Post Processing &amp; Submission</h1> <h2>TTA</h2> <p>Only flip along dim 3 (W). I found TTA wasn't very useful in this competition, and consumed valuable submission time.</p> <h2>Prediction Thresholds</h2> <p>I used 0.5 for each class ie. if the output was &gt; 0.5, the output was positive for that defect.</p> <p>I was worried that tweaking these would risk overfitting public LB.</p> <h2>Defect Pixel Thresholds</h2> <p>I used 600, 600, 1000, 2000. If an image had fewer than this number of defect pixels for a class, all predictions for that class were set to zero.</p> <p>I tested some different values but it actually didn't have much impact.</p> <h2>Component Domination</h2> <p>Since my models were set up to predict 4 classes, I was using <code>sigmoid</code> rather than <code>softmax</code> on their outputs, which meant sometimes I got overlapping defect predictions. I had an idea to look at the size of each component, and have the larger components "dominate" (remove) smaller overlapping components. I got a tiny boost from this, but I think it may simply be because at that stage I didn't have another way of ensuring there was only 1 defect prediction at each pixel.</p> <p>I stopped using this technique in favour of simply taking the highest defect prediction for each pixel.</p> <h2>Dilation</h2> <p>I tried varying amounts of dilation. Sometimes I got a small improvement, and sometimes got worse results so I stopped using it.</p> <h2>Ensemble Averaging</h2> <p><em>Here is where I made the mistake that cost me 1st place.</em></p> <p>I had been using mean averaging (eg. train 5 models, take the mean prediction for each class for each pixel), and was struggling to break into the gold medal bracket. On the last day, I was reading the discussion forums and started comparing the defect distributions of my output with what others had probed to be the true defect distribution.</p> <p>It looked like my models were overly conservative, as the number of defects I was detecting was lower than other people and much lower than the probed LB distribution. 
So, I started thinking about how I could increase the number of defect predictions. I had done some experimentation with pixel thresholds, and found that changing them didn't have much of an effect. I knew that the score was very sensitive to the prediction thresholds, so I was worried about fiddling with that and potentially overfitting to the public LB. Then, I had an idea:</p> <p>I'd noticed that sometimes I would add new, high-performing models to my ensemble, and my LB score would decrease. I wondered if this might be explained by a majority of models <em>mean averaging</em> out positive predictions too often. If we're detecting faults, maybe we should weight positive predictions more than negative ones? I decided to try <em>Root Mean Square</em> averaging, as this would hug the higher values. For example:</p> <pre><code>input: [0.2 0.3 0.7]  Mean: 0.40  RMS: 0.45
input: [0.1 0.2 0.9]  Mean: 0.40  RMS: 0.54
input: [0.4 0.5 0.6]  Mean: 0.50  RMS: 0.51
input: [0.3 0.3 0.8]  Mean: 0.47  RMS: 0.52
input: [0.1 0.8 0.8]  Mean: 0.57  RMS: 0.66
</code></pre> <p>This looks good. If one model prediction is a <code>.9</code>, and the others are <code>.1</code> and <code>.2</code>, shouldn't we consider that a defect? (No, no we shouldn't. I was wrong.)</p> <p>But when I tried it, I got a significant improvement on the LB! I went from <code>0.91809</code> to <code>0.91854</code>, which was my best (public) score yet. Unknown to me, my private LB score had just dropped from <code>0.90876</code> (winning score) to <code>0.90259</code> (rank 55).</p> <p>I'm pretty new to Kaggle, and while I'd heard about leaderboard "shakeup", I didn't know it could be this severe. I should have selected a 2nd submission from before I started using RMS to average the results - and if I'd picked any of my recent submissions, I would have taken 1st place.</p> <h2>Classification Model</h2> <p>Others on the discussion forums were advocating use of a two-step submission:</p> <pre><code>1. Use a classifier to determine whether an image contains each fault anywhere
2. Ignore segmentation predictions for those ruled out by the classifier
</code></pre> <p>The rationale was that false positives were very expensive, due to the way the Dice metric is calculated. By doing this, you could reduce FP.</p> <p>I was pretty skeptical of this approach, and thought it would only be useful early in the competition while the precision of people's convolutional models was poor. But, as the competition progressed and I was struggling to climb the LB, I thought I'd better give it a go.</p> <p>Since I'd spent so long tuning my fully convolutional segmentation ensemble, I was worried about allowing an "untuned" classifier to veto my segmentation predictions (and tuning it takes time). I decided on a strategy to use the classification prediction to amplify the defect pixel thresholds:</p> <pre><code>1. When the classifier output is high (fault), we leave the pixel thresholds at their normal level.
2. When the classifier output is low (no fault), we raise the pixel threshold by some factor.
</code></pre> <p>The idea was that this would allow a false negative from the classifier to be overruled by a strong segmentation prediction.</p> <p><code> def compute_threshold(t0, c_factor, classification_output): """ t0 : numeric The original pixel threshold c_factor : numeric The amount a negative classification output will scale the pixel threshold.
classification_output : numeric The output from a classifier in [0, 1] """ return (t0 * c_factor) - (t0 * (c_factor - 1) * classification_output) </code> Here's an example illustrating how the threshold is scaled with different factors. I tried values 5, 10, and 20.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2116899%2F1012293782d945641d142a392097f779%2Fclassifier-threshold-scaling.png?generation=1572057115647669&amp;alt=media" alt=""></p> <p>Here's a table comparing the results of my submissions with a classifier, to my previous ones. Note I ran it twice with <code>c_scale = 5</code> and changed some weights in my ensemble.</p> <p>| Config | Public LB | Private LB | | --- | --- | --- | | No classifier | 0.91817 | 0.90612 | | c_scale = 5 | 0.91817 | 0.91023 | | c_scale = 5 | 0.91832 | 0.90951 | | c_scale = 10 | 0.91782 | 0.90952 | | c_scale = 20 | 0.91763 | 0.90911 |</p> <p>From looking at my public LB score, I got zero and tiny improvements using a classifier and <code>c_scale=5</code>. When I tried increasing it, it looked like the results got much worse. Unknown to me, this was actually taking my private LB score from rank 11 to significantly better than rank 1! The first result, where my public LB score didn't increase at all, was actually the highest scoring submission I made all competition. As far as I know, no one on the discussion board has reported scoring this high on any of their submissions.</p> <p>I gave up on using a classifier after this, and for the rest of my submissions I used only fully convolutional models. I managed to get similar Private LB scores with a fully convolutional ensemble, but using a classifier may have improved this further. Kaggle has disabled submissions while the efficiency prize is running, so I won't be able to test this.</p> <h2>Final Ensemble</h2> <p>I used the following fully convolutional ensemble for my final submissions:</p> <ul> <li>Unet <ul><li>2x se_resnext50_32x4d</li> <li>1x efficientnet-b5</li></ul></li> <li>FPN <ul><li>3x se_resnext50_32x4d</li> <li>1x efficientnet-b5</li> <li>1x inceptionv4</li></ul></li> </ul> <p>| Averaging Technique | Public LB | Private LB | | --- | --- | --- | | RMS | 0.91844 | 0.90274 | | Mean^ | 0.91699 | <strong>0.90975</strong> |</p> <p>^I re-ran my final submission with mean-averaging after the deadline to check its performance.</p> <h2>Submission Scores</h2> <p>Visualisation of scores in the final week of the competition:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2116899%2Fc6e35f8812b58a09a427e5a22d86b5c0%2Ffinal-week-lb-scores.png?generation=1572057076948200&amp;alt=media" alt=""></p> <p>The dip at the end is when I started using RMS averaging.</p> <h1>Submission Kernels</h1> <p>Here are some public kernels showing the scores. 
There's a lot of copy-pasted code because of the kernel requirement of this competition - no easy way around it!</p> <ol> <li><a href="https://www.kaggle.com/khornlund/sever-ensemble-classification?scriptVersionId=22207424">Private LB 0.91023 | Classification + Segmentation Ensemble </a></li> <li><a href="https://www.kaggle.com/khornlund/fork-of-sever-ensemble-3?scriptVersionId=22527620">Private LB 0.90975 | Fully Convolutional Segmentation Ensemble</a></li> </ol> <h1>Discussion</h1> <h2>Improvements</h2> <p>Next time I would like to:</p> <ul> <li>Softmax w/ background class</li> <li>Lovasz Loss</li> <li>Inplace BatchNorm (potentially huge memory saving)</li> </ul> <p>And of course, <em>manually choose two submissions that are appropriately diverse</em>.</p>
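<p>To make the post-processing described above concrete - a 0.5 prediction threshold, per-class minimum pixel counts of 600/600/1000/2000, and keeping only the highest-scoring defect at each pixel - here is a small numpy sketch. The ensemble step is simplified to a plain mean, and the ordering of the operations is one reasonable reading of the write-up rather than the exact pipeline.</p> <pre><code># Sketch of the segmentation post-processing (simplified; order of steps is assumed).
import numpy as np

MIN_PIXELS = [600, 600, 1000, 2000]      # per-class minimum pixel counts from the write-up

def postprocess(prob_maps):
    """prob_maps: (n_models, 4, H, W) sigmoid outputs. Returns (4, H, W) binary masks."""
    mean_prob = prob_maps.mean(axis=0)               # simple mean over the ensemble
    best = mean_prob.argmax(axis=0)                  # winning class at each pixel
    masks = np.zeros_like(mean_prob, dtype=np.uint8)
    for c in range(4):
        # a pixel is positive for class c only if c wins there and clears the 0.5 threshold
        masks[c] = np.where((best == c) &amp; (mean_prob[c] &gt; 0.5), 1, 0)
        if masks[c].sum() &lt; MIN_PIXELS[c]:           # too few pixels: drop the class entirely
            masks[c] = 0
    return masks

probs = np.random.rand(6, 4, 256, 1600).astype(np.float32)   # fake outputs from 6 models
final_masks = postprocess(probs)
</code></pre>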
RSNA Intracranial Hemorrhage Detection
A surprise Gold, GM, and the real 12th place solution*
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First I want to thank my teammate <a href="/zaharch">@zaharch</a> for great and successful teamwork (this is our 2nd consecutive gold). Until yesterday we had the top Silver medal - 13th place, and today, for some mysterious reason, we got promoted to 12th and Gold. For me this Gold also means GM (5 golds in the last 5 competitions). ^ real because <a href="/appian">@appian</a> was promoted to 11th</p> <p>So let's get to our solution.</p> <p>As most/all of the top solutions did, we also used a two-stage solution:</p> <ol> <li>Base model for feature extraction per image</li> <li>Shallow model - combining all the output features from a full head scan to predict per image</li> </ol> <p>The 2nd stage also included some post-processing and ensembling.</p> <h2>Base Model:</h2> <p>As base models we used a few different architectures:</p> <ul> <li>DenseNet 169, 161, 201</li> <li>SE-ResNet101</li> <li>SE-ResNeXt101_32x4d</li> </ul> <p>For all models we used 3 folds; for the SE models we also had 5 folds. The SE models with 5 folds gave the best results. The models were trained for ~4 epochs using the usual augmentations: rotation, flip, zoom, position shift, pixel intensity shift.</p> <h3>WSO</h3> <p>As many of the other teams do in their base solutions, we also used 3 windows to handle the large dynamic range of the CT pixel values, but instead of using fixed windows we let the network find the best windows, as described in <a href="https://arxiv.org/pdf/1812.00572.pdf">Practical Window Setting Optimization for Medical Image Deep Learning</a>. The implementation is quite straightforward: add 3 layers in front of the model:</p> <pre><code>Conv2d(1, 3, kernel_size=(1, 1))
Sigmoid()
InstanceNorm2d(3)
</code></pre> <p>The convolution layer was initialized with the soft-tissue, blood and bone window values. In the end this layer converged to values close to the usual window values.</p> <h3>Feature pooling</h3> <p>Most of the features at the last layer were zero - we used 8x pooling to decrease the number of features to ~250-300.</p> <h3>TTA</h3> <p>We created 4 sets of features from augmented images for each train image and 8 sets for each test image.</p> <h2>Shallow Network</h2> <p>We used two different shallow networks. (I will describe one here and <a href="/zaharch">@zaharch</a> will describe the 2nd later)</p> <p>One network was an FCN.</p> <h3>Input</h3> <p>Features from all the images of one full head scan, ordered by the Z position.</p> <h3>Layers</h3> <ol> <li>9 * Num_features 2D convolution - the output is batch_size * num_images * num_channels * 1 * 6</li> <li>Squeeze</li> <li>1D convolution layer of size 7</li> <li>1D convolution layer of size 5</li> <li>1D convolution layer of size 3</li> </ol> <p>With batch norms and ReLUs in between.</p> <p>We trained the shallow network with the TTAed features from the base model, and for prediction we used the TTA features of the test set.</p> <h2>Post Processing and ensembling:</h2> <p><a href="/zaharch">@zaharch</a> will add this as a comment</p> <h2>Results:</h2> <p>The base models gave 0.68 - 0.66 on the LB (first stage) after fold averaging and with TTA averaging. ** We started using the better models after we already had the shallow network, hence we never really submitted a full 5-fold average of a base model, and those numbers are derived from CV. The best single 5-fold full model (base + shallow), with TTA and fold averaging, was the SE-ResNet101, which gave LB 0.6 (first stage). </p> <p>One drawback we had - we didn't gain much by ensembling many models; maybe we should have used one model and run it more times with different seeds. </p> <p>And as a last word, I want to thank the organizers and moderators <a href="/juliaelliott">@juliaelliott</a> <a href="/philculliton">@philculliton</a> <a href="/lechuck0">@lechuck0</a> for a great competition and for being flexible and changing the rules to let us use the metadata, which helped all the top teams get really good and interesting solutions. </p> <h3>Code</h3> <p><a href="https://github.com/nosound2/RSNA-Hemorrhage">The full code can be found here</a></p> <h3>More information</h3> <p>More information about our models can also be found in the following files:</p> <p><a href="https://docs.google.com/document/d/1YFwbnmh5QDF77th01eSEqscvb4rWTKyvL0XWSWs-sMg/edit?usp=sharing">Documentation</a></p> <p><a href="https://drive.google.com/file/d/1Kz_3mkA9volBKNau_u_jzE2YQhkPTIXp/view?usp=sharing">Presentation</a></p> <p><a href="https://drive.google.com/file/d/1yX6WC9GysdekivPzowU685AeAuE_LHXI/view?usp=sharing">Video</a></p>
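<p>As a quick PyTorch illustration of the WSO front-end described above, the sketch below wraps the three layers into a module that maps a single-channel HU image to 3 learned "window" channels, which can then be fed to any 3-channel ImageNet backbone (e.g. the DenseNets listed above). The initialization from soft-tissue/blood/bone window values is omitted here (noted as a comment), since the exact scheme follows the cited paper rather than this post.</p> <pre><code># Sketch of the trainable windowing (WSO) front-end; initialization details omitted.
import torch
import torch.nn as nn

class WSOFrontEnd(nn.Module):
    """Maps a raw single-channel HU image to 3 learned window channels."""
    def __init__(self):
        super().__init__()
        self.wso = nn.Sequential(
            nn.Conv2d(1, 3, kernel_size=(1, 1)),   # per-window scale and bias, learned
            nn.Sigmoid(),
            nn.InstanceNorm2d(3),
        )
        # NOTE: the write-up initializes the 1x1 conv from the soft-tissue, blood and
        # bone window settings (see the cited WSO paper); default init is used here.

    def forward(self, hu):                          # hu: (batch, 1, H, W) in Hounsfield units
        return self.wso(hu)                         # (batch, 3, H, W), ready for a 3-channel CNN

frontend = WSOFrontEnd()
x = torch.randn(2, 1, 512, 512) * 500.0             # fake CT slices in HU
print(frontend(x).shape)                             # torch.Size([2, 3, 512, 512])
</code></pre>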
RSNA Intracranial Hemorrhage Detection
6th place solution End to End Sequence to Sequence with sliding window.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all participants and the winners, and myself, I must say, for becoming a Kaggle GRANDMASTER! Catching that elusive fifth Gold medal that I have been chasing for some time now. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F369212%2Ff3b3bd56f8e374c317bc1fadc08a0a72%2FRSNA.png?generation=1573741035979705&amp;alt=media" alt=""></p> <p>Down to business. We utilized an end-to-end sequence-to-sequence model. A sliding window approach was used to select a fixed window of “n” slices from the CT volume. The figure above shows the architecture. The architecture made things nice and simple: training was end to end, with no data shuffling or gymnastics. Prediction on the slices was done by the LSTM at each time step. This conveniently also enabled some nice test time augmentation (TTA) with the sliding window approach.</p>
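<p>The figure itself is not reproduced here, but the described setup - a shared 2D CNN encoder applied to each slice in the window, followed by an LSTM that emits the 6 labels at every time step - can be sketched roughly as below. The ResNet-34 backbone, feature size, hidden size and window length n=8 are assumptions; the write-up does not specify them.</p> <pre><code># Rough sketch of an end-to-end seq2seq model over a sliding window of slices (assumptions noted).
import torch
import torch.nn as nn
import torchvision

class SliceSeq2Seq(nn.Module):
    def __init__(self, n_classes=6, hidden=256):
        super().__init__()
        backbone = torchvision.models.resnet34(pretrained=False)
        feat_dim = backbone.fc.in_features            # 512 for resnet34
        backbone.fc = nn.Identity()                   # keep pooled features only
        self.encoder = backbone
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                             # x: (batch, n_slices, 3, H, W)
        b, n = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1))         # run the shared CNN on every slice
        feats = feats.view(b, n, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                         # (batch, n_slices, 6) logits, one per time step

model = SliceSeq2Seq()
window = torch.randn(2, 8, 3, 224, 224)               # batch of 2 sliding windows of 8 slices
logits = model(window)                                # torch.Size([2, 8, 6])
</code></pre> <p>Because the whole stack is differentiable, the CNN and LSTM train jointly end to end, and shifting the sliding window at inference time gives several predictions per slice that can be averaged as TTA, as the write-up notes.</p>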
Understanding Clouds from Satellite Images
34th Place Solution + Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,</p> <p>Congrats to the winners! </p> <p>Code for my solution <a href="https://github.com/khornlund/understanding-cloud-organization">here</a>.</p> <p>Explanation copied below.</p> <h1>Summary</h1> <h2>Results</h2> <p>| Rank | Score | Percentile | | --- | --- | --- | | 34 | 0.66385 | Top 2.2% |</p> <h2>Strategy</h2> <p>Originally I had an idea early on very similar to <a href="https://arxiv.org/pdf/1911.04252.pdf">this</a> recent paper. I was going to train a model on the ground truthed data, and then iteratively create pseudo labels for unlabelled data and train on that. I figured this was a good opportunity for such a strategy because there was very little training data (~5000 images), so there was a lot to be gained by generating more training samples. And, because this was not a synchronous kernel competition, I'd be able to create as large an ensemble as I like.</p> <p>Then I realised how noisy the image labels were, and wasn't so sure that pseudo labels would work very well. In particular, I noticed that the validation scores of my models was super noisy - using the same configuration with a different random seed resulted in serious metric differences. I figured I would give up on trying to fine tune individual models and instead focus on engineering a system that would allow me to train and ensemble <em>lots</em> of models.</p> <p>I developed functionality to allow me to automate the configuration, training, and inference of models.</p> <p>I trained an ensemble of ~120 models, using a variety of encoder/decoder combinations. I first averaged them together by their encoder/decoder combinations (eg. all the efficientnet-b2 FPN get averaged together). Then I averaged these mini-ensembles together using a weighted average.</p> <p>With about a week of the competition to go, I saw the Noisy Student paper. I was getting decent results on the LB and figured I'd give pseudo labelling a go. I downloaded ~4200 images using the same resolution and locations as the official data, generated pseudo labels for them, and trained a new ensemble of ~50 models.</p> <p>I only finished training the pseudo labelled models in time to make a few submissions on the final day, and managed to get up to 0.67739 (9th place) on the public LB - but that actually only scored 0.66331 (~45th) on the private LB. My other selected submission was a weighted average of my past 25 submissions, which scored 0.67574 on the public LB and 0.66385 (34th) on the private LB.</p> <p>I had a few unselected submissions that scored 0.666+ (~18th), the best of which funnily enough came from a mini-ensemble of only efficientnet-b2-Unet models.</p> <h2>Reflection</h2> <p>Looking back I realise I made a pretty big mistake not capturing the appropriate metrics for thorough local CV. I was only recording dice coefficient using a threshold of 0.5, and so I wasn't well informed to pick a threshold for my submissions.</p> <p>Also, while the models were each trained on a random 80% of the data, and evaluated on the remaining 20%, this was only done at a per-model level. I didn't keep a hold-out set to validate the ensembles against. 
Because we only had ~5000 training samples, I got a bit greedy with training data here.</p> <p>I was hoping that by keeping logs of all my experiments, after a while I'd be able to identify which randomly generated configurations (eg. learning rate) worked better than others. This didn't turn out to be the case! I should have spent more time fine tuning each model, as the law of diminishing returns was coming into effect as the size of my ensemble grew.</p> <h1>Details</h1> <h2>Ensemble Pipeline</h2> <p>See <code>uco.ensemble.py</code> for implementation.</p> <p>Each training experiment is configured using a YAML file which gets loaded into a dictionary. I set up a class to randomise these parameters, so I could leave it to run while at work/sleep and it would cycle through different architectures, loss functions, and other parameters.</p> <p>After each training epoch the model would be evaluated on a 20% validation set. The mean dice score was tracked throughout training, and when the training completed (either after a set number of epochs or early stopping) only the best scoring checkpoint would be saved. I set a cutoff mean dice score, and threw away models that scored under that.</p> <p>The saved checkpoint would be loaded, and run inference on the test data. I saved out the <em>raw</em> (sigmoid) predictions of each model to HDF5. I scaled by 250 and rounded to integers so I could save as <code>uint8</code> to save disk space.</p> <p>These raw predictions would be grouped by (encoder, decoder) pair, and averaged together weighted by mean dice scores. Then the groups would be averaged together, with parameterised weights.</p> <p>By saving out the results at each stage to HDF5 (raw predictions, group averages, and total averages), I could re-run any part of the pipeline with ease.</p> <p>I did the above for both segmentation and classification models. The details below are just for the segmentation models.</p> <h2>Models</h2> <p>I used <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models.pytorch</a> (SMP) for segmentation, and used <a href="https://github.com/rwightman/pytorch-image-models">pytorch-image-models</a> (TIIM) for classification.</p> <p><strong>Encoders</strong></p> <ul> <li>efficientnet B0, B2, B5, B6</li> <li>resnext 101_32x8d</li> <li>se_resnext 101_32x8d</li> <li>inceptionresnet v2, v4</li> <li>dpn 131</li> <li>densenet 161</li> </ul> <p><strong>Decoders</strong></p> <ul> <li>FPN</li> <li>Unet</li> </ul> <p>I had terrible results with LinkNet and PSPNet.</p> <h2>Training</h2> <p><strong>GPU</strong> RTX 2080Ti.</p> <p><strong>Loss</strong> I used BCE + Dice with BCE weight ~U(0.65, 0.75) and dice weight 1 - BCE.</p> <p>I used BCE + Lovasz with BCE weight ~U(0.83, 0.92) and lovasz 1 - BCE.</p> <p><strong>Learning Rate</strong> Encoder ~U(5e-5, 9e-5) Decoder ~U(3e-3, 5e-3)</p> <p><strong>Optimizer</strong> RAdam / <a href="https://github.com/catalyst-team/catalyst/blob/master/catalyst/contrib/optimizers/qhadamw.py">QHAdamW</a></p> <p><strong>Augmentation</strong> Compositions are in <code>data_loader.augmentation.py</code>.</p> <p>I made one custom augmentation - I modified Cutout to apply to masks. I wasn't sure if this would actually be better than only applying Cutout to the image - because the ground truth bounding boxes were large and covered areas that actually weren't very cloudy. 
It wasn't obvious from my experiments which worked better - but they both helped, so I just added them both to the available random configuration options for training.</p> <p><strong>Image Sizes</strong> I wanted to use images sizes divisible by 32 so they would work without rounding effects, so I used the following which maintained the original 1400:2100 aspect ratio:</p> <ul> <li>256x384</li> <li>320x480</li> <li>384x576</li> <li>448x672</li> </ul> <p>Most models were trained using 320x480. I didn't notice any improvement using larger image sizes, but I figured it might help the ensemble to use diverse sizes.</p> <p><strong>Pseudo Labels</strong> I used my ensemble trained on the official training data to predict masks for the ~4000 images I downloaded. I then removed any images without masks, and trained on the rest.</p> <p>In contrast to some of the other people that used pseudo labels, I did not make my thresholds harsher for selecting pseudo labels. My rationale was that since most images included 2+ classes, increasing the thresholds to be 'safe' would likely mean missing the 2nd class in many images - leading to lots of false negative labels in my pseudo labels.</p> <p>I used a <a href="https://github.com/khornlund/pytorch-balanced-sampler">balanced sampler</a> to include 4 pseudo labelled samples per batch (typically batch sizes were 10-16).</p> <h2>Post-Processing</h2> <p><strong>TTA</strong> I used flips from <a href="https://github.com/qubvel/ttach">TTAch</a></p> <p><strong>Segmentation Thresholds</strong> I experimented with a bunch of different ways to threshold positive predictions, as the dice metric penalises false positives so heavily.</p> <p>I started out by using the following threshold rule:</p> <ol> <li>Outputs must have N pixels above some <em>top threshold</em>. I started out using N ~ 8000 for each class, and a top threshold of ~0.57.</li> <li>For predictions that pass (1), produce a binary mask using <em>bot threshold</em> of ~0.4.</li> </ol> <p>I used the continuous output of the classifier to modulate these thresholds. Ie. if the classifier was high, I would reduce the min size requirement, or the top threshold.</p> <p>In the end I simply used maximum pixel prediction and no min size.</p> <p>The distribution of predictions for the different classes is actually pretty interesting:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2116899%2Ff665763ea6d3668f7514f747997ad71d%2Faverage-prediction-distribution.png?generation=1574246988235284&amp;alt=media" alt=""></p> <p>Class 1 has very nice bimodal distribution. This suggests it was the easiest to learn.</p>
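<p>The raw-prediction caching in the ensemble pipeline above (sigmoid outputs scaled by 250, rounded, and stored as uint8 in HDF5, then combined with score-weighted averaging) looks roughly like the sketch below. The dataset names, shapes and weights are made up for illustration.</p> <pre><code># Sketch of caching quantized predictions in HDF5 and score-weighted averaging (illustrative).
import numpy as np
import h5py

def save_predictions(path, model_name, probs):
    """probs: (n_images, 4, H, W) float sigmoid outputs in [0, 1]."""
    quantized = np.rint(probs * 250).astype(np.uint8)     # ~0.004 resolution, 4x smaller on disk
    with h5py.File(path, "a") as f:
        f.create_dataset(model_name, data=quantized, compression="gzip")

def weighted_average(path, model_scores):
    """model_scores: dict of model_name to validation score used as its weight."""
    total, weight_sum = None, 0.0
    with h5py.File(path, "r") as f:
        for name, w in model_scores.items():
            probs = f[name][:].astype(np.float32) / 250.0  # back to [0, 1]
            total = probs * w if total is None else total + probs * w
            weight_sum += w
    return total / weight_sum

# Toy usage with two fake "models".
fake = np.random.rand(8, 4, 350, 525).astype(np.float32)
save_predictions("preds.h5", "efficientnet-b2-FPN", fake)
save_predictions("preds.h5", "resnext101-Unet", fake * 0.9)
ensemble = weighted_average("preds.h5",
                            {"efficientnet-b2-FPN": 0.66, "resnext101-Unet": 0.65})
</code></pre>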
The 3rd YouTube-8M Video Understanding Challenge
first silver medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to thank Google and Kaggle for making the competition available. I'm so happy to get this silver medal!!! Last week I failed miserably in IEEE fraud detection, and I decided to do another competition.</p> <p>Most of this silver medal came from running the DBoF baseline provided by the host. I did a little bit hyper parameter tuning to reduce overfitting. </p> <p>I guess the most difficult part was to dare to enter the competition one week before the deadline and walk through / trouble-shooting all the G-Cloud Platform set-up. When I was finally able to generate a solution, it was satisfactory.</p> <p>A little bit feedback to the host: I guess if the guidance to generate a baseline could be easier to follow, and if you could provide a more intuitive illustration about how the baseline models are constructed, this could attract more people to participate (although the entry-barriers definitely helped me win the medal this time...).</p> <p>It was a lucky win. I'll probably spend some time understanding better the models and features used by other people. After all, that's where you learn most.</p>
Understanding Clouds from Satellite Images
22nd Place - Lessons learned from a beginner
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First congrats to all the winners. I would like to thank Kaggle for hosting this competition, which was the first one I could fully dedicate myself to and in which I won my first medal.</p> <p><strong>What didn't work for me</strong></p> <ul> <li>Lovasz loss</li> <li>Deeper encoders (EfficientNetB7, SENet)</li> <li>Pseudo labeling</li> </ul> <p><strong>Our solution</strong> Our solution is basically an ensemble of segmentation models with post-processing to remove masks.</p> <p>Models (6 folds each):</p> <ul> <li>ResNet34 - Unet*</li> <li>EfficientNetB2 - Unet*</li> <li>EfficientNetB2 - FPN</li> <li>EfficientNetB2 - LinkNet</li> <li>EfficientNetB5 - Unet</li> </ul> <p>Loss: BCE + Dice</p> <p>* Those models were trained with different image sizes (320x480, 384x576, 512x512, 704x1056).</p> <p>Post Processing: a triplet threshold search - binarization, removal of small masks, and binarization again for the masks remaining after the first two steps. All of this was done with the validation data from all 6 folds.</p> <p>CV: 0.6651 Public: 0.67556 Private: 0.66498</p> <p><strong>The Good Lesson</strong> I didn't know much about image segmentation, so this competition was a great learning experience.</p> <ul> <li><p>Read all the comments and try to pick up the tips.</p></li> <li><p>Build a good validation set - tuning the post-processing parameters without overfitting was only possible because of that.</p></li> </ul> <p><strong>The Bad Lesson</strong> - Trust your CV</p> <p>I had a better model that scored:</p> <p>CV: 0.6681 Public: 0.66759 Private: 0.66824</p> <p>Why didn't I choose it? Because of the second lesson ...</p> <ul> <li>Trust yourself. My best model was something different: I trained one model for each mask type, predicted them one by one and put them back into the original format (4 masks stacked) before applying post-processing.</li> </ul> <p>This allowed me to compare with the same out-of-fold data I had so far. A simple blend of ResNet34-Unet + EfficientNetB2-Unet got 0.668 on CV.</p> <p>But I read that some kagglers didn't get good results with this method, I was afraid of having a leak in my validation, and the public LB was worse. So I gave up on this idea...</p> <p><strong>Acknowledgment</strong> I would like to thank my team and all those who shared in some way.</p> <p>Sharing is a very good thing, but I think it should be done at the right time. As I said, I am a beginner, but also someone who worked hard on this competition, reading past competition solutions. So I think everyone can do the same.</p>
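<p>A minimal sketch of the triplet-threshold post-processing described above - binarize at a strict threshold, drop masks that are too small, then re-binarize the survivors at a looser threshold. The threshold and size values here are placeholders, not the tuned ones from the write-up:</p>
<pre><code>import numpy as np

def triplet_postprocess(prob_mask, top_thr=0.6, min_size=10000, bot_thr=0.4):
    """prob_mask: (H, W) sigmoid output for one class of one image."""
    hard = prob_mask &gt; top_thr                         # step 1: strict binarization
    if hard.sum() &lt; min_size:                          # step 2: remove small masks
        return np.zeros_like(prob_mask, dtype=np.uint8)
    return (prob_mask &gt; bot_thr).astype(np.uint8)      # step 3: looser binarization
</code></pre>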
Understanding Clouds from Satellite Images
6th simple solution, pre-training, single model private 0.66927
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the hosting organization that hosted this competition and Kaggle. Like any competition, this competition was also hot until the end. So, I want to congratulate Kagglers who struggled until the end of this competition.</p> <p>I will summarize and write down the part of my solution that you will be interested in. It's <code>pre-training</code></p> <h1>pre-training</h1> <p>The challenge of this competition is to segment according to the shape of the cloud. Therefore, I tried to pre-train the model to learn the shape of the cloud.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2F7e5d1fb4cbda36a5bcad97879a424176%2F1st-training.png?generation=1574148986382141&amp;alt=media" alt=""></p> <p>Because the clouds are white, I generated <code>cloud_mask</code> with the threshold of "pixel &gt; 115". Then, I used it as a label. (Since the total number of image files is 9244, the cloud_mask also generates 9244.)</p> <p>After pre-training, I tried a 2nd-stage training. Pre-trained(1st stage-training) model are used as the initial value of 2nd-stage model weights.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2F6ed4418967a6461295eed180c047feda%2F2nd-training.png?generation=1574149060524535&amp;alt=media" alt=""></p> <p>This training process boosted my CV 0.005~0.01. So, my single model score is as follows.</p> <p>| model | private | public | | --- | --- | --- | | efficientnet-b4, unet | 0.66927 | 0.67437 | | efficientnet-b4, fpn | 0.66827 | 0.67508 |</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2F95009ca5bb429732cb523d971978e0fd%2F.png?generation=1574149018151413&amp;alt=media" alt=""></p> <p>The rest is not special, so I'll skip the description. 😁 Thanks for your reading!</p>
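<p>A minimal sketch of the pre-training label generation described above: each training image gets a binary cloud mask from the simple brightness rule "pixel &gt; 115", which then becomes the target of the 1st-stage training (not the author's actual code):</p>
<pre><code>import cv2
import numpy as np

def make_cloud_mask(image_path, threshold=115):
    """Binary pre-training target: pixels brighter than `threshold` count as cloud."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return (gray &gt; threshold).astype(np.uint8)

# 1st stage: train a 1-class segmentation model on these pseudo cloud masks.
# 2nd stage: initialise the 4-class model with the pre-trained weights and
# fine-tune on the competition labels.
</code></pre>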
NFL Big Data Bowl
(not really) 2nd Public LB Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First I want to say a big thanks to my teammates @peterhurford and @christofhenkel for teaming up this competition. Also thanks for the hosts and kaggle for putting on a really fun and challenging competition. When this competition first launched I thought I might have an advantage because of all the time I’ve spent watching the NFL. I also hoped that all the brilliant non-American kagglers might be turned off by American football. Turns out I was wrong on both fronts!</p> <h2>High level summary</h2> <ul> <li>Even though we are 2nd on the public LB, we expect to drop significantly stage 2.</li> <li>Our final solution used a blend of GNN and MLP models.</li> <li>We trained the GNN offline and uploaded model weights. Our MLP is trained completely in the kernel.</li> <li>We knew that our GNN model wouldn't get the benefit of training on the 2019 play data like models trained in the kernel would. Using @gertjac 's approach <a href="https://www.kaggle.com/gertjac/just-to-show-why-i-wonder-about-online-learning">here</a> we could pseudo label some of the public test set to at least provide our model some 2019 data to train on.</li> <li>We knew this would overstate our placement on the stage 1 leaderboard, but also figured it would give us the best chance for success in stage 2. We waited until 2 days before the deadline to submit this overfit model to reduce our impact on the LB, and we renamed our team to signal that we knew we were overfit.</li> <li>I'm honestly surprised that more teams didn't take this approach. It seemed like the best way to incorporate all the data we had available to us while training offline. As it turns out, most of the top teams kept their training in the kernel.</li> </ul> <h2>Features:</h2> <ul> <li>For the MLP we experimented with over 4000 features. These features were mostly based on player positions and labeled players by distance to the runner, but also position laterally and position within the defensive line. @peterhurford did some cool stuff with feature selection that he can expand upon. I think our final MLP only used a hundred or so of these features. We found removing bad features to be equally effective as adding new ones.</li> <li>GNN setup used node and edge features and didn't require us to label players in relation to the runner. <ul><li>Node features were specific to each player (normalized S, direction)</li> <li>We added what we called <code>frame</code> features which were similar to what other top teams did where we simulated advancing of players positions based on their direction, speed and velocity.</li> <li>Edge features involved iterations between players. We ended up using, distance, angles, offsetting S and momentum. 
We also created edge features that were difference in X, Y speed and momentum but divided by distance.</li></ul></li> </ul> <h2>GNN setup</h2> <ul> <li>Our GNN model was based off of some starter code the @christofhenkel had modified from his CHAMPS solution.</li> <li>We experimented with a lot of different features, learning rate schedulers, scaling/preprocessing.</li> <li>One of the main things I spent time experimenting with was adjusting the design of our edge matrix. The edge matrix consists of binary (0,1) values representing the connections between nodes (player). We tried many different approaches for this: all players connected with each other, players only interacting with opponents, players only interacting with close players. Our final model's edge matrix used interactions between opposing team players AND the rusher regardless of the team.</li> </ul> <h2>Post processing</h2> <ul> <li>As others have mentioned we post processed to force 0's and 1's for yardage that would be beyond either endzones.</li> <li>We used <code>np.maximum.accumulate</code> to ensure our predictions were monotonically increasing.</li> </ul> <h2>Uploading models</h2> <p>Since we trained our models offline- but were not allowed to use external datasets, we were forced to paste plain text with our model weights directly in the kernel. Initially we just copied the string representation of pytorch's <code>state_dict</code>. However when we wanted to bag for our final solution the kernel would not commit if it was beyond 1MB in size (which actually we found to be more like 800kb, I guess kernels round up?). @peterhurford thought of a nice idea of zipping our model into a binary format, which allowed us to squeeze in an extra few models for bagging.</p> <h2>How it unfolded</h2> <p>@peterhurford and I teamed with a few weeks left before the merger deadline and I'm super grateful that he agreed to team. I learned a lot from him about MLP design, feature selection, and was amazed by his engineering abilities. Eventually we got stuck on the LB around 0.0133x and decided to reach out to @christofhenkel because of his experience and knowledge working with NNs. He agreed to join the team in a consulting role. Once we teamed he shook things up for us by suggesting the top teams were using GNN models. I'm really glad he agreed to join our team because he gave us the idea, motivation, and guidance for the GNN design.</p> <p>Thanks for reading and best of luck to everyone in stage 2!</p>
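<p>One way to realise the "zipping our model into a binary format" trick above is to serialize the <code>state_dict</code>, compress it and base64-encode it into a printable string that fits under the kernel size limit. This is a sketch of the idea, not the team's exact code:</p>
<pre><code>import base64, io, zlib
import torch

def state_dict_to_text(model):
    """Compress model weights into a printable string (paste this into the kernel)."""
    buffer = io.BytesIO()
    torch.save(model.state_dict(), buffer)
    return base64.b64encode(zlib.compress(buffer.getvalue(), 9)).decode("ascii")

def text_to_state_dict(text):
    """Recover the state_dict from the pasted string inside the kernel."""
    raw = zlib.decompress(base64.b64decode(text.encode("ascii")))
    return torch.load(io.BytesIO(raw), map_location="cpu")
</code></pre>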
NFL Big Data Bowl
2d-CNN for sparse heatmap images and MLP (1st stage 45th, 2nd stage 42nd place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Summary:</h2> <p>2d-CNN (Convolutional Neural Network) for sparse heatmap images and MLP for tabular data</p> <h3>Input tensor for CNN</h3> <p>Generated heatmap-like field images of 30 ((YardLine - 10) &lt;= X &lt; (YardLine + 20)) x 54 (0 &lt;= Y &lt; 54) yards grid (rounded to integer).</p> <p>After several experiments, the following combinations of 18 (= 3 x 3 x 2) channels worked best.</p> 3 player categories: <ul> <li>11 defense players</li> <li>10 offense players excluding the rusher</li> <li>The rusher (ball carrier)</li> </ul> 3 variables: <ul> <li>A (acceleration)</li> <li>S_X (speed in X axis)</li> <li>S_Y (speed in Y-axis)</li> </ul> 2 frames: <p>Computed another snapshot of 1 second later by adding the speed. (Also tried adding acceleration, but did not improve the performance.)</p> <h2>CNN architecture</h2> <ul> <li>4 layers</li> <li>Kept the X-direction until the dense layer and compressed in only Y-direction (stride=[1, 2]) as the X-direction is related to the outcome variable (Yards).</li> <li>Concatenated different kernel sizes as introduced in the Inception architecture</li> <li>CELU activation (slightly faster training than ReLU)</li> </ul> <p>CNN part of the architecture was configured in YAML for PyTorch as follows.</p> <p>(Please see <a href="https://github.com/Minyus/pipelinex">PipelineX</a> for the syntax) <code>yaml =: torch.nn.Sequential _: - {=: pipelinex.TensorSlice, end: 18} - =: pipelinex.ModuleConcat _: - {=: pipelinex.TensorConv2d, in_channels: 18, out_channels: 10, kernel_size: [3, 3]} - {=: pipelinex.TensorConv2d, in_channels: 18, out_channels: 10, kernel_size: [7, 7]} - {=: pipelinex.TensorConv2d, in_channels: 18, out_channels: 10, kernel_size: [3, 9]} - {=: torch.nn.CELU, alpha: 1.0} - =: pipelinex.ModuleConcat _: - {=: pipelinex.TensorAvgPool2d, stride: [1, 2], kernel_size: [3, 3]} - {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 30, out_channels: 10, kernel_size: [3, 3]} - {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 30, out_channels: 10, kernel_size: [7, 7]} - {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 30, out_channels: 10, kernel_size: [3, 9]} - {=: torch.nn.CELU, alpha: 1.0} - =: pipelinex.ModuleConcat _: - {=: pipelinex.TensorAvgPool2d, stride: [1, 2], kernel_size: [3, 3]} - {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 60, out_channels: 20, kernel_size: [3, 3]} - {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 60, out_channels: 20, kernel_size: [7, 7]} - {=: pipelinex.TensorConv2d, stride: [1, 2], in_channels: 60, out_channels: 20, kernel_size: [3, 9]} # -&amp;gt; [N, 120, 30, 14] - {=: torch.nn.CELU, alpha: 1.0} - =: pipelinex.ModuleConcat _: - =: torch.nn.Sequential _: - {=: torch.nn.AvgPool2d, stride: [1, 2], kernel_size: [3, 14]} # -&amp;gt; [N, 120, 28, 1] - {=: pipelinex.TensorConv2d, in_channels: 120, out_channels: 20, kernel_size: [1, 1]} - {=: pipelinex.TensorFlatten, _: } - {=: torch.nn.CELU, _: } - =: torch.nn.Sequential _: - {=: torch.nn.MaxPool2d, stride: [1, 2], kernel_size: [3, 14]} # -&amp;gt; [N, 120, 28, 1] - {=: pipelinex.TensorConv2d, in_channels: 120, out_channels: 20, kernel_size: [1, 1]} - {=: 
pipelinex.TensorFlatten, _: } - {=: torch.nn.CELU, _: } </code></p> <h3>MLP (Multilayer Perceptrons)</h3> <p>Add another channel to encode tabular features.</p> Continuous features <p>Max, Min, Mean, Stdev for axis (X, Y) and player categories (Defense, Offense)</p> Categorical features (One-hot encoded) <ul> <li>Down (1, 2, 3, 4)</li> <li>Flag of whether offense is home</li> <li>OffenseFormation</li> <li>DefendersInTheBoxCode</li> </ul> <h3>Computing the CDF output</h3> <ol> <li>Concatenate outputs of CNN and MLP.</li> <li>Add base probabilities computed by counting the Yards in the whole train dataset in 2018 to let the neural network learn the residual.</li> <li>Forced predicted probabilities for the yards beyond the goal line to 0</li> <li>Pad 0 to lower (-99 &lt;= Yards &lt; -10 yards) and upper (90 &lt;= Yards &lt; 100).</li> <li>Divide by the sum through Yards to make the sum to 1 (SoftMax without exponential)</li> <li>Compute cumulative sum through Yards</li> </ol> <h3>Loss function</h3> <p>CRPS with yards clipped to -10 to 29 yards</p> <h3>Other settings</h3> <ul> <li>Subtle augmentation <ul><li>random shift in X-axis: 0.1 yards stdev of normal distribution</li> <li>random shift in Y-axis: 1.0 yards stdev of normal distribution</li> <li>(random flip in Y-axis decreased the performance thus not used.) </li></ul></li> <li>Discarded all 2017 data which was very different from 2018 due to sensor issues and hard to adjust</li> <li>Batch size: 32</li> <li>Optimizer: Adam</li> <li>Learning rate scheduler: LinearCyclicalScheduler (slightly better than CosineAnnealingScheduler)</li> </ul> <h3>What did not work:</h3> <ul> <li>Treat players as graph <ul><li>Use the reciprocal of distance between players as edge weights, compute Laplacian spectrum, count the number of 0 eigenvalues which equals to number of connected subgraphs to use additional features</li></ul></li> <li>Scaling <ul><li>RankGauss</li> <li>StandardScaler</li></ul></li> </ul> <h2>Dependencies available in Kaggle Kernel</h2> <ul> <li>torch==1.1.0</li> <li>pytorch-ignite==0.2.0</li> <li>pandas==0.25.1</li> <li>numpy==1.16.4</li> </ul> <h2>Dependencies not available in Kaggle Kernel</h2> <ul> <li><a href="https://github.com/Minyus/pipelinex">pipelinex</a> (developed with/for this competition and open-sourced)</li> </ul> <h2>Dependencies only for experimentation (not used in Kaggle Kernel)</h2> <ul> <li>kedro</li> <li>mlflow </li> </ul> <h2>The Python code and YAML config are available at:</h2> <p><a href="https://github.com/Minyus/kaggle_nfl">https://github.com/Minyus/kaggle_nfl</a></p>
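<p>For illustration, a minimal sketch of how a single play could be rasterised into the 18-channel 30x54 grid described above (3 player categories x 3 variables x 2 frames). Columns such as <code>YardLine_std</code> and <code>IsOnOffense</code> are assumed to be precomputed, and the Dir-to-velocity convention follows common public kernels; the author's real pipeline is in the linked repository:</p>
<pre><code>import numpy as np

def play_to_grid(play):
    """play: the 22-row DataFrame of one play, coordinates already standardised
    so the offense moves towards increasing X (column names partly assumed)."""
    grid = np.zeros((18, 30, 54), dtype=np.float32)
    x0 = play["YardLine_std"].iloc[0] - 10            # left edge of the 30-yard window
    for _, p in play.iterrows():
        rad = np.deg2rad(90.0 - p["Dir"])             # a common Dir-to-angle convention
        sx, sy = p["S"] * np.cos(rad), p["S"] * np.sin(rad)
        if p["NflId"] == p["NflIdRusher"]:
            group = 2                                 # the rusher
        elif p["IsOnOffense"]:
            group = 1                                 # offense excluding the rusher
        else:
            group = 0                                 # defense
        for frame in range(2):                        # snapshot now and ~1 second later
            x = int(np.clip(p["X"] + frame * sx - x0, 0, 29))
            y = int(np.clip(p["Y"] + frame * sy, 0, 53))
            c = frame * 9 + group * 3
            grid[c, x, y] += p["A"]                   # acceleration channel
            grid[c + 1, x, y] += sx                   # speed along X
            grid[c + 2, x, y] += sy                   # speed along Y
    return grid
</code></pre>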
NFL Big Data Bowl
Private 13th place journey (pocket side)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition was one of the best competition that I can remember. <br> Diverse approaches, scores never stagnating, real world setup, no leaks and interesting data :) <br> So I want to thank everyone involved, including the hosts, kaggle admins, the competitors and of course, my teammates. </p> <p>Since I got an unexpected gold medal, I am going to write about my journey in this competition. <br> By the way, our final model was a single model, best described by TripleLift (<a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119331">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119331</a>). </p> <p>I started this competition early, with my hopes really high. <br> It was a while since I gave kaggle a serious shot, and I was prepared to commit myself to this competition. <br> The competition metric is basically an RMSE for each yards, so I built a LightGBM model for each 199 yards. <br> Of course, this makes runtime really long, but since only the yards -5 to 15+ really matter for the score, <br> I predicted yards -5 to 15 and filled the other yards with average. <br> This gave me first place at that time, and I was fooled to believe this would be a winning approach (so silly looking back, haha) </p> <p>After a while, I started dropping to the silver zone. <br> Because making new features consistently improve my score, I decided to merge with my teammate Emin, who seemed to have a lot of good features. <br> With his NN and features, we reached close to gold zone (but not enough). <br> Since we were so far away from the top, I was thinking that there must be some kind of magic. <br> I searched really hard, noticed the 2017/2018 difference of S and A, tried to make "the magic" happen, but couldn't make it. <br> After a while, I noticed that adjusting the S and A to 2017 improved my LightGBM model, so I stuck with that adjustment. <br> In hind-sight, this was really bad of me, because I should have tried different approaches for NN. <br> This was a extremely important, because our after competition experiments show that adjusting to 2018 like Patrick did <br> (<a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314</a>) would have given us roughly the same score (2nd place). </p> <p>Anyways, I was stuck in silver, and I made a last minute merge with TripleLift and yuhao. <br> The merge went way better than I expected. Their model was a totally different one from ours, with basically no features and a simple embedding NN. <br> I thought that this could have been all the difference, so I put in a lot of efforts to improve the NN. <br> The most positive change I made was adding the "T+n" feature, which is where the players are, n seconds after the start. <br> I also removed a lot of garbage features based on my experience of GBDT in this competition which might have improved the model. <br> The last part was adding a multi-output. We were doing a softmax classification, so I added a regression part as well. 
<br> This change was good in certain folds, especially the LB, so we used it, but not so sure about how positive it actually was. </p> <p>This ended us just below the gold zone in publicLB, which was really disappointing given all the high hopes I had. <br> I didn't expect much shake of the LB because of how consistent our CV and LB was throughout the competition. <br> However, In the end, I was proven wrong again, and reached a happy gold medal this time. </p> <p>Thanks for reading, and good luck to everyone in the future ;) </p>
NFL Big Data Bowl
Public 8th place overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>model and training</h2> <ul> <li>I used the neural network model based on my public kernel (<a href="https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381?scriptVersionId=22302384">https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381?scriptVersionId=22302384</a>).</li> <li>The changes from the kernel are <ul><li>global average pooling over the player dimension</li> <li>mish activation</li> <li>RAdam optimizer</li> <li>binary classification loss (e.g. yard &gt; 0 or not)</li> <li>mixup augmentation - all strong features are numerical, so mixup augmentation might be useful</li></ul></li> </ul> <h2>features</h2> <ul> <li>My best features are based on future player positions calculated from X, Y, A, S, and Dir <ul><li>distance and distance/speed between players at different time points</li> <li>These features made big improvements on LB (about 0.01335 → 0.01275)</li></ul></li> <li>Feature selection was performed by recursive feature elimination with permutation importance</li> </ul> <h2>rough LB score history with the main change</h2> <ul> <li>0.01385 : my public kernel ( <a href="https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381?scriptVersionId=22302384">https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381?scriptVersionId=22302384</a> )</li> <li>0.01350 : add aggregation features grouped by rusher team, non-rusher team, and players near the rusher</li> <li>0.01335 : change the neural network model to not depend on the player order of the inputs by using global average pooling</li> <li>0.01275 : features based on future player positions</li> <li>0.01261 : tuning (feature selection and bayesian optimization of neural network parameters)</li> </ul>
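<p>A minimal sketch of the future-player-position features mentioned above: extrapolate every player along their current velocity and aggregate rusher-to-defender distances at each time horizon. The Dir-to-velocity convention follows common public kernels and is an assumption here:</p>
<pre><code>import numpy as np

def future_defender_distances(play, horizons=(0.0, 0.5, 1.0)):
    """play: 22-row DataFrame for one play with standardised coordinates."""
    rad = np.deg2rad(90.0 - play["Dir"].values)
    vx = play["S"].values * np.cos(rad)
    vy = play["S"].values * np.sin(rad)
    is_rusher = (play["NflId"] == play["NflIdRusher"]).values
    rusher_team = play.loc[is_rusher, "Team"].iloc[0]
    is_defense = (play["Team"] != rusher_team).values
    feats = {}
    for t in horizons:
        x = play["X"].values + t * vx                 # positions t seconds ahead
        y = play["Y"].values + t * vy
        dist = np.hypot(x[is_defense] - x[is_rusher], y[is_defense] - y[is_rusher])
        feats[f"def_dist_min_t{t}"] = dist.min()
        feats[f"def_dist_mean_t{t}"] = dist.mean()
    return feats
</code></pre>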
NFL Big Data Bowl
1st place solution The Zoo
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We are glad to publish the solution write-up of The Zoo by <a href="/dott1718">@dott1718</a> and <a href="/philippsinger">@philippsinger</a>.</p> <p>We want to sincerely thank the hosts and Kaggle for making this competition possible. We had a lot of fun crafting our solution as it was necessary to think a bit out of the box and come up with something that really reflects the situation on the field. An extra thanks goes to Michael Lopez for actively participating in all the discussions and activities around the competition. That did add motivation to improve and believe that we can bring some value to NFL analytics. Can’t remember the last time we’ve seen such involvement of a host into the competition.</p> <p>There was little problem with the data (2017 measurement differences were disclosed) and there was a nice correlation between CV and public LB. There was also no real chance to cheat as private LB will be on future data. We also want to thank all competitors for not exploiting the possible leak in public LB.</p> <p>We really hope there won’t be any surprises on the private LB data and we hope our kernels will run through. In these types of kernel competitions there is always the risk of something failing, which would be devastating, of course. </p> <p>Regardless of what happens, we are really proud of our solution and strongly believe that it can be a valuable asset to future endeavors in NFL analytics.</p> <p><strong>TL;DR:</strong> It’s a 2d CNN based on relative location and speed features only.</p> <h3>Solution motivation</h3> <p>Few words about how we came up with the model structure. To simplify we assume a rushing play consists of: - A rusher, whose aim is to run forward as far as possible - 11 defense players who are trying to stop the rusher - 10 remaining offense players trying to prevent defenders from blocking or tackling the rusher</p> <p>This description already implies connections between which players are important and which might be irrelevant, later we proved it to be the case on CV and LB. Here is an example of play visualization we used (based on the modified kernel from Rob Mulla [1])</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F85156%2F3304f790cf66a201d247243af8312433%2FPlayers%201.png?generation=1574946183096843&amp;alt=media" alt=""></p> <p>If we focus on the rusher and remove other offense team players, it looks like a simple game where one player tries to run away and 11 others try to catch him. We assume that as soon as the rushing play starts, every defender regardless of the position, will focus on stopping the rusher asap and every defender has a chance to do it. 
The chances of a defender to tackle the rusher (as well as estimated location of the tackle) depend on their relative location, speed and direction of movements.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F85156%2Fb81666526aa2c9098d2156cd21071787%2FPlayers%202.png?generation=1574946200648463&amp;alt=media" alt=""></p> <p>Another important rule we followed was not to order the players, because that would force an arbitrary criteria into the model, which will not be optimal. Besides, the picture from above gives us the reason to believe each defender should be treated in a similar manner.</p> <p>That points to the idea of a convolution over individual defenders using relative locations and speeds, and then applying pooling on top.</p> <p>At first we literally ignored the data about 10 offense players and built a model around the rusher and defenders, which was already enough to get close to 0.013 on public LB. Probably with proper tuning one can even go below 0.013.</p> <p>To include the offense team player we followed the same logic - these 10 players will try to block or tackle any of the defender if there is a risk of getting the rusher stopped. So, to assess the position of a defender we want to go through all the offense team players, use their location and speed relative to the defender, and then aggregate. To do so, we apply convolution and pooling again. So good old convolution - activation - pooling is all we needed.</p> <h3>Model structure</h3> <p>The logic from above brought us to the idea of reshaping the data of a play into a tensor of defense vs offense, using features as channels to apply 2d operations.</p> <p>There are 5 vector features which were important (so 10 numeric features if you count projections on X and Y axis), we added a few more, but they have insignificant contribution. The vectors are relative locations and speeds, so to derive them we used only ‘X’, ‘Y’, ‘S’ and ‘Dir’ variables from data. Nothing else is really important, not even wind direction or birthday of a player ;-)</p> <p>The simplified NN structure looks like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F85156%2F2b5c9ce8e54f58ba78dcf120ef49c278%2FNN%20structure.png?generation=1574945484839246&amp;alt=media" alt=""></p> <p>So the first block of convolutions learns to work with defense-offense pairs of players, using geometric features relative to rusher. The combination of multiple layers and activations before pooling was important to capture the trends properly. The second block of convolutions learns the necessary information per defense player before the aggregation. And the third block simply consists of dense layers and the usual things around them. 3 out of 5 input vectors do not depend on the offense player, hence they are constant across “off” dimension of the tensor.</p> <p>For pooling we use a weighted sum between both average and max pooling with average pooling being more important (roughly 0.7). In earlier stages of the model, we had different kinds of activations (such as ELU) as they don’t threshold the negative weights which can be problematic for the pooling, but after tuning we could switch to ReLU which is faster and had similar performance. 
We directly optimize CRPS metric including softmax and cumsum.</p> <p>For fitting, we use Adam optimizer with a one cycle scheduler over a total of 50 epochs for each fit with lower lr being 0.0005 and upper lr being 0.001 and 64 batch size. We tried tons of other optimizers, but plain Adam is what worked best for us. </p> <h3>CV</h3> <p>We were quite fortunate to discover a really robust CV setup. Probably, we will never have such a nice CV again. In the end, it is quite simple. We do 5-fold GroupKFold on GameId, but in validation folds we only consider data from 2018 (similar to how Patrick Yam did it [2]). We saw very strong correlations between that CV and public LB as 2019 data is way more similar to 2018 data compared to 2017 data. Having the 2017 data in training is still quite crucial though. As we are using bagging on our final sub, we also bagged each fold 4 times for our CV, meaning our final CV is a 5-fold with each fold having 4 bags with random seeds.</p> <p>Having such a strong CV setup meant that we did not always need to check public LB and we were quite confident in boosts on CV. We actually had quite a long period of not submitting to public LB and our improvements were all gradual. Based on given correlation, we could always estimate the rough LB score. You can see a plot of some of our CV and LB models below. The x-axis depicts the CV score, and y-axis respective LB score. Blue dots are models actually submitted to LB, and red dots are estimates. You can see that we lost the correlation only a tiny bit in the end, and our theoretical public LB score would have been below 0.01200. Our final CV for 2018 is around 0.012150.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F85156%2F09167c277a7fac56e0b9e44d9246c0bd%2FCV_LB.png?generation=1574945690175061&amp;alt=media" alt=""></p> <h3>Data processing</h3> <p>As we assume most people did, we adjusted the data to always be from left to right. Additionally, for training we clip the target to -30 and 50. For X,Y and Dir there is no other adjustment necessary, however, as most have noted, there are some issues with S and A. Apparently, the time frames were slightly different between different plays.</p> <p>For S, the best adjustment we found is to simply replace it with Dis * 10. A is a bit more tricky as there is apparently some form of leak in 2017 data (check the correlation between rusher A and target). So what we did is to adjust A by multiplying it with (Dis / S) / 0.1. That means we scale it similarly to how we scale S. After all, A only has a tiny signal after this adjustment, and one can easily drop it. As we rely on relative features in the model, we don’t apply any other standardization.</p> <h3>Augmentation &amp; TTA</h3> <p>What worked really well for us is to add augmentation and TTA for Y coordinates. We assume that in a mirrored world the runs would have had the same outcomes. For training, we apply 50% augmentation to flip the Y coordinates (and all respective relative features emerging from it). We do the same thing for TTA where we have a 50-50 blend of flipped and non-flipped inference.</p> <h3>Code optimization</h3> <p>We decided quite early that it is best to do all the fitting within the kernel, specifically as we also have 2019 data available in the reruns. 
So we also decided early to spend time on optimizing our runtime, because we also knew that when fitting NNs it is important to bag multiple runs with different seeds as that usually improves accuracy significantly and it removes some of the luck factor.</p> <p>As mentioned above, we use Pytorch for fitting. Kaggle kernels have 2 CPUs with 4 cores, where 2 of those cores are real cores and the other 2 are virtual cores for hyperthreading. While a single run is using all 4 cores, it is not optimal in terms of runtime, because you cannot multiprocess each operation in a fit. So what we did is to disable all multithreading and multiprocessing of Python (MKL, Pytorch, etc.) and did manual multiprocessing on a bag level. That means we can fit 4 models at the same time, gaining much more runtime compared to fitting a single model on all 4 cores. </p> <p>Our final subs fit a conservative number of 8 models each, having a total runtime of our subs at below 8500 seconds.</p> <h3>What didn’t work</h3> <ul> <li>Transformers and multihead attention, which seem to approximate the dependencies we explicitly use. We mainly focused on trying out attention to include offense-offense and defence-defence dependencies.</li> <li>LSTM instead of CNN.</li> <li>Adding dependencies like offense-offense and defence-defence explicitly.</li> <li>As soon as all the inputs are vectors, it seems tempting to try complex numbers based NNs. There is even a nice paper and a github repo available with the math and implementation of complex number version of all the layers we are using [3], but in keras. However, all the attempts we’ve made to limit CNNs to vector operations gave worse results.</li> <li>Going deeper and wider.</li> <li>CNN adjustments known from CV like Squeeze-and-Excitation layers or residual networks.</li> <li>Voronoi features.</li> <li>Weighing 2018 data higher than 2017 data.</li> <li>Multi-task learning, label smoothing, etc.</li> <li>Other optimizers, schedulers, lookahead, etc.</li> </ul> <h3>Final subs</h3> <p>Our first sub is our best model fitted on an 8-fold with picking the best epochs based on CV using 2018 and 2019 data (in the rerun, only 2018 in public LB). This model currently has 0.01205 public LB. Our second sub is using full data for fitting with fixed epochs (no early stopping). It currently has public LB 0.01201.</p> <p>In private reruns we incorporate 2019 data into training and we hope that all goes well, but you never know.</p> <p>P.S. Don’t forget to give your upvotes to <a href="/philippsinger">@philippsinger</a> as well - this model is a great example of teamwork.</p> <p>[1] <a href="https://www.kaggle.com/robikscube/nfl-big-data-bowl-plotting-player-position">https://www.kaggle.com/robikscube/nfl-big-data-bowl-plotting-player-position</a> [2] <a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314</a> [3] <a href="https://arxiv.org/abs/1705.09792">https://arxiv.org/abs/1705.09792</a></p>
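<p>The "directly optimize CRPS including softmax and cumsum" step can be sketched as follows in PyTorch - softmax over the 199 yard bins, cumulative sum to form the CDF, then squared error against the Heaviside step target. This illustrates the idea and is not The Zoo's actual code:</p>
<pre><code>import torch
import torch.nn.functional as F

def crps_loss(logits, yards):
    """logits: (N, 199) raw outputs; yards: (N,) true yards in [-99, 99]."""
    cdf_pred = torch.cumsum(F.softmax(logits, dim=1), dim=1)
    bins = torch.arange(-99, 100, device=logits.device, dtype=torch.float32)
    cdf_true = (bins.unsqueeze(0) &gt;= yards.unsqueeze(1).float()).float()
    return ((cdf_pred - cdf_true) ** 2).mean()
</code></pre>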
NFL Big Data Bowl
Things that worked
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm very curious to learn about what others did, here's my approach:</p> <ul> <li>augment the data by copying all rows twice, once with Yards-1 and once with Yards+1</li> <li>engineer hundreds of features, including features on indiviual players, sorted by their distance to the rusher (within offense/defense)</li> <li>forward feature selection, using a linear (Ridge) model, with strong regularization and a QuantileTransformer to reduce overfit and outlier effects; greedy selection helped to find different feature sets; I even searched pairwise products of all features</li> <li>modeling: weighted ensemble of LightGBM (5-10 leaves), MultiLayerPerceptron (4 hidden layers) and Logistic regression (all classification models) and the regression model using ExtraTrees that I shared as a notebook</li> <li>special trick: add last known X/Y by adding S x 0.5 x Dir (because S was calculated using distance half a second before and after handoff)</li> <li>postprocessing: set predictions that are impossible (Yardline + Yards below 0 or above 100) to 0/1</li> </ul>
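<p>A minimal sketch of the label-shift augmentation in the first bullet above, assuming a DataFrame with a <code>Yards</code> column:</p>
<pre><code>import pandas as pd

def augment_yards(train):
    """Duplicate each play twice, shifting the label by -1 and +1 yards."""
    minus = train.copy()
    minus["Yards"] -= 1
    plus = train.copy()
    plus["Yards"] += 1
    return pd.concat([train, minus, plus], ignore_index=True)
</code></pre>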
NFL Big Data Bowl
Public LB 10th Place - NN vs. GBM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I guess everyone who competed in this competition realized fairly quickly that NNs were going to be much better than GBMs for this data. My team provides a case study in the pros and cons of the two approaches.</p> <p>@sasrdw and I teamed up early in this competition, on Oct. 20. At that time, we both had top-10 solutions. His was an NN and mine was an LGB model. We decided to work on our respective models separately but share FE and other ideas.</p> <p>The NN model started outscoring the LGB model fairly quickly and extended its lead as time went on. In the end, our best NN model scored 0.01265 on the LB and our best LGB model scored only 0.01297. Originally, we thought we would be able to blend or ensemble the two models, but ultimately we couldn't do this and still remain within the time limit.</p> <p>In retrospect, it would have made more sense for me to abandon the LGB model and work up an alternative NN model, but I felt the LGB model, even though it was under-performing, was still useful for testing new features and approaches because it was fast. This turned out to be only partly true because many of things that worked for the LGB model didn't work for the NN model, and vice versa. Here are a few examples:</p> <ul> <li><p>Handling 2017 S &amp; A data: The best approach I found for the LGB model was to make all S &amp; A values relative per play based on the max values. So the fastest players per play received a value of 1 and the others were some fraction of 1. (The absolute speeds were still represented in the Dis feature). For NN, Russ found it was better to treat the 2017 &amp; 2018 values as entirely separate features.</p></li> <li><p>Augmentation: For the LGB model, it helped to augment the data with two duplicates, modifying the labels +1 and -1, as @gertjac mentioned in his solution description. This didn't help the NN model.</p></li> <li><p>Bagging: Bagging with different seeds and different groups of training data had almost no effect on the LGB model but helped the NN model significantly.</p></li> </ul> <p>Some things that helped both models:</p> <ul> <li><p>Validation: We had a few different validation methods but the one that worked best for both models was to use the last 3 games of all teams as the holdout set.</p></li> <li><p>Label clipping: Clipping labels to reduce the effect of outliers (and speed up training) helped both models. For LGB, the best clipping values were -7 to 22. For NN, I think it was -10 to 35.</p></li> <li><p>FE: Most of the geometric features (distances, voronoi areas, etc.) helped both models. Also, simulating the plays forward by n seconds helped a lot. In the end, the NN model had about 110 features and the LGB model about 160.</p></li> </ul> <p>It's really interesting to see how everyone approached this problem, and I'm looking forward to seeing the final results. We found that just a few long runs can have a large impact on scores, so I'm expecting a fair amount of shake-up.</p>
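<p>A minimal sketch of the validation scheme that worked best above - holding out each team's last three games - assuming the standard competition columns and that <code>GameId</code> sorts chronologically:</p>
<pre><code>import pandas as pd

def last_n_games_split(df, n=3):
    """Return (train, valid) with each team's last n games held out."""
    games = pd.concat([
        df[["GameId", "HomeTeamAbbr"]].rename(columns={"HomeTeamAbbr": "Team"}),
        df[["GameId", "VisitorTeamAbbr"]].rename(columns={"VisitorTeamAbbr": "Team"}),
    ]).drop_duplicates()
    holdout = set(games.sort_values("GameId").groupby("Team").tail(n)["GameId"])
    valid_mask = df["GameId"].isin(holdout)
    return df[~valid_mask], df[valid_mask]
</code></pre>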
NFL Big Data Bowl
10th Place Sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First thanks for NFL and Kaggle for such a competition. It was a journey with lots of learning for us in terms of NFL NFL domain and trying different Data Science ideas.</p> <p>What we did is simply this: Try a lot and fail a lot. But find some useful at the end. Since none of us in our team were used to complex DL models, we were only using MLP and we were mostly focusing on extracting smart features and how to feed them into MLP.</p> <p>We are happy with our score because we were able to get 1268 with only MLP and many features.</p> <p>Let's pass on more detail:</p> <p><strong>Model &amp; Features</strong> Our model is just a normal NN with 5 inputs. 2 of them are for categorical features, 1 is for distance/speed &amp; distance/speed projection features, 1 is for angle &amp; angle projection features, 1 is for aggregation statistics features. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Ff0aa8a21247e396004aca0a811bde56f%2Fdownload%20(4" alt="">.png?generation=1574945297226928&amp;alt=media)</p> <p><strong>Validation</strong> We used 5 GroupKFold on ‘Week’, but actually the result is similar to grouping on ‘GameId’ We validated and early stop our model with &gt; 2017 CV only, so in stage 2 it will early stop based on 2018+2019 data.</p> <p><strong>About Rusher’s Speed and Acceleration</strong> Rusher’s acceleration is useful to predict Yards in 2017 but not 2018: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Fa4d23ca3edc0ed556c7db290a3cb3d0f%2Fimage.png?generation=1574945361392996&amp;alt=media" alt=""></p> <p>Rusher Speed in 2018 is super linear with the distance but not in 2017: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Fca31bd3f1d128b4418da55ff72ee9f68%2FScreenshot%20from%202019-11-01%2002-11-55.png?generation=1574945400449410&amp;alt=media" alt=""></p> <p>Since the host said both S and A were calculated by Dis, we started to think the root cause is the location tracking. But then the host said the location tracking has been accurate. Therefore, our next guess is about the sampling frequency. The chances are the timestamp of sensors’ readings got messed up, or a resolution problem: because the sensors in 2017 and 2018 are different, their reading may have been forced to the same frequency 10 Hz. 
We don’t know the answer, but we then try the following 2 things:</p> <p>1) Rescale 2017 A and S with 2018 mean and std -&gt; a little improvement in LB 2) S = Dis * 10 -&gt; not improvement so we dropped it.</p> <p>Ok let's talk about the magic jumps:</p> <p><strong>Jump 1: Yards Post Processing</strong> Same as other teams, we corrected some impossible Yards prediction based on YardLine.</p> <p><strong>Jump 2: Angles</strong> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Ffde1a54e0ed43d047ca0db8df4a1bc36%2FIMG_8D149B586699-1.jpeg?generation=1574945481716293&amp;alt=media" alt=""></p> <ul> <li>Alpha: The relative angle between a player and the rusher</li> <li>Theta: Reverse of Alpha</li> <li>Beta: The different between Rusher’s Direction and the Theta, it indicates if a player is blocking the rusher’s path. Small abs(Beta) indicates the rusher is running towards the player.</li> </ul> <p><strong>Jump 3: Future Features Estimation</strong> Features about what is going to happen, in next 0.5, 0.75, 1, 1.25, 1.5 seconds 1. the distance between rusher and closest players. 2. the distances between defensive players and offensive players. 3. Their distances to the yard line are important. 4. Speed is also projected, the projected horizontal/vertical speed of the rusher is important.</p> <p><strong>Jump 4: Standardizing Rusher Direction</strong> Consider the below 4 plays. The Black arrow indicating the rusher’s direction. We flipped all play with rusher’s direction pointing downwards. And the result is the blue arrows, after the flipping, you can see 1 &amp; 2 are the same, 3&amp;4 are the same. As a result, all rusher’s direction will belong to [270, 360] and [0, 90].</p> <p>This flipping is tricky, we have to make sure the entire play is mirrored along the middle line. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F690886%2Fdf3a3a717a1eabbb5ebf6a7cfa854395%2Ffullsizeoutput_19d0.jpeg?generation=1574945602992218&amp;alt=media" alt=""></p> <p><strong>The end</strong> Thanks to the host for offering this nice competition. We have learned a lot here. Also after reviewing other talents' approaches, we really need to update our deep learning skills. </p>
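<p>A minimal sketch of adjustment (1) above - rescaling 2017 S and A to the 2018 mean and standard deviation (the <code>Season</code> column comes with the competition data):</p>
<pre><code>def rescale_2017_to_2018(df, cols=("S", "A")):
    """Match the 2017 distribution of S and A to 2018 via mean/std rescaling."""
    is17, is18 = df["Season"] == 2017, df["Season"] == 2018
    for col in cols:
        m17, s17 = df.loc[is17, col].mean(), df.loc[is17, col].std()
        m18, s18 = df.loc[is18, col].mean(), df.loc[is18, col].std()
        df.loc[is17, col] = (df.loc[is17, col] - m17) / s17 * s18 + m18
    return df
</code></pre>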
NFL Big Data Bowl
4th Place (Public LB) Efforts
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to say thanks to the organizers for hosting such an exciting competition. To be honest, I knew little about the NFL and the rules of American football before this competition started, but now I really enjoy the matches every week ;) I also appreciate all the kagglers who shared info and joined the discussions, especially mrkmakr, kenmatsu4, CPMP and charlie_s. Their kernels and insights greatly influenced our models.</p> <p>Our team consisted of owruby, David and me. David and I mainly took on the role of feature engineering and owruby developed the NN model. Team members were fixed just before the team merge deadline. At that point owruby and I were using Python and David was using R, so the main work of the final week was translating David's features from R scripts into Python code. It was tough work, but it produced a diversity of features.</p> <p>I'll share a brief summary and the essence of our approach, because it is not as simple and elegant as the other high rankers' solutions.</p> <h2>Overview</h2> <p>We made a lot of play-level and player-level features. I think the interesting point of our features is the variations of voronoi features. The main model is an MLP with several kinds of attention, and the sub-model for the ensemble is a logistic regression.</p> <h2>Validation Schemes</h2> <p>We used two kinds of validation. The first is an ordinary Group 5-fold CV on GameId and the second is a time-split validation predicting the plays of 2018/12. Neither of them was very consistent with the LB on its own, but when the scores of both clearly improved, the LB score nearly always improved as well.</p> <h2>Models</h2> <h3>MLP</h3> <p>The MLP model was mainly developed by owruby, so I cannot write down the details here. It takes play-level and player-level features as separate inputs, which are then concatenated using several kinds of attention. The network is trained with three losses: a CRPS loss on a Softmax layer, an MAE loss on a Linear layer and a CRPS loss on a Gaussian layer, with weights of 1.0 : 0.01 : 0.1. The former two losses were from mrkmakr's kernel and the latter one was inspired by kenmatsu4's kernel. The Gaussian layer didn't improve the LB score much, but we think it contributed to the robustness of training.</p> <h3>Logistic Regression</h3> <p>The logistic regression model was trained only on play-level features with year-weighted samples (2017 : 2018 : 2019 = 0.1 : 1.0 : 2.0). The LR model's score was not so good (0.00030 behind the MLP), but the weighted ensemble ( MLP : LR = 1.0 : 0.3 ) improved the LB a little (~0.00005).</p> <h2>Features</h2> <p>We made various features representing players' absolute and relative locations, S and A. As others did, we first standardized the player coordinates so that the offense is moving to the right side, and calculated near-future locations and speeds (0.5 sec and 1.0 sec after hand-off). We then created varieties of features such as distances and angles between players, and voronoi features from the point of view of both the offensive and defensive sides.</p> <h3>Voronoi Features</h3> <p>We used several kinds of features from the voronoi calculation. The most effective features are the x and y lengths of the ballcarrier's and the other players' voronoi regions at each time point. Our voronoi features were originally developed in R by David. The deldir package in R easily calculates voronoi regions with boundaries, so the areas of the regions and the coordinates of the edges are always finite values, and it was a natural idea to use them as features. However, the voronoi function in Scipy cannot calculate bounded voronoi regions, so areas and coordinates often came out as infinite values, and I struggled to translate the voronoi feature calculation. But thanks to the discussion between CPMP and charlie_s in his kernel, I learned that I could calculate virtually bounded voronoi regions by mirroring the points across the four sides, and I deployed that. The voronoi features as a whole improved CRPS by about 0.00010.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F312139%2F497c9572f92c257410ccd7744569f245%2F2019-12-02%2016.54.28.png?generation=1575276864053926&amp;alt=media" alt=""></p> <h2>Training Augmentation and TTA</h2> <p>We noticed that we can do training augmentation and TTA by flipping coordinates, and both improve model performance. But we only used TTA, because execution time was limited and using only TTA was almost as effective as doing both.</p> <h2>Final Submission and Execution Time</h2> <p>We selected the 4-seed-averaged MLP model, and a weighted average of the 4-seed-averaged MLP and the 5-fold-averaged logistic regression, as our final submissions. The execution time of the former is ~9000 sec with about 4000 sec of training, and the latter is about 10500 sec with ~5000 sec of training. I implemented "try except" everywhere in my code so that our kernel submits naive average values in case of an error at model prediction or a timeout.</p>
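<p>The mirroring trick for bounded voronoi regions can be sketched as follows: reflect the 22 player points across the four field boundaries before calling Scipy, so every original point ends up with a finite cell whose area can be used as a feature. This is an illustration, not the team's code:</p>
<pre><code>import numpy as np
from scipy.spatial import ConvexHull, Voronoi

def bounded_voronoi_areas(points, x_max=120.0, y_max=53.3):
    """points: (22, 2) numpy array of player coordinates. Returns one area per player."""
    mirrored = np.vstack([
        points,
        points * [-1, 1],                     # reflect across x = 0
        points * [1, -1],                     # reflect across y = 0
        [2 * x_max, 0] + points * [-1, 1],    # reflect across x = x_max
        [0, 2 * y_max] + points * [1, -1],    # reflect across y = y_max
    ])
    vor = Voronoi(mirrored)
    areas = []
    for i in range(len(points)):              # only the original 22 points
        region = vor.regions[vor.point_region[i]]
        # with the mirrored points every original cell is finite and convex
        areas.append(ConvexHull(vor.vertices[region]).volume)  # .volume is the 2D area
    return areas
</code></pre>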
NFL Big Data Bowl
Public LB 34th Solution Sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank everyone for the great work. I'm impressed by how elegant the solution of @philippsinger is. Congrats to everyone and I wish our kernels will survive stage 2.</p> <p>I'll just share some ideas that really helped me.</p> <ol> <li>Model: my model structure was mostly based on the kernel shared by @mrkmakr</li> <li>Features: cosine/sine of speed, distance, sum(exp(- defender distance from rusher)) etc</li> <li>Simulate player positions after 1 second and compute derived features</li> <li>Data Augmentation by symmetry (flipping X/Y as most said)</li> <li>Data Augmentation by adding random noise to X/Y, for me uniform random 1 yard helped most</li> <li>CV: GroupKFold by week</li> </ol>
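<p>A minimal sketch of augmentation idea 5 above - adding uniform random noise to the player coordinates during training. The exact noise range used is not stated precisely, so the default here is a guess:</p>
<pre><code>import numpy as np

def jitter_coordinates(play_df, scale=1.0, rng=None):
    """Return a copy of one play with X/Y shifted by uniform noise in [-scale, scale]."""
    rng = rng or np.random.default_rng()
    noisy = play_df.copy()
    noisy["X"] = noisy["X"] + rng.uniform(-scale, scale, len(noisy))
    noisy["Y"] = noisy["Y"] + rng.uniform(-scale, scale, len(noisy))
    return noisy
</code></pre>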
NFL Big Data Bowl
2nd Private LB/ 3rd Public LB Solution - Transformer
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>Data Cleaning</strong></h1> <p>As we all know data in 2017 is different from 2018, data cleaning is very important in this competition.</p> <ul> <li>Orientation: 90 degree rotation in 2017</li> <li>A: I cannot find a good way to standardize A, I replace A in 2017 by the mean, surprisingly this improve my LB by 0.0002</li> <li>S: if we look at 2018 data, we can see that S is linearly related to Dis<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1904629%2F08b63866e2b7f4cc9fca6ff4fcc4c30c%2FPicture1.png?generation=1574907774244324&amp;alt=media" alt=""><br> While data in 2017 is not very fit,<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1904629%2F10b4500caa590ff46ba64efa8abb5a23%2FPicture2.png?generation=1574907822172309&amp;alt=media" alt=""><br> By fitting a linear regression on 2018 data, the coefficient of lr is 9.92279507, which is very close to 10, so finally I replace S by 10 * Dis for both 2017 and 2018 data. This also gave me 0.0002 improvement.</li> </ul> <h1><strong>Features</strong></h1> <p>total 36 features, ['IsRusher','IsRusherTeam','X','Y','Dir_X','Dir_Y',<br> 'Orientation_X','Orientation_Y','S','DistanceToBall',<br> 'BallDistanceX','BallDistanceY','BallAngleX','BallAngleY',<br> 'related_horizontal_v','related_vertical_v',<br> 'related_horizontal_A','related_vertical_A',<br> 'TeamDistance','EnermyTeamDistance',<br> 'TeamXstd','EnermyXstd',<br> 'EnermyYstd','TeamYstd',<br> 'DistanceToBallRank','DistanceToBallRank_AttTeam','DistanceToBallRank_DefTeam',<br> 'YardLine','NextX','NextY',<br> 'NextDistanceToBall',<br> 'BallNextAngleX','BallNextAngleY',<br> 'BallNextDistanceX','BallNextDistanceY','A']</p> <h1><strong>Cross Validation</strong></h1> <p>Always include 2017 data for training, 3 group folds by week for 2018 data, use only 2018 data for evaluation. In this way the CV score is close to public LB.</p> <h1><strong>Model</strong></h1> <p>Transformer (2 layers encoder + 2 layers decoder), large number of attention head is the key<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1904629%2Fe16225854085897e92b85fbdd704bc84%2FUntitled%20Diagram(2).png?generation=1574922346398274&amp;alt=media" alt=""></p> <h1><strong>Model ensemble</strong></h1> <p>Optimizer: RAdam + lookahead<br> Number of epoch: 30<br> Batch Size: 32<br> Weight Decay: 0.1<br> Ensemble: snapshot ensemble (pick models at epoch 11, 13,…,17,29)<br> Learning rate scheduler: 8e-4 for epoch 0-10,12,14,…,28. 4e-4 for epoch 11,13,…,29</p> <p>Since we are only given 4 hours CPU training, snapshot ensemble seems to be a perfect choice as it won’t increase our training time and is significantly better than single model.<br> In my final submission, I repeat the training (use all data) for 11000s and 9000s (safe mode).</p>
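<p>A minimal sketch of the snapshot-ensemble schedule above: train at the base learning rate, halve it on the odd epochs from 11 onwards, and keep a copy of the weights after those epochs (the write-up keeps a subset of them). The per-epoch training helper is assumed, and plain Adam stands in for the RAdam + lookahead used by the author:</p>
<pre><code>import copy
import torch

def train_with_snapshots(model, train_one_epoch, n_epochs=30, base_lr=8e-4):
    """train_one_epoch(model, optimizer) is an assumed helper doing one training pass."""
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr, weight_decay=0.1)
    snapshots = []
    for epoch in range(n_epochs):
        snapshot_epoch = epoch &gt;= 11 and epoch % 2 == 1
        for group in optimizer.param_groups:      # 8e-4 normally, 4e-4 on snapshot epochs
            group["lr"] = base_lr / 2 if snapshot_epoch else base_lr
        train_one_epoch(model, optimizer)
        if snapshot_epoch:
            snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots                               # average their predictions at test time
</code></pre>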
NFL Big Data Bowl
Public 5th place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to give a big shout out to the National Football League and to Kaggle for hosting such an interesting competition! Personally, what I liked most about this competition was that it challenged you to build a custom model which reflects the dynamics of the game.</p> <p>The competition instantly reminded us of the <a href="https://www.kaggle.com/c/champs-scalar-coupling">Molecular Properties competition</a>, where one was asked to predict scalar coupling constants based on the spatial configuration of atoms. With that, our idea was to incorporate ideas from there into our models. We had the 2nd place solution of Quantum Uncertainty in mind: &gt; This immediately triggered the idea of using transformer layers (encoders) stacked taking as an input x,y,z (normalized but otherwise as-is), and atom type and j-coupling type embeddings; just concatenated… nothing fancy.</p> <p>We therefore did minimal feature engineering and used mainly the data preprocessing steps which can be found in the public kernels. All plays were transformed such that the play direction is towards the right. Ordinal features were transformed using StandardScaler.</p> <p>Our model consists of three distinct blocks, a <strong>player-player interaction block</strong>, a <strong>player block</strong> and a <strong>global game block</strong>. </p> <p><strong>Player-player interaction block</strong>: Input is a tensor <code>X_1</code> of shape <code>(batch_size, num_players, num_players, num_features)</code>, where <code>X_1[:, i, j]</code> contains information about player i and player j - their relative distance, velocity and acceleration (distance also extrapolated into the future), their relative distances/velocities/accelerations in x and y direction respectively, as well as their team - offense/defense/rusher. <code>X_1[:, i]</code> is then fed into one of three possible Transfomer-type blocks. That is, we have one Mini-Transformer (just as in BERT with 2-3 Blocks) for offense players, one for defense players and one for the rusher. After these blocks, we concatenate the outputs together (same shape as <code>X_1</code>) and take the mean over the tensor dimension 1. With that, we have a tensor <code>X_1’</code> of shape <code>(batch_size, num_players, num_features’)</code></p> <p>This output is then concatenated along the last axis to the <strong>Player input tensor X_2</strong>, which has shape <code>(batch_size, num_players, num_features’’)</code>. It contains features such as position of each player, distance to rusher, velocity in x and y direction, etc. The concatenated tensor is processed with another Mini-Transformer (usually 2 Blocks). After that, we keep only the first dimension of the output <code>X_2[:, 0]</code>, where index 0 is the position of the rusher, and concatenate it with the <strong>play input</strong>. </p> <p>The play input contains information such as distance to yardline, etc. 
The concatenated tensor is then processed with the prediction head (just 2-5 linear layers) to predict the cumulative probability distribution of shape <code>(batch_size, 199)</code>.</p> <p>Before the sigmoid output, we add a mask such that “inaccessible” yards (as determined by the current yardline) are predicted correctly (the mask is either -100, 0 or 100, such that <code>sigmoid(x + mask)</code> will be 0 or 1 for regions for which we know the actual label; a small sketch is given at the end of this post). This gave us a boost of about 0.0003 and stabilized the training of the model.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1435684%2Fc3223d9e355c93a3aebbc4c7c514ecac%2FModel%20Overview.svg?generation=1574927943599355&amp;alt=media" alt=""> Model development was performed using time-based 5-fold splits. As the validation loss varies a lot across the folds (from 0.010x to 0.013x), we added additional metrics which compute the MSE loss on long runs (which we defined as runs with Yards&gt;=15) and short runs (runs with Yards&lt;15). The long-run loss is about 10 times larger (0.07-0.08) than the short-run loss (0.007-0.01, depending on whether some rushers run negative yards). The difference in validation loss can thus be explained by the percentage of long runs, which varies among the folds. We used MSE loss, a batch size of 32, 5% swap dropout for certain features after epoch 4, and learning rate decay with a start learning rate of 5e-4. The transformer hidden size of our models ranges between 32-48, with either 1 or 2 attention heads. We did not scrape the test data for model development.</p> <p>There were numerous tweaks to improve the model’s performance, e.g. it was important to leave the distance features in the player-player interaction block unscaled (no preprocessing). Also, splitting the interaction into different parts (offense/defense/rusher) improved our score. We implemented a TimeDistributed layer which processes X_1 in parallel. This reduced training time by about 40% compared to a python loop over the players. We also applied some data cleaning on the 2017 games. For our final submission, we used 3 and 4 different model architectures (the three-model notebook is conservative regarding the run time). Each model is trained for up to 12 epochs and the two best epochs per model are kept for prediction.</p> <p>What did not work: a lot! We tried to incorporate the last play’s features into the model, used aggregated information about the rusher and added several features which we thought were meaningful input. In all cases the model’s performance was about the same. I guess this is due to the fact that the geometrical information is the most important factor, whereas other features’ impact tends to fade out if they are not crucial. We also experimented with different loss functions, giving higher loss to either long or short runs, without any decisive advantage.</p>
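<p>A minimal sketch of the pre-sigmoid mask mentioned above, assuming the 199 outputs correspond to yard values -99..99; the exact boundary handling is an assumption rather than the team's implementation.</p>
<pre><code>import numpy as np

def yard_mask(yards_to_own_goal, yards_to_touchdown):
    """Additive mask so that sigmoid(x + mask) is ~0 for impossible losses and ~1 past the end zone."""
    yards = np.arange(-99, 100)
    mask = np.zeros(199, dtype=np.float32)
    mask[yards &lt; -yards_to_own_goal] = -100.0   # cannot lose more than the distance to the own goal line
    mask[yards &gt;= yards_to_touchdown] = 100.0   # the CDF is 1 at and beyond the distance to the end zone
    return mask

# Ball on the offense's own 25-yard line: at most 25 yards can be lost and 75 gained.
m = yard_mask(yards_to_own_goal=25, yards_to_touchdown=75)
</code></pre>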
NFL Big Data Bowl
25th Public LB, A great learning experience
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, good luck to everybody for the second stage, and thank you to all the people who, with their topics, kernels and, now, solutions, made this a great competition where you can learn so much.</p> <p>On a personal note I’m very happy, not because of the position (as I think I’m going to get worse on private, or probably my submissions will crash) but because for me, as I said in the title, this has been a great learning experience. I started it knowing almost nothing about NN or python and I ended it with a python NN submission good enough to get to a good place. I still have to learn a lot but I think it is a good start. Especially because it is something that I really wanted to do but had been postponing, as I feel quite comfortable with R and GBM models. This competition worked as a very good push to do it. It was out of necessity because, as @dmitriyguller said in the "using R" post:</p> <blockquote> <p>Unfortunately my experience has been that taking part in this competition using R is like running a marathon while breathing through a straw.</p> </blockquote> <p>Sometimes I think it would have been better if the hosts had limited the competition to python kernels, but then maybe I wouldn’t have started, so I don’t know.</p> <p>With R and a 199-class LGB I was able to get to 0.01305 on the LB, at that moment a gold medal position, but I couldn't get any better, so I started to learn about NN and Keras and, almost from the beginning, using the same variables, I observed that you could get better results. The problem was that keras doesn’t work on R kernels so I had to learn python.</p> <p>My final solution is not very different from some of the solutions already explained: most of the variables are positions and distances to the rusher at different moments of time using S and Dir, and then some maxes, mins, means and standard deviations over them. I also flip the Y and did post-processing to overwrite impossible yardages to 0 or 1. I also engineered some features trying to capture possible blockages of the non-rusher offensive players on the defenders. I feed all these variables into a simple sequential NN (512, 256, 128) + dropouts (0.5) with elu activation and adam optimizer, and a softmax final layer (a sketch is given below). Before that I run a CatBoost regressor to predict the number of yards and use the prediction as a feature for the NN.</p> <p>I spent quite a lot of time trying to get a more complicated NN to work. I got a first promising try by putting the positions of the players into a grid and feeding it into a CNN as if it were an image, then using the positions at different moments of time and feeding them into a CNN + LSTM NN as if it were a video, and finally trying different combinations to get the player variables into a CNN. I had the feeling that being able to mix it all was the magic to get a really good result. I think the Zoo solution proves my intuition was right, but I was not even close to making it work. As I said, I still have too much to learn.</p> <p>Because of all of this I wasn’t able to make submissions for almost the last 15 days of the competition and I got my first working python solution two days before the deadline.
The worst thing about the transition from R to python and the final rush is that I lost the correlation between my CV scores and the LB that I had when I was using R and LGBM, so I’m not very confident that my final submissions will do well in a possible shake-up. Also, my CV using python keras is 0.00007 worse than my local R keras CV. I don’t know if it is because of the randomness of the folds or the NN, or because I didn’t translate some variables well from R to python. I probably made some errors and it wouldn't be strange if my submissions don’t work with the new data. It wouldn’t make me very sad, as I’m very happy about the things I learned.</p> <p>This is why I want to thank again all the people who share their insights and wisdom and make this a great community. Medals, leaderboard positions, and competition in general are very good things to push you to try harder, but what matters, in the end, is the knowledge you acquire.</p>
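<p>For reference, the simple sequential network described above might look roughly like this in Keras; the 199 output classes and the cross-entropy loss are assumptions on my side, not necessarily the exact setup used.</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers

def build_model(n_features, n_classes=199):
    """512-256-128 MLP with elu activations, 0.5 dropout and a softmax head."""
    model = keras.Sequential([
        layers.Dense(512, activation="elu", input_shape=(n_features,)),
        layers.Dropout(0.5),
        layers.Dense(256, activation="elu"),
        layers.Dropout(0.5),
        layers.Dense(128, activation="elu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model
</code></pre>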
NFL Big Data Bowl
Public 15th place overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a very interesting and enjoyable competition. My final solution used a simple feedforward neural network, 3 hidden layers 100 units each with a strong regularization of 0.5 Dropout and categorical cross-entropy. The main CV used the last few weeks of 2018. A separate CV was predicting different weeks for both 2017 and 2018, (so 34 folds).</p> <p>The main (and practically the only) idea was to use minimal distances from the ball carrier at different time frames. This would assume that everyone was following at the same trajectory for the next 2 seconds. Adding that to basic features gave a large boost from around LB 0.01364 to LB 0.01304. (To avoid overfitting it helps to fine-tune the time interval, I used 2 seconds updated by 0.2 seconds).</p> <p>The rest was a long slog. Adding <br> 1, averages of Physics features (Energy, Force) inversely weighted by distances from the ball 2, pretending the ball carrier turned slightly left or right 3, fine-tuning regularization, number of hidden units 4, bagging 10 neural nets improved it to 0.01282. </p> <p>Then an even longer slog. Lots of small tweaks: <br> 1, data augmentation by flipping Y to 53-Y, 2, adding noise, changing Adam optimizer to Nadam 3, clipping impossible distances, 4, using two additional outputs to also predict average yards, and touchdown improved it to 0.01272. <br> At the end of the competition small changes that improved one CV made the other CV worse, sometimes by a lot : 0.00004, so it was hard to select the "best" 2 versions. ( Instead I selected seemingly "safe" ones that gave balanced CV values and ran faster). It will be interesting to see the final results. Good luck for everyone.</p>
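<p>A small sketch of the "minimal distance from the ball carrier at different time frames" feature, assuming per-play position and velocity arrays and constant-velocity extrapolation; variable names and shapes are illustrative.</p>
<pre><code>import numpy as np

def min_defender_distances(xy, vxy, rusher_idx, is_defense, horizon=2.0, step=0.2):
    """Min distance of any defender to the rusher at t = 0, 0.2, ..., 2.0 s,
    assuming every player keeps his current velocity."""
    feats = []
    for t in np.arange(0.0, horizon + 1e-9, step):
        pos = xy + vxy * t  # (22, 2) extrapolated positions
        d = np.linalg.norm(pos[is_defense] - pos[rusher_idx], axis=1)
        feats.append(d.min())
    return np.array(feats)

# xy, vxy: (22, 2) arrays for one play; rusher_idx: int; is_defense: boolean mask of length 22.
</code></pre>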
NFL Big Data Bowl
My solution was public from the beginning
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for the click bait title but it is 90% true. It is basically combination of these 2 kernels: <a href="https://www.kaggle.com/divrikwicky/lightweight-version-of-2-65-custom-nn">https://www.kaggle.com/divrikwicky/lightweight-version-of-2-65-custom-nn</a> <a href="https://www.kaggle.com/divrikwicky/nfl-lofo-importance">https://www.kaggle.com/divrikwicky/nfl-lofo-importance</a></p> <p>First one is from the chemistry competition. I have used almost the same model in this competition. Conv1D with Global Pooling on player vs rusher interaction features instead of atom interaction features. Only difference is that this time I had 2 of them: rusher vs teammates sub-model and rusher vs opponents sub-model.</p> <p>Second one is Leave One Feature Out method with my open source implementation. I have used LOFO to understand overall importance of the raw and generated features. I have also run LOFO separately for different seasons. It made it very quick for me to decide on which features have stable importance over time. I first run LOFO on target MAE with lightgbm and then remove/replace the features from my NN. This way I both benefit from speed of lightgbm and having double validation by two models even though I use only NN model for my submission at the end.</p> <p>Since I am the author of the repo ( <a href="https://github.com/aerdem4/lofo-importance">https://github.com/aerdem4/lofo-importance</a> ), I am looking for feedback from people who have tried it in this competition. Please let me know any of your positive or negative stories with lofo-importance. Any suggestion is welcome.</p>
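<p>For readers who just want the idea, here is a from-scratch sketch of leave-one-feature-out importance; it only illustrates the principle and is not the lofo-importance API.</p>
<pre><code>from sklearn.model_selection import cross_val_score

def lofo_from_scratch(model, X, y, features, cv, scoring="neg_mean_absolute_error"):
    """Importance of a feature = drop in CV score when that feature is left out."""
    base = cross_val_score(model, X[features], y, cv=cv, scoring=scoring).mean()
    importance = {}
    for f in features:
        rest = [c for c in features if c != f]
        score = cross_val_score(model, X[rest], y, cv=cv, scoring=scoring).mean()
        importance[f] = base - score  # positive values mean removing the feature hurts
    return importance
</code></pre>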
NFL Big Data Bowl
Graph Transformer With Minimal FE, with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had a hunch since the start of competition that the use of transformer inspired architectures would do well here as they did in the recent Molecular properties. Indeed, here we have a 2D version of the molecular problem in a way. Not surprisingly, many teams thought of it and use that analogy better than I did, based on public LB score. </p> <p>Anyway, here is my version. Please indulge me, this is the first time I create a complex (well, complex for me) NN). I did what I could in 10 days. I'm sure it can be improved in many ways. </p> <p><strong>The transformer</strong></p> <p>I decided to implement it from scratch using Keras because I wanted to learn the transformer architecture insetead of using someone else' implementation. For those not familiar with the transformer I recommend these two tutorials, they helped me a lot:</p> <p><a href="http://jalammar.github.io/illustrated-transformer/">http://jalammar.github.io/illustrated-transformer/</a></p> <p><a href="https://nlp.seas.harvard.edu/2018/04/03/attention.html">https://nlp.seas.harvard.edu/2018/04/03/attention.html</a></p> <p>My thinking was heavily influenced by top team models in molecular competition, especially #6 solution for its simplicity:</p> <p><a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106407">https://www.kaggle.com/c/champs-scalar-coupling/discussion/106407</a></p> <p>This model had a small number of layers, which was very interesting given the limit on cpu time here. I also look at his author git repo to sort out doubts like: is layer norm performed before or after dropout?</p> <p><a href="https://github.com/robinniesert/kaggle-champs">https://github.com/robinniesert/kaggle-champs</a></p> <p><strong>Data cleaning</strong></p> <p>I normalized data like in <a href="https://www.kaggle.com/cpmpml/initial-wrangling-voronoi-areas-in-python">my public notebook</a>. I wish I had used 10*Dis rather than S, and also replaced A in 2017 by a constant. Adding these after deadline improved my CV by almost 0.00015. I wish I could see the effect on LB.</p> <p>In addition to flipping along the X axis as in my notebook, I also flipped along the Y axis if need be so that rusher always moves towards top right when dislaying plays.</p> <p><strong>Features</strong></p> <p>My model uses almost no feature engineering. It uses all in all:</p> <ul> <li>Player features: <code>'X', 'Y', 'X_dir', 'Y_dir', 'X_S', 'Y_S', 'S', 'A', 'IsRusher', 'IsOnOffense'</code>, X and Y are relative to rusher position.</li> <li>Distance matrix: square of inverse distance matrix. </li> <li>Play features: rusher position and yardline.</li> </ul> <p>That's it.</p> <p><strong>Architecture</strong></p> <p>Similar to the Molecular solution I started from, I start with embedding players features into a latent vector via a dense layer. I used 64 long embeddings.</p> <p>Then I use a distance attention bloc. To update a given player embedding I use a weighted sum of the other players embeddings. The weight depends on the distance. I tried various ways, and a normalized squared inverse was best. 
I was about to try other transforms when I decided to have them learnt by the model, via a 1x1 convolution bloc on the data. </p> <p>All my convolution blocs have a skip connection and 2 convolution layers with ReLU activation. As in the transformer I used a Glorot uniforrm weight initialization everywhere I thought of. </p> <p>The distance attention is added to the skip connection, then normalized with a custom LayerNorm, followed by dropout. I use dropout 0.25 everywhere.</p> <p>Next bloc of layers is a vanilla transformer multi head attention. Well, that's what I tried to implement, and any difference with the transformer is a mistake and unintended. If someone has the courage to read my code and provide feedback then I'd be extremely grateful! I used 4 attention heads, and length of 16 for queries, keys, and values.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F898d580b95f4591dedd56449075264f4%2Fgnn.png?generation=1574952801881228&amp;alt=media" alt=""></p> <p><strong>Isotonic regression</strong></p> <p>The output of multi head attention restricted to the rusher embedding is concatenated with the embeddings of play features. Then this is fed into two output layers. The first one is a linear layer with 199 output units followed by a sigmoid activation. The output is the 199 probabilities. </p> <p>Issue with this is that there is no reason why these probabilities are monotonically increasing. I didn't like the fix used in many public kernels, which was to replace each probability by the max of all probabilities until it. I didn't like it because if you did the transformation the other way round, starting from the right and taking the min, then you did not get the same result. </p> <p>I tried to output a softmax and then compute cumsum, but this was slower. I ended up running an <a href="https://scikit-learn.org/stable/auto_examples/plot_isotonic_regression.html">isotonic regression</a> to make the output monotonically increasing. Isotonic regression improved a bit CV and LB over using max from start.</p> <p><strong>Logistic output</strong></p> <p>I was still unhappy with output. We are asked to output a cumulated distribution function, hence we should base it on a distribution. I looked at various distributions, but none were perfect fit. I ended up doing some EDA. Start with the cumulated histogram of all yards values in train, it looks like a sigmoid skewed on the right:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fb9fc1af4bb648c98c14767bf407d070d%2Fbaseline.png?generation=1574953256245004&amp;alt=media" alt=""></p> <p>This made me think of taking the logits of this cdf. It yields:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F17e09c2ba8d3ee15fa906b7082da270d%2Flogits.png?generation=1574953330643322&amp;alt=media" alt=""></p> <p>This plot is very interesting. We see that logits increase linearly up to near 0, then there is a smooth transition, and logits increase again linearly past 0, albeit with a much smaller slope. This can be approximated quite well with two half lines as show below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fe356bb45fd5660615a6ae0d9c9518b30%2Fapprox.png?generation=1574953954703110&amp;alt=media" alt=""></p> <p>All I needed are the slopes of the two lines, and the x, and y of where they meet. 
Said differently, I could recreate the output from 4 numbers. I implemented a custom layer that outputs the two straight lines values, followed by a sigmoid activation. I later settled on 3 numbers only, the two slopes, and x where they meet. This was as good if not better.</p> <p>This new output was better than the simple one, but using both outputs was better. Probably because optimizing two different outputs adds some regularization.</p> <p><strong>Training</strong></p> <p>I used 12 fold time based (unshuffled) CV with validation folds drawn from 2018 only. Ie also down weighted 2017 samples by 0.5. This made CV way more in line with LB. Down weighting 2017 may have helped a bit given I had not standardized S and A correctly. I used my local machine with 2 1080 Ti for developing the model. I had been burned too much by Kaggle kernels being reset for no reason. For final submission, I uploaded my notebook, and only used last 2 folds. </p> <p>I used Adam optimizer with a learning rate decay on plateaus, and early stopping. From what I read, a predefined linear decay was best, but I hadn't time to tune it.</p> <p><strong>Data augmentation</strong></p> <p>Last day of competition, my team mate Reza made me think of using predicted future positions of players. For a given play I created 2 copies, after 0.3 and 0.6 seconds, assuming straight trajectories and constant acceleration. I'm still wondering if acceleration is always in same direction as speed. Indeed, it could be that some players decelerate... Of course, play copies were put in same fold to avoid overfitting.</p> <p>For final prediction I also created 2 copies of each test play, then averaged the predictions of the 3 plays. This yields almost 0.00010 improvement on CV.</p> <p>Data augmentation led to a LB of 0.01299 less than one hour before deadline...</p> <p><strong>Lessons</strong></p> <p>First, I should have followed my hunch much earlier. I guess I was a bit intimidated by the task. Second, I wish I had cleaned data more, esp S and A as shared by many top teams. Third, I should not use Kaggle kernels for model development, they are too unreliable when running times exceed one hour. They get reset even if they are attended and used interactively.</p> <p>One thing I don't regret is to have teamed with Reza. He helped me understanding NFL Football. Also his implementation of influence and pitch control was very enlightening. I want to use a similar idea (Gaussian mixture) in a layer to preprocess distances before distance attention. Last, but not least, his models are better than mine :D</p> <p>All in all, even if we probably will miss gold I am quite happy because I learned a lot. And now I can follow writeups of people who also used the transformer architecture!</p> <p>The code can be seen at <a href="https://www.kaggle.com/cpmpml/keras-80?scriptVersionId=24171638">https://www.kaggle.com/cpmpml/keras-80?scriptVersionId=24171638</a></p> <p>Edit: a much better model can be seen in the latest version of the notebook: <a href="https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998">https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998</a></p> <p>I have improved my code in several ways, including: - implemented encoder/decoder attention of the original transformer architecture - added squeeze and excitation to the convolutions - S and A cleaning</p> <p>My CV improved by about 0.00025, not enough to make it a top model, but still interesting. Data cleaning brings about 0.00010. 
</p> <p>I think the main interest is the transformer implementation done with the Keras functional API. It yields code that is much more compact than what we can find online. Here is the updated NN architecture:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fe50d024a978c324b67182a554a2b66bd%2Ftransformer.png?generation=1575305725966574&amp;alt=media" alt=""></p> <p>The decoder part is simpler than the transformer one as I didn't include a convolution bloc nor a self-attention bloc, because the decoder input is so simple (only 3 features). The code can be seen at <a href="https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998">https://www.kaggle.com/cpmpml/graph-transfomer?scriptVersionId=24417998</a></p>
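<p>A minimal sketch of the isotonic post-processing described above, using scikit-learn; applying it row by row to the 199 raw outputs is my assumption of how it would be wired in.</p>
<pre><code>import numpy as np
from sklearn.isotonic import IsotonicRegression

def make_monotonic(preds):
    """Project each row of raw per-yard probabilities onto a non-decreasing CDF in [0, 1]."""
    xs = np.arange(preds.shape[1])
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
    return np.stack([iso.fit_transform(xs, row) for row in preds])
</code></pre>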
NFL Big Data Bowl
Private 7th(Public 12th) solution&code with keras GCN library
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am very grateful to competition organizers and Kaggle for holding this great competition. There are many different approaches like CNN, Transformer, GCN, MLP, Boosted Trees, and many feature engineering, and there was much to learn :)</p> <p>My codes are available from : <a href="https://github.com/vinmuk/NFL-predict-yards">https://github.com/vinmuk/NFL-predict-yards</a> Useful keras GCN library : <a href="https://github.com/danielegrattarola/spektral">https://github.com/danielegrattarola/spektral</a></p> <h1>Overview</h1> <ul> <li><p>Data fix</p> <ul><li>No data modification.</li> <li>With some features, there was a difference in distribution, however, in my case, fix them by adjusting mean and standard deviation makes score worse.</li></ul></li> <li><p>Model</p> <ul><li>2-layer GCN with multi output.</li> <li>Attention sum pooling layer (attention is computed by dot products of trainable weights and features of each player) follows each GCN layer </li> <li>GCN layer is my original layer which combine Graph attention networks (GAT) and GraphConvSkip layer.</li></ul></li> <li><p>Optimizer</p> <ul><li>Adam(lr=1e-3)</li></ul></li> <li><p>Loss function</p> <ul><li>Binary crossentropy and mae for the last layer</li> <li>Binary crossentropy loss with divided output into 50 sections are located immediately after each GCN layer. This hastens the time to convergence.</li></ul></li> <li><p>Data Augment</p> <ul><li>Flip y axis</li> <li>Add data after a few seconds doesn't work</li> <li>TTA doesn't work</li></ul></li> <li><p>Feature engineering</p> <ul><li>115 features for each player</li> <li>Separate player feature and play feature made score worse</li> <li>Below features after 0~2 seconds significantly improved score. <ul><li>the number of opponent players around each player.</li> <li>whether to collide with rusher when traveling a certain distance</li> <li>distance from rusher</li></ul></li></ul></li> <li><p>Post process</p> <ul><li>Padding by 0 or 1 for parts that cannot be reached from the current position.</li></ul></li> </ul>
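<p>A rough sketch of an attention sum pooling layer as described above (dot products of a trainable weight vector with each player's features); the softmax normalization is an assumption, not a detail confirmed by the author.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

class AttentionSumPooling(layers.Layer):
    """Pools (batch, players, features) down to (batch, features) with learned attention."""

    def build(self, input_shape):
        self.w = self.add_weight(name="att_w", shape=(int(input_shape[-1]), 1),
                                 initializer="glorot_uniform", trainable=True)

    def call(self, x):
        scores = tf.matmul(x, self.w)        # one dot product per player
        att = tf.nn.softmax(scores, axis=1)  # attention weights over players
        return tf.reduce_sum(att * x, axis=1)
</code></pre>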
NFL Big Data Bowl
The Most Important Thing... (50th Public LB Solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The Most Important Thing is that I beat the #1 seed in my Fantasy Football League in Week 12. Wait, you wanted to read a writeup? Okay, here goes.</p> <p>Thank you to my teammates @siavrez and @kainsama and to all the competitors and the sponsors for making this such a fun competition. It will be exciting to see our scores live over the coming weeks. I want to give a quick summary of our team's solution.</p> <p>My wonderful teammates invited me who were better on the LB and we eventually reached 0.01330 on LB. We were combining our features and adding some new ones/changing the NN architecture a bit.</p> <p>The biggest boost in our score after merging was using Permutation Importance to get rid of unimportant features (0.01330 -&gt; 0.01320).</p> <p>We tried SOOOOO much stuff that did not improve our score of 0.01320. The most shocking to me is that adding Embeddings of RusherPosition, OffenseFormation, and Offense/Defense Teams did not help.</p> <p>My teammates and I also learned SOOOOO much stuff in Tensorflow. We were trying to think out of the box, so we were trying everything, including: - Cleaning Orientation and Speed of 2017 season (no improvement) - Adding small amount of noise (no improvement) - Adding L1 distances in addition L2 (no improvement) - Representing the play as an image and running a CNN on it concatenated with the Multilayer Perceptron (no improvement) - Fitting to Dense(2) being mu and sigma, and then creating a Gaussian from the mu and sigma and using that for a CDF (no improvement) - Predict the parameters for a Log-Logistic function for each play, and use that for a CDF (this is because the kernel density plot of yards was shaped like Log Logistic) (no improvement) - Making different models for different RusherPositions, e.g. 
WR vs RB (no improvement) - Using FloatingPoint16 or FloatingPoint64 in the neural network (no improvement) - Changing Batchsizes (no improvement) - Running Convolutional1D on many features sorted by player's distance to rusher (eg perform Conv1D on [S_closest_defender1, S_closest_defender2, ..., S_closest_defender3]) then GlobalMaxPooling/GlobalAveragePooling and concatenating with previous Multilayer Perceptron (no improvement) - Running LSTM/GRU on those same sorted features and GlobalMaxPooling/GlobalAveragePooling and concatenating with previous Multilayer Perceptron (no improvement) - The same as before, but doing BiDirectional (better than before, but no improvement) - Changing Adam to AdamW (no improvement) - Adding Warm Restart Weight Decay Scheduler to AdamW Optimizer (no improvement) - Feeding NflIdRusher via an Embedding (if you have &lt;10 occurrences of NflIdRusher you get mapped to "Rare") (no improvement) - CuDNNLSTM/CuDNNGRU (looked interesting, but realized we have no GPU therefore cannot use this layer) - Use Kolmogorov-Smirnov statistic to remove features that have different distribution between train and test (no improvement) - Concatenate "Season" with the Team's name and feed into an Embedding, because teams can change season-to-season (no improvement) - Using Gert's method to bin/cut predictions and modify them (no improvement) - Learned how to use TensorflowProbability to predict a NormalDistribution for Yards (MAE) and then use the standard deviations that it produces to obtain a CDF (no improvement) - We also tried using GammaDistribution for TensorflowProbability (no improvement) - Multiobjective loss (fit to both Dense(200) CRPS and Dense(1) MAE) (small improvement) - Multiobjective loss v2 (fit to Dense(200), Dense(25), Dense(1)) (small improvement) - Remove outlier Yards where abs(Z-score) &gt;= 3 (no improvement) - Switching to SGD with small learning rate after finding optimum with Adam (small improvement) - Adding GaussianNoise/GaussianDropout layers - Trying to smooth the CRPS output by averaging with neighboring predictions - Fitting to the Yards MAE, then using the standard error of the Yards predictions to create Gaussian CDFs (this is what 1st placed did in the previous competition that used CRPS metric. Unfortunately, this brought us no improvement). - Post-processing based on the min/max yards of the NflIdRusher_Season (no improvement) - Post-processing based on the min/max yards allowed of the DefenseTeam_Season (no improvement) - Post-processing based on the min/max yards gained of the OffenseTeam_Season (no improvement) - Post-processing based on the min/max yards gained by that Position (no improvement) - Online Learning (yes, we had implemented online learning into our CV using <code>train_on_batch</code>. We found the improvement minimal and only decided to try it on the penultimate day. So we did not have time to implement it into test predictions).</p> <p>As you can see, we were stuck at a wall with no improvements. We tried SO much stuff. In the end, we could not break 0.01320. However, I am proud to learn so much (Tensorflow Probability is awesome, and I am already using it at my work). In the end, we submitted two models: a 10Fold Multilayer Perceptron with AdamW, Warm Restart Weight Decay, QuantileTransformer, softplus activations, MultiObjective (softmax+mae) model. 
And the second submission is the exact same, except it is trained on all data without validation bagged 10 times, stopping when categorical_crossentropy loss reaches 2.52 (this value was found by submitting to LB and seeing what worked best).</p> <p>Things we didn't try that I think would've been helpful: - Using a holdout validation set instead of KFold (allows you to fit more models, which lets you blend and yield better score) - Graph Neural Networks/Pointnet (if anyone was able to fit this onto the CPU in 4 hours, you are an amazing data scientist) - Treating the 22 NflId sorted by distance to rusher as a sentence, and running Seq2Seq NLP architecture on this. We wanted to try this, but seeing how feeding NflIdRusher through an embedding made the score SO much worse, I think it was good that we did not travel down this rabbit hole.</p> <p>Once again thank you to my talented teammates, it was a pleasure working with you, and thank you to the NFL for creating a fun competition. P.S.: A funny joke, but the whole competition, our slack channel was named "NLF World" because of a typo ;)</p>
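<p>As an illustration of the multi-objective output that did give a small improvement, a two-head Keras model might look like the sketch below; the layer sizes, 199 outputs and loss weights are placeholders rather than the team's actual values.</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers

def multi_objective_model(n_features):
    """Shared MLP body with a softmax head for the distribution and a scalar MAE head for yards."""
    inp = keras.Input(shape=(n_features,))
    x = layers.Dense(256, activation="relu")(inp)
    x = layers.Dense(256, activation="relu")(x)
    cdf = layers.Dense(199, activation="softmax", name="dist")(x)
    yards = layers.Dense(1, name="yards")(x)
    model = keras.Model(inp, [cdf, yards])
    model.compile(optimizer="adam",
                  loss={"dist": "categorical_crossentropy", "yards": "mae"},
                  loss_weights={"dist": 1.0, "yards": 0.1})
    return model
</code></pre>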
NFL Big Data Bowl
how a newbie got to public 19th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>(First, I'm heartbroken to see what happened with the Zoo. Their solution is the most elegant one I've seen, and they do deserve a top place in this competition. Hope they will reappear on leaderboard after next week's rerun.)</p> <p>I didn't share my solution earlier because there's no much highlight in my work. I'm new to ML, and this is literally my first NN project. <strong>What I did was taking this <a href="https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381">great kernel</a> by <a href="/mrkmakr">@mrkmakr</a> (many thanks! You saved my life!)</strong>, working very hard to learn all the magic done there, and making my small changes based on my intuition. -- But then I thought it might be helpful to some other newbies to learn <strong>how a newbie with minimum ML knowledge could approach this competition.</strong> So here I am. </p> <p><strong>1. Data augmentation:</strong> - First flip all data so the play direction goes from left to right. This is a standard procedure. - Train augmentation: flip the play along Y axis (so X'=X, Y'=53.3-Y, similarly for other variables). - <strong>Test augmentation</strong>: flip the play along Y axis, predict it. And return the average of the original test prediction and the augmented test prediction. </p> <p><strong>2. FE</strong> - For all speed, distance features: it's helpful to split to x and y axis based on Dir - Meet time with rusher, and the distance that the rusher could travel within such time - Game dynamic: the relative distance after 0.5s, 1s, 2s if everyone goes according to current direction. - Potential positions of rusher after 1s (if he runs at current S&amp;A towards: front left, front right, straight front) - and for each of those positions, calculate distance and travel time needed for each player from the current position. - For all distance, meet time features, for defend team and offense team, each calculate the 1, 3, 6 smallest values. Also calculate the difference between the 1,3,6th teammate and defender. (This step produces a lot of features. I use them in the "Rank" layer as discussed below.)</p> <ul> <li><strong>Wind</strong> + wind direction + stadium: am I the only one who finds wind directions useful? (About 0.00005 boost). <ul><li>The idea is to first split wind direction to only 2 dimensions (N, W) - and S, E are just negative N, W. Thus each wind direction can be represented by two numbers. For example, "NE" = (0.7, -0.7), "S" = (-1, 0), etc. Then multiply this by wind speed - gives the speed in each direction. </li> <li>This info is stadium specific - because it's relative to the angle between the stadium and earth. </li> <li>Also it matters whether the stadium is indoor or outdoor.</li> <li>I had to be very careful when doing all these wind speed with flipped/augmented data - wind speed needs to be flipped as well.</li></ul></li> </ul> <p><strong>3. NN structure</strong> - As mentioned earlier, mainly based on the kernel by <a href="/mrkmakr">@mrkmakr</a>. But I did make it more complicated by adding more layers. 
- "Game" layer: game features (except for the ones listed below) + player features for the closest defend player and closest teammate. - "Rusher", "Offense", "Defend" layers: rusher/offense/defend team specific features. For offense and defend, there're multiple players, so I do AveragePooling. It helped to have several dense layers for these, before concatenating results with other layers. - "Wind" layer: directional wind speed (North, West) + embedding of Stadium + embedding of Stadium type (outdoor or indoor) - "Rank" layer: lots of calculated features based on the 1,3,6th rank of various of distance, speed, etc, as discussed in FE. This needs to be a separate layer because these are a lot of very strong but correlated features, thus needs to reduce dimension before concat with other layers. - "All" layer: concat of all above, and 3 dense layers after concat before prediction.</p> <p><strong>4. Control overfitting</strong> - Overfitting needs to be carefully controlled when I use a complicated NN structure with so few data. So I did the following. - Set a small batch size. (128 for me - would be 64 if I didn't do BatchNormalization) - Dropout: is super helpful. My dropout was pretty high (ranges from 0.3 after player layers to 0.6 after most other layers). - BatchNormalization: can be very helpful when I use it in the right way, (or otherwise hurt me very badly when used improperly). It really helped me by enable me to use a large learning rate (0.01) and a larger batch size. It took me a whole day to figure out the right way of using BN: it has to be used after a dense layer, but before activation. This means I had to set activation=None in Dense layer, and then use an activation layer after BN. - Early stopping</p> <p><strong>5. Issue with 2017 data (hence overweight 2018)</strong> - As many others have noted, A and S has very different distribution in 2017 compared with 2018. I tried very hard to adjust the 2017 data, but failed. Then I decided to overweight 2018, which was super helpful. - My best attempt in adjusting 2017 A,S data was to note that different types of players have very different A,S. Thus split the 2017 data by player type, and do a rank based mapping to the 2018 data of the same player type. (For example if a certain entry has S ranked 26% within 2017 rushers, the I replace it by the 26% ranked value within 2018 rushers.) This didn't work - I guess 2017 data has larger measurement error, so can't be adjusted in a systematical way.</p> <p><strong>6. Small tricks</strong> - Post processing: Manually assign 0 probability to the "infeasible" yards that are beyond the boundaries of the field. - Bet anything &lt;-20 can't happen. So I'm predicting a vector with 129 dim instead of 199. - Try except in prediction stage: set a prior = distribution of train data. Return the prior if prediction fails for a certain play.</p> <p><strong>7. My two submissions:</strong> - "Safe": only uses 2017/2018 data to train. Hence remove features that might be season specific, like team names, player names, etc. This one scored 0.01279 on original public leaderboard (19th place), and gets me 0.01145 after first week's rerun (13th place), 0.01153 after 2nd week's rerun (16th place), 0.01183 after 3rd week's rerun (17th place), and currently 0.01200 after 4th week's rerun (17th place). - "Aggressive": includes and overweights 2019 data, as well as season specific info. Also slightly overweights the later half of each season. 
This one originally scored 0.01282, and dropped to 0.01320 after the first week's rerun. My guess is this one failed during the training process, and thus only outputs my "prior" estimates for all test plays. (I don't understand why, as I use try-except almost everywhere during FE. Anyway, the "safe" submission was designed for situations like this.)</p> <p>Finally, thank you everyone for sharing your thoughts via discussions and kernels. I couldn't imagine learning NN within 4 weeks without all those great resources you shared.</p>
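<p>A tiny sketch of the wind feature from the FE section above, reproducing the (N, W) decomposition where e.g. "NE" maps to roughly (0.7, -0.7) before scaling by wind speed; the compass table is simplified and the messy raw strings are not handled.</p>
<pre><code>import numpy as np

COMPASS = {"N": 0, "NE": 45, "E": 90, "SE": 135,
           "S": 180, "SW": 225, "W": 270, "NW": 315}

def wind_components(direction, speed):
    """Project wind onto (north, west) axes and scale by speed; unknown directions give NaN."""
    angle = np.deg2rad(COMPASS.get(direction, np.nan))
    return np.cos(angle) * speed, -np.sin(angle) * speed
</code></pre>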
NFL Big Data Bowl
30th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>1 - Features</h3> <p>First of all, we used the 62 features below. You can understand what they mean by their names:</p> <ul> <li>'Yards_to_touchdown',</li> <li>'YardLine_ref',</li> <li>'X_defense_spread',</li> <li>'X_defense_std',</li> <li>'Y_defense_spread',</li> <li>'Y_defense_std',</li> <li>'X_offense_spread',</li> <li>'X_offense_std',</li> <li>'X_offense_centroid',</li> <li>'X_defense_centroid',</li> <li>'DistToBallCarrier_Offense_Mean',</li> <li>'DistToBallCarrier_Defense_Mean',</li> <li>'DistToBallCarrier_Offense_std',</li> <li>'DistToBallCarrier_Defense_std',</li> <li>'DistToBallCarrier0.5_Offense_Mean',</li> <li>'DistToBallCarrier0.5_Defense_Mean',</li> <li>'DistToBallCarrier0.5_Offense_std',</li> <li>'DistToBallCarrier0.5_Defense_std',</li> <li>'Y_offense_spread',</li> <li>'Y_offense_std',</li> <li>'Y_offense_centroid',</li> <li>'Y_defense_centroid',</li> <li>'Seconds_since_quarter',</li> <li>'Distance_to_down',</li> <li>'DefendersInTheBox',</li> <li>'Average_tackle_time',</li> <li>'Min_tackle_time',</li> <li>'Season_cat_2017',</li> <li>'Season_cat_2019',</li> <li>'A1',</li> <li>'S1',</li> <li>'X_estimated1',</li> <li>'Y_estimated1',</li> <li>'Dir_cos1',</li> <li>'Dir_sin1',</li> <li>'S1_vs_S12',</li> <li>'A12',</li> <li>'S12',</li> <li>'X_ref12',</li> <li>'Y_ref12',</li> <li>'X_estimated12',</li> <li>'Y_estimated12',</li> <li>'DistToBallCarrier12',</li> <li>'DistToBallCarrier0.5_estimated12',</li> <li>'Dir_cos12',</li> <li>'Dir_sin12',</li> <li>'Min_tackle_time12',</li> <li>'S_horizontal12',</li> <li>'S_vertical12',</li> <li>'A13',</li> <li>'S13',</li> <li>'X_ref13',</li> <li>'Y_ref13',</li> <li>'X_estimated13',</li> <li>'Y_estimated13',</li> <li>'DistToBallCarrier13',</li> <li>'DistToBallCarrier0.5_estimated13',</li> <li>'Dir_cos13',</li> <li>'Dir_sin13',</li> <li>'Min_tackle_time13',</li> <li>'S_horizontal13',</li> <li>'S_vertical13'</li> </ul> <p>A key point to improve our score was to sort the dataframe by using this line of code: <code> train = train.sort_values(by= ["PlayId", "IsOnOffense", "IsBallCarrier", "DistToBallCarrier0.7_estimated"], ascending=[1, 0, 0, 1]) </code> The “DistToBallCarrier0.7_estimated” feature is the estimated distance of a player to the rusher after 0.7 seconds.</p> <h3>2 - Cross Validation</h3> <p>We tried RepeatedKFold and GroupKFold with “GameId”, “Week” and “NflIdRusher” separating the groups. 
The RepeatedKFold strategy gave a better result: <code>rkf = RepeatedKFold(n_splits=3, n_repeats=4, random_state=1301)</code></p> <h3>3 - Training</h3> <p>In order to train our model we used Keras with a very simple neural network:</p>
<pre><code>model = keras.models.Sequential([
    keras.layers.Dense(units=256, input_shape=[X.shape[1]]),
    keras.layers.BatchNormalization(),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.4),
    keras.layers.Dense(units=256),
    keras.layers.BatchNormalization(),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(units=256),
    keras.layers.BatchNormalization(),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(units=199, activation='sigmoid')
])
</code></pre>
<h3>4 - Post Process</h3> <p>For post-processing we used these three lines of code:</p>
<pre><code>y_pred[:, :50] = 0
y_pred[:, -yards_covered:] = 1
y_pred[:, :100-yards_covered] = 0
</code></pre>
<p>Intuition:</p> <ul> <li>The first 50 predictions are 0 because no team lost such a huge number of yards in the training set.</li> <li>The last two lines of code mean that if the yardline is, let’s say, 75, the maximum number of yards the team can gain is 25. So every prediction to the right of +25 should be 1. The same applies for the case of losing yards, where the team can lose only 75 yards, and every prediction to the left of -75 should be 0.</li> </ul> <h3>5 - Score</h3> <p>CV score: 0.01213<br> Public LB: 0.01297<br> Private LB: 0.012669</p> <p><strong>Thanks for reading 👍 and many thanks to my teammates <a href="/jayjay75">@jayjay75</a>, <a href="/rafiko1">@rafiko1</a> and <a href="/bolkonsky">@bolkonsky</a>!</strong></p>
Lyft 3D Object Detection for Autonomous Vehicles
Three Rank Jump 41->38 in two days on private set
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft 3D Object Detection for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>How did this happen? Kaggle removed multiple accounts?</p>
NFL Big Data Bowl
Private 21st - 1st Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>After reading and following Kaggle competitions for a few years I finally decided to take the plunge. I am a huge New England Patriots fan and also love Deep Learning, so this was the perfect competition for me. I had a ton of fun and learned so much. I am extremely glad I did it. To anyone reading this who hasn't done a competition yet - just go for it! The community is amazing and I promise you will have fun.</p> <h3>Lessons Learned</h3> <ol> <li><strong>Data Analysis</strong> - I was pretty naive in assuming that the public notebooks had enough data analysis in them. I was amazed at how much information was publicly shared, but you still need to go through everything in detail. Next time I will make sure to spend a significant amount of time at the start on this. I missed the issues with S and A in the 2017 data.</li> <li><strong>CV/PLB/Submit Early</strong> - I didn't fully appreciate how important this is. Without accurate metrics it's easy to go with stuff that doesn't help and throw away stuff that does help. I didn't submit to the LB until 3 days before the deadline. That was a huge mistake. My CV was way lower and I didn't have time to figure it out.</li> <li><strong>Try Every Easy Idea</strong> - I convinced myself future time steps wouldn't help (too non-linear/random/players passing through each other) and never tried it. It would have been easy to hack up some linear next steps.</li> <li><strong>Don't Fiddle With Params</strong> - I wasted a lot of time fiddling with NN parameters. I think I spent weeks tinkering.</li> <li><strong>Don't expect the model to figure out everything</strong> - I was amazed though at how well the Transformer did. I think I should have tried to help it out a bit more, especially with the limit of 4 hours of CPU time.</li> <li><strong>Code Competitions - Leave Time</strong> - I ran out of time and am not thrilled with my final two submissions. This was pretty tricky though with not having the 2019 data until after the submission.</li> </ol> <h3>Solution</h3> <p>I basically went with a bunch of Transformer Encoder Layers with tied weights. I spent most of the time fiddling with its configuration and inputs/outputs. I missed the issues with the 2017 data, which in hindsight led to a lot of my difficulties in getting it to run correctly. Because of this I also never figured out the issues with the CV. My best submission had a <strong>0.01190 5-fold random CV and a 0.01295 PLB</strong>.</p> <p>My best discovery was definitely <strong>concatenating the raw runner inputs onto every player</strong> (a small sketch is given at the end of this post). Looks like this allowed the model to figure out many of the features that the other competitors discovered to be important.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1156376%2F6ec577fc64cb4ede33084d77ac27a7b7%2Fnflkaggle%20(4.png?generation=1575034228981776&amp;alt=media" alt=""></p> <p>oX and oY are cos(Orientation) and sin(Orientation), respectively. Position and Season are embeddings, written as (# values, # dims). Positions = ['RB', 'OL', 'QB', 'TE', 'WR', 'CB', 'DL', 'LB', 'SS'].
Looks like I should have worked on removing this feature though based on reading about better models. I have my fingers crossed that by embedding the season the model can figure out the issues with 2017 data. 🙈 </p> <p>I played around a lot with parameter sharing in the transformer layers. I found that 2 sets of weights interlaced seemed to work the best:</p> <p><code> for _ in range(self.n_layer // 2): players = self.encoder_layer(players) players = self.encoder_layer2(players) </code></p> <p>Although the gains were minor compared to just tying all weights together as was shown in: ALBERT - <a href="https://arxiv.org/pdf/1909.11942.pdf">https://arxiv.org/pdf/1909.11942.pdf</a></p> <p>Still not sure why adding FC Layers at the end didn't help. Guessing it might have had to do with the issues with S and A in 2017 and overfitting to it. It was cool though to see a 10K parameter network (16 dim) do pretty well at making predictions.</p> <p>Also the 107 outputs were for -7 to 99 yard range. Interestingly I found -5 to 21 came out with the same scores. I wasted a lot of time on a "long run" predictor. I could never get it to be even close to predicting long runs.</p> <p>Thanks to the hosts and all the competitors! Good luck everyone!</p>
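<p>A minimal sketch of "concatenating the raw runner inputs onto every player" for a single play; the array names and shapes are illustrative only.</p>
<pre><code>import numpy as np

def add_rusher_context(players, rusher_idx):
    """players: (22, n_feat) array for one play; returns (22, 2 * n_feat) where the
    rusher's raw feature row is tiled and concatenated onto every player's features."""
    rusher = np.repeat(players[rusher_idx:rusher_idx + 1], len(players), axis=0)
    return np.concatenate([players, rusher], axis=1)
</code></pre>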
Lyft 3D Object Detection for Autonomous Vehicles
1st place solution (0.220 public LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft 3D Object Detection for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, the post may be a little late. Thanks to the organizers and congratulations to all!</p> <h1>Background</h1> <p>For the record, I want to state that I am really new to Kaggle and actually really new to machine learning. I only started to work on machine learning 9 months ago. My background is in 3D Computer Graphics, so I looked into the problems more from a 3D point of view than from a machine learning one. Pardon me if I made any silly mistakes with regard to machine learning.</p> <p>Before this competition, I won 2nd place in the CVPR 2019 competition WAD - Beyond Single-Frame Perception hosted by Baidu on a similar topic: 3D detection with Lidar points (<a href="http://wad.ai/2019/challenge.html">http://wad.ai/2019/challenge.html</a>). The tricks we used are actually based on that competition. So we just adopted and modified the tricks, and have no idea about the performance without them.</p> <h1>Models</h1> <p>Our method is based on Voxelnet (<a href="https://github.com/traveller59/second.pytorch">traveller59</a>), but our tricks can be applied to any other network. I gave PointRCNN a try but had no luck.</p> <p>Basic setting:</p> <ul> <li>No external data</li> <li>Data augmentation: flip x and y, random rotation, random scaling and random translation</li> <li>No ground truth augmentation</li> <li>Detection range [-100,-100,-5,100,100,3]</li> <li>No direction classifier</li> <li>No specific post-processing except score thresholding (0.1) and NMS</li> </ul> <p>Two types of voxelnet are implemented in second: FHD and PointPillars. We use both. Based on the idea that ensembling more diverse models gives better results, we use different settings to train a set of models. E.g. we use voxel size 0.1, 0.125, 0.2, 0.25 for PointPillars, and 0.1 and 0.125 for FHD. Different Voxel Feature Extractors are also used for different models (e.g. PillarFeatureNet, PillarFeatureNetRadius, PillarFeatureNetRadiusHeight). The RPN is modified so that the final resolution is 200 x 200 or 250 x 250.</p> <h1>Key Tricks</h1> <p>The key to our method is test time augmentation (TTA) and model ensembling with a specific 3D box fusion method.</p> <h2><strong>Test Time Augmentation</strong></h2> <p>We transform the point cloud into several copies. Each copy is then fed into the network to get predicted boxes. Then the predicted boxes are transformed back. For example, we can rotate the point cloud 20 degrees. After we get the predicted boxes (after NMS), we rotate the predicted boxes -20 degrees (both the center position and yaw angle are changed). We do this for many copies and fuse the results. During the competition, we only use 4 copies: original, flip x, flip y, flip x and y. We tried more copies with rotation, scaling and translation. It gives slightly better results but needs more inference time.</p> <h2><strong>Model ensemble</strong></h2> <p>We use TTA to get the predicted boxes for each model and fuse the results.</p> <p>Best Single Model (No TTA): &gt;0.175 (I only got the score for a model with TTA score 0.193.
But I got a better model with TTA score 0.197) Best Single Model (TTA): 0.197</p> <p>Ensembling the 7 best models: 0.220</p> <h2><strong>3D box fusion</strong></h2> <p>After we get several copies of predicted boxes (from TTA or model ensembling), we need to fuse them. We choose boxes with the same label and IOU &gt; 0.6 to fuse into one. After the competition, we tried "Weighted Boxes Fusion: ensembling boxes for object detection models". It actually gives better results (0.222 Public LB) but is slower. To fuse boxes, the center and size are directly weighted-averaged. The key is the yaw angle. If you predict the direction and can trust it, you may use sin and cos interpolation.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3701925%2F7c611366e6f5e39e70bbbe3d0b46d12e%2Fyaw.png?generation=1577071532448853&amp;alt=media" alt=""></p> <p>However, in our implementation, we did not use a direction classifier (it won’t affect the score and it cannot be 100% accurate). We need to deal with one ambiguity: a yaw angle is equivalent to the same yaw angle + 180 degrees. Say for one box, one prediction might give a yaw angle of 175 degrees or -5 degrees. <br> It does not matter since both give the same IOU with the ground truth. However, when we do box fusion, it matters. Say for the same box, another prediction gives a yaw angle of 5 degrees. If the first prediction is -5 degrees, the average is 0 degrees, which is okay. However, if the first prediction is 175 degrees, the averaged box will have a yaw angle of 90 degrees.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3701925%2F1b3f051743db56d52c52e61e943091c8%2Fyaw2.png?generation=1577071794662291&amp;alt=media" alt=""></p> <p>To deal with this problem, we use an intuitive method: average 2 * yaw instead of yaw. The idea is that sin and cos interpolation of the yaw angle can remove the 360-degree ambiguity, so sin and cos interpolation of 2 * yaw can remove the 180-degree ambiguity. I have no theoretical proof yet but intuitively feel it is correct. Actually the idea came into my head when I was telling a bedtime story to my daughter. She asked me to tell the story over and over again so I just repeated it😂 !</p> <p>I uploaded our code and pretrained models. You may give it a try. Have fun!</p> <p>p.s. For sparse convolution, since spconv used in second might be patented, we replaced it with <a href="https://github.com/facebookresearch/SparseConvNet">this one</a>. It is much slower. </p>
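<p>To make the "average 2 * yaw" trick concrete, here is a minimal sketch of yaw fusion for boxes whose heading is only defined up to 180 degrees (an illustration under assumptions, not the authors' released code; the weighting scheme is hypothetical):</p> <pre><code>import numpy as np

def fuse_yaws(yaws, weights=None):
    """Fuse yaw angles (in radians) that are ambiguous up to 180 degrees by averaging 2*yaw."""
    yaws = np.asarray(yaws, dtype=float)
    weights = np.ones_like(yaws) if weights is None else np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    s = np.sum(weights * np.sin(2.0 * yaws))
    c = np.sum(weights * np.cos(2.0 * yaws))
    return 0.5 * np.arctan2(s, c)

# 175 deg and 5 deg: naive averaging gives 90 deg, but this fusion gives ~0 deg
# (equivalent to 180 deg for a box), which is the desired result.
print(np.rad2deg(fuse_yaws(np.deg2rad([175.0, 5.0]))))
</code></pre>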
ASHRAE - Great Energy Predictor III
1st Place Solution Team Isamu & Matt
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to Kaggle and ASHRAE for hosting this competition. The decision to use only non-leak data for the private test set helped to make this competition fair. Thank you to all those who contributed to the kernels and discussions and especially those who made the leaks public. Last but not least, I'd like to thank and congratulate my teammate <a href="https://www.kaggle.com/yamsam">Isamu Yamashita</a> for being a great teammate and becoming a Competitions Master.</p> <p>During the competition, we shared ideas and discussed progress within our team, but we tested and trained separate models. This helped us maintain diversity in our final ensemble. This is a combined summary of our team's solution. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F331718%2Fc9acb654b6bdae701c9a9aa54c1bdd33%2FScreenshot%20from%202020-01-07%2013-15-21.png?generation=1578431784407091&amp;alt=media" alt=""></p> <h2>Preprocessing</h2> <h3>Remove anomalies</h3> <p>As others have noted, cleaning the data was very important in this competition. The assumption is that there are unpredictable and hence unlearnable anomalies in the data that, if trained on, degrade the quality of the predictions. We identified and filtered out three types of anomalies:</p> <ol> <li>Long streaks of constant values</li> <li>Large positive/negative spikes</li> <li>Additional anomalies determined by visual inspection</li> </ol> <p>We noticed that some of these anomalies were consistent across multiple buildings at a site. We validated potential anomalies using all buildings in a site--if an anomaly showed up at the same time at multiple buildings, we could be reasonably certain that this was indeed a true anomaly. This allowed us to remove anomalies that were not necessarily part of a long streak of constant values or a large spike.</p> <h3>Impute Missing Temperature Values</h3> <p>There were a lot of missing values in the temperature metadata. We found that imputing the missing data using linear interpolation helped our models.</p> <h3>Local Time Zone Correction</h3> <p>As noted in the competition forum, the timezone in the train/test data was different from the timezone in the weather metadata. We used the information in this <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/112841">discussion post</a> to correct the timezones.</p> <h3>Target Transformations</h3> <p>Like most competitors, we started by predicting <code>log1p(meter_reading)</code>. We also corrected the units for site 0 as per this <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/119261">discussion post</a>. </p> <p>Near the end of the competition, we tried standardizing <code>meter_reading</code> by dividing by <code>square_feet</code>; i.e., we predicted <code>log1p(meter_reading/square_feet)</code>. Isamu came up with the idea after reading this <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/122263">discussion post</a> by <a href="https://www.kaggle.com/artoby">Artyom Vorobyov</a>. 
The models trained with the standardized target added diversity to our final ensemble and improved our score by about 0.002. If we had more time we would have liked to explore this idea further; for example, we could have tried to predict <code>log1p(meter_reading)/square_feet</code> or created features using the standardized targets.</p> <h2>Feature Engineering and Feature Selection</h2> <p>We took different approaches to feature engineering and feature selection in this competition. Isamu took a conservative approach and carefully selected features; on the other hand, Matt took a brute force approach and used most of them. Here are the features that helped:</p> <ul> <li>Raw features from train/test, weather metadata, and building metadata</li> <li>Categorical interactions such as the concatenation of <code>building_id</code> and <code>meter</code></li> <li>Time series features including holiday flags and time of day features</li> <li>Count (frequency) features</li> <li>Lag temperature features similar to those found in the public kernels</li> <li>Smoothed and 1st, 2nd-order differentiation temperature features using Savitzky-Golay filter (see the figure below)</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F331718%2Fd5ec815eedab54ae3262680957de2fbd%2Fsmoothed_temperature.png?generation=1578280466482319&amp;alt=media" alt=""></p> <ul> <li>Cyclic encoding of periodic features; e.g., <code>hour</code> gets mapped to <code>hour_x = cos(2*pi*hour/24)</code> and <code>hour_y = sin(2*pi*hour/24)</code></li> <li>Bayesian target encoding (see this <a href="https://www.kaggle.com/mmotoki/hierarchical-bayesian-target-encoding">kernel</a>)</li> </ul> <h2>Models</h2> <p>We trained CatBoost, LightGBM, and MLP models on different subsets of the data:</p> <ul> <li>1 model per meter</li> <li>1 model per site_id</li> <li>1 model per (building_id, meter)</li> </ul> <p>Our team tried different approaches to validation in this competition. Like other competitors, we tried K-Fold CV using consecutive months as the validation set. The following code shows one approach to getting validation months:</p> <pre><code>&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; def get_validation_months(n):
...     return [np.arange(i, i+n) % 12 + 1 for i in range(12)]
...
&gt;&gt;&gt; get_validation_months(6)
[array([1, 2, 3, 4, 5, 6]),
 array([2, 3, 4, 5, 6, 7]),
 array([3, 4, 5, 6, 7, 8]),
 array([4, 5, 6, 7, 8, 9]),
 array([ 5, 6, 7, 8, 9, 10]),
 array([ 6, 7, 8, 9, 10, 11]),
 array([ 7, 8, 9, 10, 11, 12]),
 array([ 8, 9, 10, 11, 12, 1]),
 array([ 9, 10, 11, 12, 1, 2]),
 array([10, 11, 12, 1, 2, 3]),
 array([11, 12, 1, 2, 3, 4]),
 array([12, 1, 2, 3, 4, 5])]
</code></pre> <p>Trying different validation schemes allowed us to train models that added diversity to our final ensemble.</p> <h2>Ensembling</h2> <p>To reduce the risk of overfitting to the public LB and improve robustness, we ensembled predictions from many different models. 
Here are some of the things we did: * Used cleaned leak data as a holdout set to tune our second stage model * Averaged log values; i.e., <code>expm1(mean(log1p(x)))</code> rather than averaged the raw values * Used the generalized weighted mean and tuned the parameters using Optuna * Hedged our bets by including leak-free public kernels on cleaned data: * <a href="https://www.kaggle.com/rohanrao/ashrae-half-and-half">ASHRAE: Half and Half</a> by <a href="https://www.kaggle.com/rohanrao">Vopani</a> * <a href="https://www.kaggle.com/aitude/ashrae-kfold-lightgbm-without-leak-1-08">ASHRAE- KFold LightGBM - without leak (1.08)</a> by <a href="https://www.kaggle.com/aitude">Sandeep Kumar</a> * <a href="https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks">Ashrae: simple data cleanup (LB 1.08 no leaks)</a> by <a href="https://www.kaggle.com/purist1024">Robert Stockton</a></p> <p>This <a href="https://www.kaggle.com/mmotoki/generalized-weighted-mean">kernel</a> shows how we ensembled our predictions. Our final ensemble was a plain average of our top 4 submissions with respect to the public LB score. </p> <h2>What Didn't Work</h2> <ul> <li><p>Again, following this <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/122263">discussion post</a> by <a href="https://www.kaggle.com/artoby">Artyom Vorobyov</a>, we thought of ensembling the predictions for buildings with the same meter_readings; e.g., buildings 1225 and 1226 with meter 0. However, we were not able to improve our public LB result with this approach. It is possible that we did something wrong here, but we didn't have enough time to go back and explore this idea further. </p></li> <li><p>Smoothing the predictions of our models. We had initial success with smoothing the final predictions of our models, but after a certain point, we started to find that smoothing hurt our public LB score. Our guess is that smoothing helps spiky low quality predictions, but our ensemble predictions were already sufficiently smooth. </p></li> </ul>
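<p>For illustration, here is a minimal sketch of the Savitzky-Golay smoothed/differentiated temperature features listed in the feature section above (this is not the authors' code; the frame and column names, window length and polynomial order are assumptions):</p> <pre><code>import pandas as pd
from scipy.signal import savgol_filter

def add_savgol_temperature_features(weather, window=11, polyorder=3):
    """Add smoothed and 1st/2nd-order derivative air temperature features per site."""
    def smooth(series, deriv):
        # interpolate gaps first so the filter sees a continuous series
        filled = series.interpolate(limit_direction="both")
        return savgol_filter(filled, window, polyorder, deriv=deriv)

    grouped = weather.groupby("site_id")["air_temperature"]
    weather["air_temperature_smooth"] = grouped.transform(lambda s: smooth(s, 0))
    weather["air_temperature_diff1"] = grouped.transform(lambda s: smooth(s, 1))
    weather["air_temperature_diff2"] = grouped.transform(lambda s: smooth(s, 2))
    return weather
</code></pre>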
ASHRAE - Great Energy Predictor III
[3rd Place] Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to Kaggle and ASHRAE for hosting this competition.</p> <p>My solution is likely to disappoint some of you given the lack of sophistication. </p> <p>I spent most of my time (too much) trying feature engineering, trying to find a CV split correlated with the public LB (didn’t happen), trying to find a neural network architecture that would give me a boost in local CV and public LB (didn’t happen) and browsing research papers related to energy prediction in the subway. </p> <p>I am quite clueless about web scraping but thanks to <a href="https://www.kaggle.com/gunesevitan" target="_blank">@gunesevitan</a> for showing the way and teaching me a lot in the process nevertheless. </p> <p>Given the diverse experiments with different CV schemes I ran over the course of the competition, I decided to simply combine all the results I got (over 30) into a single submission using a simple average after selection by Pearson correlation (6th on private LB). <br> In two instances over the last 3 weeks, I used a leak-free subset of these experiments and used the leak to ensemble them, but I gave up because the public LB was quite poor despite a better local CV and I thought it was overfitting. However, as I selected this method as my alternative solution, it appears this was the best on the final private LB.</p> <h1>Preprocessing:</h1> <p>For lack of time, I only used the ideas and code of some excellent public kernels. </p> <p>I also wrote a script that ran for a few hours in which I eliminated all 0s that occur at the same site, in the same period and across all meters. <a href="https://www.kaggle.com/ganfear" target="_blank">@ganfear</a> wrote an excellent visualization <a href="https://www.kaggle.com/ganfear/missing-data-and-zeros-visualized" target="_blank">here</a>. My goal was to eliminate a maximum of "vertical" lines, especially if they were simultaneous. I believe that this is the trick that gave me an advantage. 
Using this preprocessed data in the top public kernels always gave me a better LB, so I was on the right track.</p> <h1>Feature Engineering:</h1> <ul> <li>Raw features from train/test, weather metadata, and building metadata</li> <li>Count features and combinations of features (see the excellent write-ups from the IEEE competition)</li> <li>Lags on raw features, but only the temperature feature lags seemed to be actually useful.</li> <li>Features mentioned in public kernels such as RH, feel_likes and the presence of other meters seemed to have a marginal upside.</li> <li>A feature found in a research paper, which I only included in my other submission during the last days and which worked well with meter==1, was:</li> </ul> <pre><code>latitude_dict = {0: 28.5383, 1: 50.9097, 2: 33.4255, 3: 38.9072,
                 4: 37.8715, 5: 50.9097, 6: 40.7128, 7: 45.4215,
                 8: 28.5383, 9: 30.2672, 10: 40.10677, 11: 45.4215,
                 12: 53.3498, 13: 44.9375, 14: 38.0293, 15: 40.7128}

train_df['latitude'] = train_df['site_id'].map(latitude_dict)
train_df['solarHour'] = (train_df['hour']-12)*15 # to be removed
train_df['solarDec'] = -23.45*np.cos(np.deg2rad(360*(train_df['doy']+10)/365)) # to be removed
train_df['horizsolar'] = np.cos(np.deg2rad(train_df['solarHour']))*np.cos(np.deg2rad(train_df['solarDec']))*np.cos(np.deg2rad(train_df['latitude'])) + np.sin(np.deg2rad(train_df['solarDec']))*np.sin(np.deg2rad(train_df['latitude']))
train_df['horizsolar'] = train_df['horizsolar'].apply(lambda x: 0 if x &lt; 0 else x)
</code></pre> <p>It is supposed to calculate the horizontal solar radiation coming into the building.</p> <h1>Models:</h1> <p>I trained Keras CNNs (@aerdem4 style), LightGBM and CatBoost on diverse versions of the cleaned data with various feature selections (including removal of building_id) - without ever beating the best leak-free kernel, to my disappointment. </p> <p>I had better success on local CV with meter-level models for the decision trees rather than putting all the meters in the same decision tree model. CatBoost and LightGBM clearly use different selections of features for their splits, so they are quite complementary.</p> <p>Only towards the end of the competition did I realize that the NN was giving approximately the same results as the decision tree models on both local CV and public LB, and that most of the performance was due to the cleaning more than to my feature engineering. However, the results were quite uncorrelated, so it was good for diversity.</p> <h1>Ensembling:</h1> <p>My best submission is an ensemble of a small sample of leak-free predictions (~10) among all my experiments. Using LightGBM, I used these different predictions as features, with the addition of the original feature “meter” in order to reduce overfitting. The leaky rows are used as the training set in the ensembling model.</p> <p>My idea was that the predictions for the first 11% of the test set (included in the public LB) were likely to be much better than those for the last 11% of the test set (private LB), so if I were to use the leak on the test set, I might be able to correct the last 11%. I was expecting a small hit on the public LB, but my public LB took a much larger hit than expected (+.025), so I thought I was overfitting and didn’t pursue that path. 
I kept that submission as my second final submission, which became my best on the private LB.</p> <h1>Didn’t work for me or didn’t give any significant improvement:</h1> <p>Pseudo labeling (read <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s posts for more information)<br> NN encoder-decoder using the leak in <a href="https://www.kaggle.com/mjahrer" target="_blank">@mjahrer</a> style (reniew seems to have succeeded in this direction: see <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/122203#699375" target="_blank">here</a>)<br> Thermometer encoding on cloud coverage<br> Treating temperature features as categorical variables (given the limited number of values) and using target encoding.<br> Adaptation of <a href="https://www.kaggle.com/anokas" target="_blank">@anokas</a>'s tool for time series features<br> …</p> <p>Otherwise, I would like to thank all the people who have taught me a lot on Kaggle over the years, among whom <a href="https://www.kaggle.com/Raddar" target="_blank">@Raddar</a>, <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> and <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>.</p> <p>Update:<br> GitHub code available <a href="https://github.com/chabir/kaggle_materials/tree/master/ASHRAE%20-%20Great%20Energy%20Predictor%20III%20-%203rd%7C3614" target="_blank">here</a></p>
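<p>As a rough illustration of the "vertical lines" cleaning idea described in the Preprocessing section, here is a minimal sketch under assumptions (the 90% threshold, the merged <code>site_id</code> column and the exact grouping are not the author's actual script):</p> <pre><code>import pandas as pd

# train is assumed to be already merged with building_metadata so that site_id is available.
# Flag timestamps where (almost) all readings at a site are zero, then drop those rows.
zero_flag = (train['meter_reading'] == 0).astype(float)
zero_share = zero_flag.groupby([train['site_id'], train['timestamp']]).transform('mean')
simultaneous_zeros = (train['meter_reading'] == 0) &amp; (zero_share &gt; 0.9)
train = train[~simultaneous_zeros]
</code></pre>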
ASHRAE - Great Energy Predictor III
40th place Public solution. My part.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As we are still waiting for the private leaderboard, I have decided to share the solution that led us to 40th place on the public LB. Enjoy the reading, good luck with the shakeup and a happy new year to everyone.</p> <h1>Data cleaning</h1> <p>The best score boost in this competition was achieved thanks to data cleaning. There are plenty of training examples that might be removed. For example, building_id's from 0 to 104 have all 0's (or almost all 0's) for the period from January 1st 2016 until May 20th 2016. So all of these samples should be removed from the training set. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1696976%2F9fe426ea143a9f69c307e6f6c3d5b818%2Fexample1.png?generation=1577093876606046&amp;alt=media" alt=""></p> <p>Some buildings have suspicious 0 readings somewhere in the middle, like building_id 1066 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1696976%2Fc0cbed9e66824cc0ec29b507b2b66ad4%2Fexample2.png?generation=1577093910902899&amp;alt=media" alt=""> Code to remove data: <code>"not (building_id == 1066 &amp; meter == 0 &amp; timestamp &gt;= "2016-02-13 20:00:00" &amp; timestamp &lt;= "2016-02-29 11:00:00")"</code></p> <p>And another example - building_id 1250 &amp; meter 2. It has all 0's until the middle of December. So my guess was that this type of meter was not functioning until it started in December. Based on that guess, the small amount of non-zero data is very useful, especially because the readings are so high (around 8000). <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1696976%2F71207e050b418773c037b7c38566adce%2Fexample3.png?generation=1577094162553267&amp;alt=media" alt=""> Code to remove data: <code>"not (building_id == 1250 &amp; meter == 2 &amp; timestamp &lt; "2016-12-21 16:00:00")"</code></p> <p>I have been using my plots from <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/115248">this post</a> to visually understand which readings might be removed.</p> <p>Overall, our file with data cleaning queries contains 102 lines, covering both the training set and the leaked set. Yes, we did clean the leaked data and used it as training examples.</p> <h1>CV strategy</h1> <p>The 2-fold strategy from <a href="https://www.kaggle.com/rohanrao/ashrae-half-and-half">Half and half</a> showed really good performance. But since it splits all the data simply in the middle, it has a disadvantage: some of the buildings have samples only starting from June, July or even November, so they would only be present in one fold.</p> <p>So I created a slightly different strategy - using half and half, but now not on the whole training set at once, but for every meter of every building in it. In other words - if a building has readings only for 6 months, starting from July, then the 1st fold will contain all the readings from July till September and the 2nd fold will contain the readings from September till December.</p> <p>Below is a picture with an example. Note that the data for building_id 555 starts only from July. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1696976%2F7144e4deac39294f9c60440415f73fbe%2Fexample_cv.png?generation=1577094834892628&amp;alt=media" alt=""></p> <p>And here is the code to create two folds for the whole dataset, splitting each meter of each buildings somewhere in the middle:</p> <p>``` fold1 = [] fold2 = [] folds = [fold1, fold2]</p> <h1>Iterating over each building</h1> <p>for building in tqdm_notebook(range(1449)): # And over each possible meter type for each building for meter in range(4): sub_df = train[(train['building_id']==building) &amp; (train['meter']==meter)] # Dividing by two, because we will train on 2 folds samples = int(sub_df.shape[0] / 2) # If there are any samples for the specific meter type of the specific building... if samples: # Filling indexes for both folds. # They will be splitted by time, since at this moment train is sorted by time. fold1 += list(sub_df.index[:samples]) fold2 += list(sub_df.index[samples:]) ```</p> <p>This CV strategy performs better and OOF score better aligns with LB.</p> <h1>Feature engineering</h1> <p>We have used around 50 features in our model. Some of the features are well known from public kernels, such as lags. But some of them were mined using CV strategy.</p> <p>One of the features is a temperature difference between two readings for each building_id. It improved a score a bit. Code example: <code>train['air_temperature_diff1'] = train['air_temperature'] - train.groupby(['building_id', 'meter'])['air_temperature'].shift(1)</code></p> <p>Also we have used holidays as a features, SIN\COS transformations and so on.</p> <h1>Model</h1> <p>We mostly used LightGBM as it showed best performance. I believe we used only one CatBoost model for the blending. Neural Network didn't show a good performance. Final submission is a blend of around 10 models (or some number around that).</p> <h1>Leaked data</h1> <p>Yes, we used leaked data both for training and in the submission file to replace our predictions with ground truth labels.</p>
ASHRAE - Great Energy Predictor III
4th Place Question
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>According to the ranking published <a href="https://www.kaggle.com/sohier/ashrae-interim-scores#final_leaderboard_order.csv">here</a>, I got 4th place. This is the first time that I have won a gold medal or even a prize. I am very happy. But I'm a newbie and I don't know much about the following rules of the <a href="https://www.kaggle.com/c/ashrae-energy-prediction/overview/prizes">PRIZES page</a>: `Because this competition is being hosted in coordination with the ASHRAE Organization and its Winter or Annual meetings in 2020, winners will be invited and strongly encouraged to attend the conference, contingent on review of solution and fulfillment of winners' obligations.</p> <p>Note that in addition to the standard Kaggle Winners' Obligations (open-source licensing requirements, solution packaging/delivery, presentation to host), the host team also asks that you create a short video (under 5 minutes) summarizing your solution.`</p> <p>Would you mind telling me what obligations I should fulfill and what I should do? Thank you very much!</p> <p>The 4th place solution write-up is being prepared.</p>
NFL Big Data Bowl
7th place (public LB) solution : NN + Survival LGBM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, a big thanks to the organizers for this very interesting competition. The task is challenging and the player tracking data is so fascinating. Here is a summary of our final solution. It is a mix of automatic feature learning and manual feature engineering. The final solution is a blend of 1 deep learning model and 1 LGBM. The DL model and the write-up are mostly my part, while Sebastien worked on the survival modelling with LGBM. </p> <p>Thanks <a href="/sebastienconort">@sebastienconort</a>, it was a pleasure to work together on this competition.</p> <h3>Validation</h3> <p>We used the last 2000 play_id as validation. At the beginning it had a stable gap of 0.0001 with the LB, but in the end we started to overfit the validation. In hindsight, we may have treated this too lightly, and some other teams have shared better validation schemes.</p> <h3>Survival modelling with LGBM for learning “Yard” distribution - by <a href="/sebastienconort">@sebastienconort</a></h3> <p>In this part, unlike many competitors, we did not approach the problem as a multi-class problem, but as a regression problem, with a survival modelling approach on the number of Yards reached. The “time” variable is the number of “yards run so far” and the “death” event is the rusher being stopped. If you are interested in more details, see the note at the end of this post (Notes on Survival Modelling for Yard prediction). Depending on time and feedback from readers of this post, we might publish a tutorial notebook describing this approach, which is very useful for churn prediction or insurance pricing 😄 Without the NN embeddings described hereunder, and with descriptive variables on Defense and Offense players ordered by their distance to the Rusher, this LGBM approach reached 0.0131 on the public LB. </p> <h3>NN Architecture</h3> <p>The LGBM takes as inputs some intermediate embeddings of the NN together with other variables. </p> <p>The deep learning architecture has different parts:</p> <ul> <li>An MLP that generates an interaction embedding for each pair (player_i, player_j).</li> <li>These interaction embeddings are summed for each player_i, then concatenated with the other inputs for player_i. This is then fed into another MLP to create a player embedding.</li> <li>The sum of all player embeddings is concatenated with play-level features before being fed to the final classifier.</li> </ul> <p>Globally, the interaction part is like a Graph NN with 1 round of message passing. Separating player features, context features, and the terrain limit features (YardLine and DistancetoYardline) and treating them separately makes it easier to regularize the network.</p> <h3>Features</h3> <ul> <li>Most values are from player positions and speeds; we found that play-level variables helped a little. </li> <li>We projected positions 0.5s, 1s and 1.5s ahead using the Speed, and these were the best features for the LGBM. However, we did not see an improvement when doing this with the NN. We also did not see an improvement when adding Voronoi features on top of distances.</li> <li>For the NN, we found a big gain from including in the interaction features the relative position of player_j projected on the direction from player_i to the rusher. 
</li> </ul> <h3>Deployment</h3> <p>One of the challenges of the competition is the deployment. In the end, we modified our code to be quite conservative. We also chose to paste the weights in the kernel and re-finetune the model from those weights. The LGBM is always retrained since it is fast (~1 min). Finally, for the 2 submissions, we chose 1 with NN finetuning and 1 without.</p> <h3>Tricks that helped</h3> <ul> <li>For deep learning, we fit a blend of the 10 best parameter sets found by random search, then distill the output into 1 DL model. The weights of this final model are pasted in the solution.</li> <li>At some point we made the mistake of applying batchnorm to the inputs, which means measures on the X and Y axes are scaled differently. This made learning very unstable. Removing it and scaling the inputs manually helped.</li> <li>Some tricks that helped a little bit each time: <ul><li>augment data by flipping the Y-axis</li> <li>add noise to the target in training</li> <li>add an l1 penalization on the differences between consecutive bins of the softmax output (fused lasso)</li> <li>Rotate Orientation by 90° in 2017</li></ul></li> </ul> <h3>What did not help</h3> <ul> <li>We tried a Transformer many times but it did not do better than our architecture. Maybe we should try again after reading other solutions :)</li> <li>We tried to add more layers of interactions between players but it did not seem to help either</li> <li>We tested different activations like ELU and SELU but ReLU did much better.</li> <li>Normalizing S and A in 2017 did not work for us. We did not think of treating these as missing values like <a href="https://www.kaggle.com/wimwim">https://www.kaggle.com/wimwim</a>. Huge regret, and big kudos to him for this nice data prep work.</li> <li>We tried to predict the real distance travelled by the rusher instead, since it is more natural from our point of view. However, to our surprise it was worse; our hypothesis is that this is because the defense is organized around the yardline more than around the runner. But we are interested to hear experts' opinions about this.</li> </ul> <h3>Notes on Survival Modelling for Yard prediction</h3> <p>Survival modelling consists in learning the distribution of a “death” event while dealing with potential censoring or truncation in the observation of that event: either the Survival Function (= 1 – Cumulative Probability Distribution) is learned, or the Cumulative Hazard Function, or the Hazard Function. Knowing one of these functions lets you know all the others, as there is a 1-to-1 mapping. In this challenge, there was neither truncation nor censoring, but we were eager to test this survival approach anyway. A nice generic approach to learning the Hazard Function is to assume it is piecewise constant (this approach is referred to in the literature as the Piece-Wise Exponential Model, and is implemented for instance for linear models in the Lifelines python library). Here we assumed the constant step of our piecewise constant function was 1 yard. This approach enabled us to use an LGBM regressor with a Poisson objective. 
Indeed in the case of the Piece-Wise Exponential Model, the negative log-likelihood of the survival problem is equal (+ a constant term not important for the loss minimization) to the negative log-likelihood of a Poisson regression, the so-called “Poisson trick” as explained in chapter 7.4 of these Princeton class notes (<a href="https://data.princeton.edu/wws509/notes/c7.pdf">https://data.princeton.edu/wws509/notes/c7.pdf</a> ) To be able to apply this approach, the data set has first to be extended: each line is duplicated in multiple lines from -99 to “Yard” which is the death event. Then two new columns are added in the dataset : a descriptive variable being “yards run so far” taking values from -99 to “Yard”, and a new target (let’s name it “Stop” variable) being equal to 0 everywhere except at lines where “yards run so far” == “Yard” (death event). The “yards run so far” variable is very important to be included in the regression, for the LGBM to learn the shape of the Hazard Function. The disadvantage of this approach is that it increases RAM consumption because of data duplication. But it has the advantage of having only one regression target. As in the train set the minimum observed value for “Yard” is -14, we suppressed all lines with ‘yards run so far” equal from -99 to -15, and extrapolated manually with a 0 value our Hazard Function in post processing. Once our LGBM is trained, it is able to predict the Hazard Function for different “yards run so far” values, which gives the probability of the rusher being stopped knowing he has not been stopped so far. To compute the Cumulative Distribution Function, we predict the Hazard Function for values of “yards run so far” from -99 to 99, and we postprocess the resulting numpy array “HF” with the following formula : 1 – np.exp (- np.cumsum(HF)).</p>
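<p>To make the data expansion and the post-processing of the predicted hazard concrete, here is a minimal sketch (an illustration under assumptions - the frame layout, column names and helper names are hypothetical, not the authors' code):</p> <pre><code>import numpy as np
import pandas as pd

def expand_for_survival(plays, min_yard=-15):
    """Duplicate each play into one row per 'yards run so far' with a 0/1 'stop' target."""
    rows = []
    for play_id, yards in zip(plays['PlayId'], plays['Yards']):
        run_so_far = np.arange(min_yard, yards + 1)
        rows.append(pd.DataFrame({
            'PlayId': play_id,
            'yards_run_so_far': run_so_far,
            'stop': (run_so_far == yards).astype(int),  # the "death" event
        }))
    return pd.concat(rows, ignore_index=True)

# An LGBM regressor with objective='poisson' trained on 'stop' predicts the hazard.
# At inference, predict the hazard for yards_run_so_far = -99..99 and convert it to a CDF:
def hazard_to_cdf(hazard):
    return 1.0 - np.exp(-np.cumsum(hazard))
</code></pre>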
ASHRAE - Great Energy Predictor III
[497th place shake down solution] PUBLIC LB 1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Since we can “feel free to post solutions and write-ups” now according to <a href="https://www.kaggle.com/c/ashrae-energy-prediction/discussion/122296">https://www.kaggle.com/c/ashrae-energy-prediction/discussion/122296</a> I summarize my solution.</p> <p>I appreciate ASHRAE and Kaggle for this really interesting competition. In spite of some terrible leakage, this competition is not destroyed thanks to sincere kagglers and rational rules clarification by organizers.</p> <p>[Feature Engineering and Feature Selection] -Final model used only 18 features. -I tried a lot of features but many were not useful. -Though dew temperature has high feature importance, I removed dew temperature because CV was better without it. I think air temperature is more important and dew temperature harms the contribution of air temperature. -Categorical Multiplication had a slight positive impact. For example site_id_and_meter.</p> <p>[Validation Strategy] -Mainly based on [1-6] train [7-12] valid. [1-6] means January to June. -Also watched [7-12] train [1-6] valid, and [1-9] train [10-12] valid. -After leakage, I also validated by site0,1,2,15.</p> <p>[Outlier Removal] -The most important part. -I plotted and watched all time-series carefully, and extracted so many rules that may detect outlier. I evaluated over 100 rules by the above Validation Strategy one by one. -Examples of effective rules: “Meter 0 and long 0 segment”/”Meter 2 and start before April and long 0 segment”/”0 segments in many building at the same time”/”day average is too small in January comparing with December”.</p> <p>[Single Model] -Only LGBM. -I tried CatBoost/time-series based methods but not worked. -Tuned Parameters: num_leaves=127, feature_fraction=0.6.</p> <p>[Ensemble] -Totally 32 LGBM models. -4 random seed averages for each model. -[1-6] train model and [7-12] train model, similar to half-and-half kernel. -Optimized weights for each model (Important). --Basically, weight for [7-12] model was set higher than weight for [1-6], since [7-12] model was created from recent data. --Furthermore, weight for [7-12] model was set higher for test data from July to December. --Weight for [1-6] model was set higher for test data whose time-series has few train data (due to outliers) in [7-12].</p> <p>[PostProcess] -Plotted all time-series before submission and extracted some effective postprocess rules that may improve LB score. --Modified to smaller value by applying piecewise linear function f. For example, 9.8 before log1p is modified to f(9.8)≒9.7. I searched optimal f and LB improved around 0.002. I think it is effective because there is some “unpredictable 0” but there is few “unpredictable large value“ (Important). --Clipped to [0.1, 10.5] before log1p.</p> <p>[Leak] -Replaced leaked label just before submission (site0,1,2,15 and part of site4) -Used for validation (site0,1,2,15) -No other external data, no leaked data in training data, no other methods to exploit leakage.</p> <p>Thank you for all comments! I will try to reply all questions within 24-48 business hours.</p>
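<p>As an illustration of the seasonal ensemble weighting described above, here is a minimal sketch (the weights, variable names and month rule are hypothetical, not the tuned values):</p> <pre><code>import numpy as np

# pred_h1 / pred_h2: log1p-scale predictions from the [1-6]-trained and [7-12]-trained models.
# test_month: the month of each test row. All names and weights here are illustrative.
def seasonal_blend(pred_h1, pred_h2, test_month):
    w_h2 = np.where(np.isin(test_month, np.arange(7, 13)), 0.70, 0.55)
    return w_h2 * pred_h2 + (1.0 - w_h2) * pred_h1
</code></pre>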
ASHRAE - Great Energy Predictor III
25th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey Dear Kagglers :)</p> <p>First of all, congratulations to all winners and participants! And many thanks to all contributors for their amazing kernels and topics!</p> <p>Wanted to put my 2 cents in - so you can find a brief overview of my solution below.</p> <h2>Motivation</h2> <p>Started work on the competition quite late - just two weeks before the end and my main motivation was to experiment around the use of deep learning with tabular data (and for this kind of problems).</p> <h2>Architecture</h2> <p>Like probably most of the participants I ended up with 2 layer learning architecture: base layer and ensemble.</p> <h3>Base layer:</h3> Feature engineering <p>Because of my late start, I didn't really have the time for something fancy (FE is usually my main focus. Not this time though :smile: ). I ended up with 4 different datasets. On all, I applied cleansing (filtered out the bad rows) as per this great kernel (please upvote: <a href="https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks">https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks</a>). Used interpolation to fill in the missing values (on all), added "is_missing" features (to one of the sets). In addition generated some: * weather-based features (aggregates/grouping, lag/rolling (moving averages/max/min etc)) * time-based features (including holidays for one of the sets) * target encodings (in one of the sets).</p> <p>I draw inspiration from many great kernels in the process - check the list below and please go and upvote the amazing work of the authors - they deserve it!</p> Models <p>My main focus was here and my goal was to have as diverse models as possible. I did NOT touche the leak data in any of the models of the base layer. Trained a total of 19 models. Amongst those:</p> Deep NNs <p>As I mentioned my goal was mostly to play around these - so I put significant effort here. My best NNs didn't disappoint - they were approaching the performance of the best public LGBMs (1.09 LB) - but of course, had a very different "point of view" on the problem. That made them extremely useful in the final ensemble, and to that, I mostly attribute my good final score. Some things that worked great here: * entity embeddings for the categorical variables * Radam optimizer (that came as a bit of surprise - Radam didn't appear to make a huge difference for me on CV competitions - but here it showed significant advantage. Maybe the size of the network, or the complexity of the problem ... will dig deeper into that for sure) * Adam with CyclicLR + ReduceLROnPlateau schedulers + longer training * weather lag features + (few) time features Things that didn't work: * more features (not surprised) * "is_missing" features (kind of surprised)</p> <p>In short - very happy with my findings. 
Deep NNs will definitely be part of my considerations in future tabular data competitions.</p> LGBMs (of course :smile: ): <ul> <li>models per site, meter, half-half and on all data - trained on different datasets.</li> <li>my best scoring model was per site (performing significantly better than the best public models (1.072 LB))</li> </ul> L1, L2 regression models - just to have one more "opinion": <ul> <li>per site and all data</li> <li>made sense for this more linear kind of a problem.</li> </ul> <h3>Ensemble</h3> <p>The second major thing that boosted my score. Based on my previous experience, stacking works better than blending when testing data is not cardinally different (and that can be argued of course). (NOT) very small disclaimer here: * this is not valid in all cases * well... how similar is that test data really... * how proper your base layer is - is it diverse enough? Are you overfitting there already? Do you introduce leak in the meta model training data if use diverse validation/oof prediction schemes for the base models? * there are also many other (important) variables - like what algorithm will be used for the meta learner, how it will be tuned, validated etc. * very risky approach in general (overfitting is BIG concern...).</p> <p>Still ... it worked well for me in the past. And worked this time as well. Let me give you some numbers: * my best stack: 1.245 (PL) * my best blend: 1.270 (PL)</p> <p><strong>What worked well here:</strong> simple stuff (GLM, Gaussian, 5-fold CV, some regularization, adding meter to the training set). Performed very well for me also in the past (Elo Merchant Category Recommendation competition for example). <strong>What didn't work that great:</strong> more sophisticated stuff - Random Forest and even XGB, LGBM. All tend to overfit on my base data...</p> <p>These are pretty much the points, I want to make. One more time - please spend a minute to upvote the amazing work of the contributors below. 
And upvote this, in case you find it useful :smile: Cheers folks and "see" you on the next competition.</p> <p>Great kernels that deserve your vote: <a href="https://www.kaggle.com/rohanrao/ashrae-divide-and-conquer">https://www.kaggle.com/rohanrao/ashrae-divide-and-conquer</a> <a href="https://www.kaggle.com/corochann/ashrae-training-lgbm-by-meter-type">https://www.kaggle.com/corochann/ashrae-training-lgbm-by-meter-type</a> <a href="https://www.kaggle.com/isaienkov/keras-nn-with-embeddings-for-cat-features-1-15">https://www.kaggle.com/isaienkov/keras-nn-with-embeddings-for-cat-features-1-15</a> <a href="https://www.kaggle.com/roydatascience/ashrae-energy-prediction-using-stratified-kfold">https://www.kaggle.com/roydatascience/ashrae-energy-prediction-using-stratified-kfold</a> <a href="https://www.kaggle.com/iwatatakuya/ashrae-kfold-lightgbm-without-building-id">https://www.kaggle.com/iwatatakuya/ashrae-kfold-lightgbm-without-building-id</a> <a href="https://www.kaggle.com/kailex/ac-dc">https://www.kaggle.com/kailex/ac-dc</a> <a href="https://www.kaggle.com/rohanrao/ashrae-half-and-half">https://www.kaggle.com/rohanrao/ashrae-half-and-half</a> <a href="https://www.kaggle.com/nz0722/aligned-timestamp-lgbm-by-meter-type">https://www.kaggle.com/nz0722/aligned-timestamp-lgbm-by-meter-type</a> <a href="https://www.kaggle.com/aitude/ashrae-kfold-lightgbm-without-leak-1-08">https://www.kaggle.com/aitude/ashrae-kfold-lightgbm-without-leak-1-08</a> <a href="https://www.kaggle.com/mimoudata/ashrae-lightgbm-without-leak">https://www.kaggle.com/mimoudata/ashrae-lightgbm-without-leak</a> <a href="https://www.kaggle.com/starl1ght/ashrae-stacked-regression-lasso-ridge-lgbm">https://www.kaggle.com/starl1ght/ashrae-stacked-regression-lasso-ridge-lgbm</a> <a href="https://www.kaggle.com/tunguz/ashrae-histgradientboosting">https://www.kaggle.com/tunguz/ashrae-histgradientboosting</a> <a href="https://www.kaggle.com/yamsam/ashrae-leak-data-station">https://www.kaggle.com/yamsam/ashrae-leak-data-station</a></p>
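<p>As a closing illustration of the entity-embedding NNs used in the base layer above, here is a minimal Keras sketch (the single categorical input, layer sizes and names are simplifying assumptions, not the author's architecture):</p> <pre><code>import tensorflow as tf
from tensorflow.keras import layers

def build_tabular_nn(n_buildings, n_numeric_features, emb_dim=16):
    """A tiny tabular NN: one embedded categorical input plus dense numeric inputs."""
    cat_in = layers.Input(shape=(1,), name='building_id')
    num_in = layers.Input(shape=(n_numeric_features,), name='numeric_features')
    emb = layers.Flatten()(layers.Embedding(n_buildings, emb_dim)(cat_in))
    x = layers.Concatenate()([emb, num_in])
    x = layers.Dense(256, activation='relu')(x)
    x = layers.Dense(64, activation='relu')(x)
    out = layers.Dense(1, name='log1p_meter_reading')(x)
    model = tf.keras.Model(inputs=[cat_in, num_in], outputs=out)
    model.compile(optimizer='adam', loss='mse')  # MSE on the log1p target approximates RMSLE
    return model
</code></pre>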
NFL Big Data Bowl
Public 14th Solution - Transformer
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for such an interesting competition! This is the first time I've used a NN as my main model, and I've learned a lot from this competition ;)</p> <p>After a quick glance at the training data, I thought this competition was quite similar to the CHAMPS Predicting Molecular Properties competition - so I decided to use a transformer in the early stage of the competition.</p> <h3>Model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1532198%2F4c07f028121a2d4eacc48a4c78f1ab57%2FUntitled%20Diagram%20(1).png?generation=1574937511409845&amp;alt=media" alt=""></p> <p>My architecture is 3 pointwise convolution layers (like PointNet) plus 4 transformer encoders followed by a few dense layers. I tried multi-headed attention, but it didn't work for me. The transformer implementation is based on PyTorch 1.3, but slightly modified (changing the post-LN architecture to pre-LN; see this paper for details <a href="https://openreview.net/pdf?id=B1x8anVFPr">here</a>).</p> <h3>Features</h3> <p>I use 17 features in total. Except for YardLine and Distance, the 15 remaining features are calculated for each player (so a 15 x 22 feature matrix is fed into the transformer encoder). All features are quite simple, but the most important one is the angle from the Rusher's moving direction. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1532198%2F6b57f4d27053e5bc50d04731485524bd%2FUntitled%20Diagram.png?generation=1574935570947304&amp;alt=media" alt=""></p> <p>These polar coordinates based on the rusher's Dir gave me a big boost (~0.0003). I also calculate this angle 0.001s after the given time and use the difference between θ(t) and θ(t+0.001) as a feature. </p> <p>List of all features used:</p> <ul> <li>YardLine</li> <li>Distance</li> <li>X, Y, S, A, Dis, PlayerWeight</li> <li>IsBallCarrier</li> <li>IsOffence</li> <li>sin(Dir), cos(Dir)</li> <li>dX, dY, atan(dY, dX) from Rusher</li> <li>angle, angle variation from Rusher's direction (described above)</li> </ul> <p>That's all :) CV and adversarial validation were used to remove irrelevant or suspicious features.</p> <h3>Validation</h3> <p>In addition to standard GroupKFold, I also tried time-split validation (discussed in <a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/113861#latest-658998">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/113861#latest-658998</a>). Both worked well for me, but their validation scores differ a lot. So I chose one model from each validation strategy for the final submission.</p> <h3>Other techniques that (maybe) worked</h3> <ul> <li>AdamW optimizer (slightly better than Adam)</li> <li>Gaussian noise and Y-shift augmentation on player vectors</li> <li>TTA (20x augmentation, adding gaussian noise on Y and Dir)</li> <li>Snapshot Ensemble</li> <li>Downsampling 2017 data (randomly drop 40% of the data for each epoch)</li> <li>Keeping the length of an epoch constant (12,000 samples = 1 epoch), regardless of the total number of training samples <ul><li>This helped me estimate the running time in the 2nd stage</li></ul></li> </ul> <p>EDIT: Here is my solution code. 
<a href="https://www.kaggle.com/nyanpn/pytorch-transformer-public-14th-private-22th">https://www.kaggle.com/nyanpn/pytorch-transformer-public-14th-private-22th</a></p>
ASHRAE - Great Energy Predictor III
PUBLIC LB 24th SOLUTION by team Electrium Z
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ASHRAE - Great Energy Predictor III <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As we are spending these holidays glancing at the LB again and again while waiting for the definitive results, we decided to share our approach in this competition already, both to pass some time sharing knowledge and ideas and because we may be too shocked later (either in a positive or a negative way ¯_(ツ)_/¯ ) to write it down then. </p> <p>Before starting, a big mention to all the team that made this possible and that fought until the very end, in spite of a lot of disappointing events (cheating and private sharing) about which you already know: Pietro Marinelli, Stefano Morelli, Serigne Cisse, Amine Marzouki :-)</p> <p>Here we start:</p> <p>[Feature Engineering and Feature Selection]</p> <p>We first processed the weather data by adding the missing timestamps for each site; then we estimated the missing values for "air_temperature", "cloud_coverage", "dew_temperature", "precip_depth_1_hr", "sea_level_pressure", "wind_direction" and "wind_speed" using interpolation. </p> <p>We then extracted temporal data, based both on UTC time and on the local time we presumed for each site. Apart from month, week, day and hour, we notably also extracted the day of the week and the day of the year, correcting for the fact that 2016 was a leap year (and the following two were not). Then we fused all training, weather and building metadata together.</p> <p>As for feature creation:</p> <ul> <li>We created the delta between the min and max air temperature by hour at each site</li> <li>We computed the sin and cos transformations of time data, such as hour, day of the week and day of the year (they actually didn’t prove very useful)</li> <li>We created group stats (min, max, mean, median, variance) for weather measures by grouping by site, month, day and hour.</li> <li>We found it useful to compute the wind delta with respect to 1, 2 and 3 hours before.</li> <li>We computed some holidays by site and modelled them using radial basis functions in order to catch their effect beyond the holiday day itself.</li> <li>We used a 7-day rolling mean of air temperature to compare against the actual temperature</li> <li>We computed the expected average floor size of each building</li> </ul> <p>[Validation Strategy] We adopted two learning strategies: the first required downsampling the meter 0 and meter 1 cases (by 50% and 35% respectively) so that a single model could learn all the meters together; the second built a single model for each meter. In the end we also tried varying the strategy, such as building a model for working days and another one for weekends and holidays, or a model for each meter-by-site combination, but the first two strategies proved the most effective. Stefano also tried some pseudo-labelling. In all these cases we learned a model for each fold iteration in cross-validation. Initially we opted for a 5-fold stratified CV, but later we found out that a 3-fold strategy brought better LB results. We also noticed that by increasing the number of folds we obtained better OOF scores, which didn’t translate into better LB performance, anyway. </p> <p>[Outlier Removal] We watched for outliers both in the training and in the prediction phases. 
In the training phase we decided to remove building 1099 and all the readings from building 104 for meter 0 before 2016-05-20. In the prediction phase we monitored for outlying predictions, but we noticed that by ensembling, anomalous values were naturally flattened out.</p> <p>[Leak] After the leak was exposed for sites 0,1,2,15 and it became public to everyone, we started using the leaked data for ensembling, that is, for finding the best combination of our models. In our submissions to the LB, after finding the best combination, we also replaced our estimates for sites 0,1,2,15 with the leaked labels, since we expected everyone to do the same O_o and because it helped us understand whether we were also improving on the other sites, which we were sure would be used for the final test. We didn't use the leaked data in any other way.</p> <p>[Single Models] We built a large number of models: lightgbm (also dart), catboost, xgboost, dnn, random forest. In the end we had over 30 models, though we decided to use only about half of them, because they were the most effective according to our ensembling strategy.</p> <p>[Ensembling] We ensembled using the leaked data as test data. In fact, we expected our model to predict well on all sites if it was performing fine both on the leaked data and on the public leaderboard. In order to obtain a good ensemble for this purpose, we developed a Bayesian optimization over a linear combination of the different meter predictions. Since we noticed that certain models worked best on specific meters, we decided to give a separate weight to each meter in a prediction. We also put a constraint on the optimization so that all weights were non-negative. Through successive iterations we found out that there was a sweet spot: when optimizing too much for the leaked data, at a certain point the LB score decreased. This information helped us focus on certain models to be blended and exclude others.</p> <p>[No one expects the Spanish Inquisition, aka the “shake up”] Well, but we did. It had long been clear to all of the team that optimizing for the leaked sites could prove a good strategy for scoring high on the public LB, but it might also turn into huge overfitting, penalizing predictions for the other sites (in spite of our hopes for generalization). Consequently, we decided to prepare a plan B, which we gave the codename “the Big Mean”. In this alternative plan, we simply took all our best LB submissions and their OOF predictions and averaged them, slightly overweighting the submissions whose OOF scored higher on the sites that were not part of the leak. Based on our experience in many other forecasting challenges, especially when you have to forecast far into the future (as in this competition), sometimes simple stuff like a plain mean does exactly the kind of magic you need.</p> <p>Will all our work lead to a good final position? We don’t know, but in the end we found that we would have liked to explore more about holidays at the different sites and to do more data cleaning of possible outliers in the training data, instead of relying on a blend that could smooth out wrong estimates. Anyway, as a team we tried really a lot, with everyone bringing ideas and testing solutions. In any case we all learned a lot and enjoyed exploring solutions for this competition. Let’s see what happens with the new year :-) Meanwhile, have a great holiday season!</p>
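<p>To illustrate the radial-basis-function holiday encoding mentioned in the feature creation list above, here is a minimal sketch (the bandwidth, the holiday list and the column names are illustrative assumptions):</p> <pre><code>import numpy as np
import pandas as pd

def rbf_holiday_feature(timestamps, holidays, bandwidth_days=3.0):
    """Gaussian bump around each holiday so its effect extends to the surrounding days."""
    days = pd.to_datetime(timestamps).dt.floor('D')
    feature = np.zeros(len(days))
    for holiday in pd.to_datetime(holidays):
        delta_days = (days - holiday).dt.total_seconds() / 86400.0
        feature = np.maximum(feature, np.exp(-(delta_days / bandwidth_days) ** 2))
    return feature

# Hypothetical usage with a site-level holiday list:
# train['holiday_rbf'] = rbf_holiday_feature(train['timestamp'], ['2016-07-04', '2016-12-25'])
</code></pre>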