Title of Competition | Title of Writeup | User | Writeup |
---|---|---|---|
Recursion Cellular Image Classification | 8th place solution summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all on a great competition, and especially to top3. Please do share the NeurIPS content you present. </p>
<p>Shout out to Catalyst team <code>Reproducible and fast DL & RL</code>
<a href="https://catalyst-team.github.io/catalyst">Catalyst</a>
Huge thanks to Albumentations:
<a href="https://github.com/albu/albumentations">Albumentations</a>
And kudos to pytorch-toolbelt, for TTA on GPU:
<a href="https://github.com/BloodAxe/pytorch-toolbelt">PyTorch-toolbelt</a></p>
<h3>Main points are …</h3>
Preprocessing
<ul>
<li>Concat 6 channels</li>
<li>Controls from train and test added as additional classes</li>
<li>Treat each siRNA site separately for loading</li>
<li>Normalize images by experiment/plate/channel</li>
<li>Augmentations from Albumentations - flip, rotate, transpose, cutout holes, shift-scale-rotate</li>
<li>Apply mixup/cutout by batch (big help; see the sketch after this list)</li>
<li>5-fold CV</li>
<li>Built 256x256 and 512x512 based models</li>
</ul>
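<p>As a rough illustration of the "mixup by batch" item above (a minimal sketch, not the team's exact code; it assumes integer class targets and applies one Beta-distributed mixing coefficient to the whole batch):</p>
<p><code>
import torch
import torch.nn.functional as F

def mixup_batch(images, targets, num_classes, alpha=0.2):
    # draw a single lambda for the whole batch and mix it with a shuffled copy of itself
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    soft_targets = (lam * F.one_hot(targets, num_classes).float()
                    + (1.0 - lam) * F.one_hot(targets[perm], num_classes).float())
    return mixed_images, soft_targets
</code></p>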
Modelling
<ul>
<li>DenseNet121 was the workhorse; also used EfficientNet-B1, SE-ResNet101 and DenseNet169</li>
<li>Cosine LR policy and sawtooth policy worked pretty well, incl. Ralamb + Lookahead.</li>
<li>Apex mixed precision helped speed up training</li>
<li>Pseudo labels for test</li>
<li>Label smoothing of ~ 0.1</li>
<li>Continue training per individual experiment group (only worked without pseudo)</li>
</ul>
Post processing
<ul>
<li>No ArcFace - simply average train logits per siRNA at the experiment level, and use the cosine similarity of test logits to these averages to find the nearest siRNA</li>
<li>TTA - flip, rotate, transpose</li>
<li>Apply Plate Leak</li>
<li>Prediction balancing from Doodle @Pavel <a href="https://github.com/PavelOstyakov/predictions_balancing">https://github.com/PavelOstyakov/predictions_balancing</a></li>
</ul> |
The 3rd YouTube-8M Video Understanding Challenge | 5th place - Partial solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First off, I would like to thank the organizers for setting up another interesting competition. I hope that the Kaggle team will also do the right thing and clear the leaderboard of all the cheaters and competitors that broke the rules. </p>
<p>It has been a great pleasure working on this challenge with both Mikel and David. Unfortunately, we lacked time to go into details and explore "exotic" solutions.</p>
<p>Here I will describe part of the solution and Anokas will describe his part. We ended up doing a rank-sum ensemble for our final submission.</p>
<p>There are 3 parts to the solution presented here:
1. Video level network
2. 5 Frame network
3. Localization network</p>
<h2>Video level network - P_V</h2>
<p>This was basically based on last year’s 1st place solution - <a href="https://github.com/miha-skalic/youtube8mchallenge">link</a> . Essentially, for each fragment we would factor in a prediction based on the whole video sequence.</p>
<h2>5 frame network - P_5</h2>
<p>We trained three models: 1x DBoF, 2x VLAD on sequences from 2nd year data, sampling 5 frames. Then we fine-tuned the model on annotated fragments from this year.</p>
<h2>Localization network – P_L</h2>
<p>This network took in a sequence of frames and a target label. The target label would be passed to an embedding layer and then concatenated with the sequence of frames. The concatenated sequence would then be processed by an LSTM to predict, for each frame, whether the target label was present. Non-annotated frames would be masked out.
The downside of this approach was that we needed to run inference 1000 times: once for each target label.</p>
<h2>Combining the 3 models</h2>
<p>Multiplying the probabilities (weighted geometric mean) gave the best results.
Essentially the score for each fragment-class combination, p(fc), was computed as:
p(fc) = log(P_V) + 2/3 log(P_5-VLAD1) + 2/3 log(P_5-VLAD2) + 2/3 log(P_5-DBoF) + log(P_L(*|c))
From here on we would just sort fragments based on p(fc) and for each class c report the top fragments f.</p>
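<p>A minimal sketch of this log-space combination (the argument names are placeholders; each array is assumed to hold per-fragment, per-class probabilities):</p>
<p><code>
import numpy as np

def combine_scores(p_v, p5_vlad1, p5_vlad2, p5_dbof, p_l, eps=1e-8):
    # weighted geometric mean in log space; inputs have shape (num_fragments, num_classes)
    return (np.log(p_v + eps)
            + (2.0 / 3.0) * (np.log(p5_vlad1 + eps) + np.log(p5_vlad2 + eps) + np.log(p5_dbof + eps))
            + np.log(p_l + eps))

# for each class c, fragments are then sorted by the resulting score and the top ones reported
</code></p>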
<p>Detailed solution will be provided as workshop submission.</p> |
Predicting Molecular Properties | 35th Place Solution - Basic MPNN & First Medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I want to thank everyone who participated in this competition and the competition organizers. </p>
<p>This was only my 2nd competition and I am really happy to have achieved my first ever medal and my first ever silver medal! </p>
<p><strong>Overview</strong>
- Every coupling was treated as its own graph
- For the same molecule, graphs of 2 different couplings were different from each other.
- Used the MPNN from the Gilmer paper <a href="https://arxiv.org/abs/1704.01212">https://arxiv.org/abs/1704.01212</a>
- Used basic chemical features like atomic number and basic geometric features like angles and distances.
- Had same features for all types but different connectivity for 1JHX, 2JHX and 3JHX
- Most important part was not the model but how the molecular graph was connected together
- All geometric features were relative to the atoms at atom indices 0 and 1, plus the 1 or 2 other atoms that I found on the shortest path between them.</p>
<p><strong>Molecular Graph Representation</strong>
In the Gilmer Paper, a molecule is represented as a fully connected graph i.e. there are the default bonds (real bonds) and on top of that each atom is connected to each atom through a fake bond. In the paper, the point is to predict properties that belong to the whole graph and not to a particular edge or a node. So, in order to adapt to the nature of this competition, I used the following representation:</p>
<ul>
<li>Each coupling was a data point i.e. each coupling was its own molecular graph</li>
<li>If a molecule had N number of couplings, then all N graphs are different from each other</li>
</ul>
<p><em>Type 1JHX</em>
- Connected each atom to the 2 target atoms (atom index 0 and 1) on top of the default real bonds (note how this is not the same as the Gilmer paper where the graph is fully connected)
- All geometric features were calculated as relative to the 2 target atoms.</p>
<p><em>Type 2JHX</em>
- Found the atom on the shortest path between the 2 target atoms. So there were now 3 target atoms (atom index 0, atom index 1, atom on shortest path)
- Connected each atom to the 3 target atoms on top of the default real bonds.
- Features were calculated relative to all 3 target atoms e.g. distance & angle to atom index 0, atom index 1 and the atom on shortest path.</p>
<p><em>Type 3JHX</em>
- Found the 2 atoms on the shortest path between the 2 target atoms. So there were now 4 target atoms (atom index 0, atom index 1, 1st atom on shortest path, 2nd atom on shortest path)
- Connected each atom to the 4 target atoms on top of the default real bonds.
- Features were calculated relative to all 4 target atoms.</p>
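<p>A small sketch of how the shortest-path target atoms for 2JHX/3JHX could be found (assuming a per-molecule bond list and the networkx library; the write-up does not show the author's actual implementation):</p>
<p><code>
import networkx as nx

def intermediate_atoms(bonds, atom_index_0, atom_index_1):
    # bonds: list of (i, j) index pairs for the real chemical bonds of the molecule
    graph = nx.Graph(bonds)
    path = nx.shortest_path(graph, source=atom_index_0, target=atom_index_1)
    # drop the two coupled atoms themselves: 1 atom remains for 2JHX, 2 atoms for 3JHX
    return path[1:-1]
</code></p>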
<p>Also, I made all the graphs fully bidirectional. Using a fully bidirectional graph gave me a significant improvement over a one-directional graph which was used in the paper.</p>
<p><strong>Model</strong>
- The model was really basic with some additional layers and slightly larger dimensions, very similar to what is written here <a href="https://github.com/rusty1s/pytorch_geometric/blob/master/examples/qm9_nn_conv.py">https://github.com/rusty1s/pytorch_geometric/blob/master/examples/qm9_nn_conv.py</a>.
- I added very little Dropout and BatchNorm in the initial linear transformation layer which actually led to the model performing better.
- I experimented with adding Dropout in the MLP used by the NNConv and it showed promising results but they were too unstable so I decided to not go through with it.
- I tried adding an attention mechanism over the messages passed by the network but did not see an improvement in score (most likely implemented it incorrectly)
- I also tried using the node vectors of the target atoms only to predict the scc but this actually performed way worse (probably because the way I am representing my molecules does not translate well to using just the node vectors of a subset of nodes)
- I only trained a single model for each type (8 models total) so did not do any ensembling</p>
<p><strong>Train only data</strong>
Unfortunately, towards the end of the competition I was busy with some other work, so I did not get a chance to play around with the fc, pso, etc. features (which are available only for the train set). </p> |
Recursion Cellular Image Classification | 5th place solution. AdaBN[domain==plate] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I would like to thank Recursion and Kaggle for organizing this interesting challenge, and thank team Double strand for their contribution.</p>
<p>Here's a summary of my solution.</p>
<h3>In this competition we have a multi-source and multi-target domain dataset. And we have domain labels.</h3>
<h3>What is "domain" in this dataset?</h3>
<p>cell, experiment, or plate</p>
<p>My EDA suggests [domain==experiment]. But [domain==plate] worked better in practice. This is reasonable since batch effects and plate effects exist.</p>
<h3>How to use domain labels? Simply <a href="https://arxiv.org/abs/1603.04779">AdaBN</a>.</h3>
<p>In short, do not challenge your model(with BN layers) with cross-domain batches.</p>
<p>In training, use domain(plate) aware batch sampling.</p>
<p>In testing, use domain batch statistics in BN layers.</p>
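<p>A minimal sketch of plate-aware batch sampling (class and variable names are assumptions; the test-time half of AdaBN, i.e. using per-domain BN statistics, is not shown here):</p>
<p><code>
import random
from collections import defaultdict
from torch.utils.data import Sampler

class PlateBatchSampler(Sampler):
    # yields batches whose samples all come from the same plate (domain)
    def __init__(self, plate_ids, batch_size):
        self.batch_size = batch_size
        self.by_plate = defaultdict(list)
        for idx, plate in enumerate(plate_ids):
            self.by_plate[plate].append(idx)

    def __iter__(self):
        batches = []
        for indices in self.by_plate.values():
            random.shuffle(indices)
            batches += [indices[i:i + self.batch_size]
                        for i in range(0, len(indices), self.batch_size)]
        random.shuffle(batches)
        return iter(batches)

    def __len__(self):
        return sum((len(v) + self.batch_size - 1) // self.batch_size
                   for v in self.by_plate.values())

# usage: DataLoader(dataset, batch_sampler=PlateBatchSampler(plate_ids, batch_size=32))
</code></p>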
<p>With batch norm done right, this competition <strong>IMMEDIATELY</strong> becomes a regular classification challenge: training converges smoothly and I had consistent validation/LB scores (except for HUVEC-05).</p>
<p>Some results of my early experiments, ResNet50, 224x224 input, same hyperparameters
- random batch sampling, val acc 40+%
- sample batches in the same cell type, val acc 50+%
- sample batches in the same experiment, val acc 60+%
- sample batches in the same plate, val acc 70+%</p>
<h3>Model</h3>
<p>Sequential(BatchNorm2d(6), backbone, neck, head)</p>
<p>backbone: DenseNet201, ResNeXt101_32x8d, HRNet-W18, HRNet-W30</p>
<p>neck: gap or gap+bn</p>
<p>head: 5 fc layers (1 shared and 4 for different cells)</p>
<h3>Loss:</h3>
<p>ArcFaceLoss(s=64, m=0.5) for gap+bn neck</p>
<p>ArcFaceLoss(s=64, m=0.3) for gap neck</p>
<h3>Exemplar Memory</h3>
<p>In this dataset, we could get more supervision than siRNA labels.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F381412%2F27dd6f8424d2b8e2e3d48a55be5bf12a%2Fdataset.png?generation=1569561415969234&alt=media" alt=""></p>
<p><a href="https://arxiv.org/abs/1904.01990">Exemplar Memory</a> fits this structure perfectly.</p>
<p>Fine tuning with 19 exemplar memory modules (HUVEC-05 and 18 test experiments) gave me a ~1% LB boost, and HUVEC-05 validation accuracy increased from ~35% to ~46% (~65% with 277 linear assignment)</p>
<h3>Training</h3>
<p>1108-way classifier with treatment only.
input 512 -> random crop 384 -> random rot90 -> random hflip
loss = 0.5 * loss_fc_cell + 0.5 * loss_fc_shared</p>
<p>No pseudo labeling was used.</p>
<h3>Prediction</h3>
<p>input 512
Use fc_cell.
No TTA.
Averaging two sites.
lapjv for linear assignment.</p>
<h3>DenseNet201 results</h3>
<p>| | Public | Private | Public (leak) | Private (leak) |
| --- | --- | --- | --- | --- |
| train data only | 0.92620 | 0.97325 | 0.98307 | 0.99303 |
| + exemplar memory fine tune | 0.95531 | 0.98321 | 0.98691 | 0.99394 |</p>
<h3>Some interesting finding</h3>
<p>HUVEC-05 prediction of fc_shared is always better than fc_cell. What's wrong with this experiment?</p> |
Recursion Cellular Image Classification | 3rd place solution. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2Ffade7b8de4750f88c191d2d23fc6dfb8%2F2019-09-29%2012.13.24.png?generation=1569687562171949&alt=media" alt=""></p>
<p><a href="https://github.com/SeuTao/CellSignal-Cell-Image-Analysis">https://github.com/SeuTao/CellSignal-Cell-Image-Analysis</a></p> |
Recursion Cellular Image Classification | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all prize and medal winners!
Our brief solution summary:</p>
<p>1) heavy model ensemble
- 1108-way classification
- seresnext50, 101, densenet, efficientnet x 6C6, 5C6 4C6 channel selection x cross validation
- 512x512 input, 90rot+clip aug</p>
<p>2) solve linear sum assignment problem to make the most of 'the groups of 277 per plate restriction'
<a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/102905#latest-624588">https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/102905#latest-624588</a></p>
<p><code>
from scipy.optimize import linear_sum_assignment
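# plate_prob: matrix with one row per test sample on the plate and one column per
# candidate sirna (277 of them under the per-plate restriction); the assignment below
# gives each sample a unique sirna while maximizing the total probability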
plate_prob = plate_prob / plate_prob.sum(axis=0, keepdims=True)
row_ind, col_ind = linear_sum_assignment(1 - plate_prob)
</code></p>
<p>3) make pseudo labels and go back to 1 (with a few selected models like efficientnet)</p>
<p>The code can be found in <a href="https://github.com/ngxbac/Kaggle-Recursion-Cellular">https://github.com/ngxbac/Kaggle-Recursion-Cellular</a>
(mainly the 1) part)</p> |
APTOS 2019 Blindness Detection | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>My Submissions</h1>
<p>My best public LB: <code>0.842</code> (with quick submit, so no private LB available)
My FINAL submit v1, with public LB <code>0.838</code> , private LB <code>0.932</code> , is based on my 6 models with best local CV.
My FINAL submit v2, with public LB <code>0.840</code> , private LB <code>0.933</code> , with more variance in ensemble models.
My best single model, with public LB <code>0.840</code> , private LB <code>0.929</code>, which is EfficientNet-B7 with 224x224 input.</p>
<h1>Preprocessing</h1>
<h2>Crop From Gray (Both training and predicting)</h2>
<p>This function is shared in <a href="https://www.kaggle.com/ratthachat/aptos-updatedv14-preprocessing-ben-s-cropping">here</a> (search keyword <code>crop_image_from_gray</code> in the page, thanks <a href="/ratthachat">@ratthachat</a> for sharing!)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F448347%2F0405798d336f82857988977e629f108b%2F_20190908171800.png?generation=1567930710772532&alt=media" alt=""></p>
<h2>Apply Image Type (Only For Training)</h2>
<p>I assign an image type to every image in the training set (2015 & 2019) using the brightness of the image boundary.
From left to right in the following picture: <code>Type_0</code> , <code>Type_1</code> , <code>Type_2</code> </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F448347%2F0fd5e78191a5373707c4faf3c83f1ad6%2F_20190908211528.png?generation=1567944954429547&alt=media" alt=""></p>
<h2>Transform All Image to Type 2 (Only For Training)</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F448347%2F8439f5c3e2a27fd2d548e460eb2da46c%2F_20190908173231.png?generation=1567931590044112&alt=media" alt=""></p>
<h1>Augmentation</h1>
<p>After cropping & resizing all images to type_2, I heavily transform those images with:</p>
<p>Dihedral, RandomCrop, Rotation, Contrast, Brightness, Cutout, PerspectiveTransform, Clahe.</p>
<h1>Model</h1>
<p>I find that a large model trained on high resolution images is much more likely to overfit, due to the small size of the training set. So I use small models for high resolution / large models for low resolution.</p>
<ul>
<li>EfficientNet-B7 (224x224) LB 0.840</li>
<li>EfficientNet-B6 (240x240) LB 0.832</li>
<li>EfficientNet-B5 (256x256) LB 0.831</li>
<li>EfficientNet-B4 (320x320) LB 0.826</li>
<li>EfficientNet-B3 (352x352) *LB Don't Know</li>
<li>EfficientNet-B2 (376x376) LB 0.828</li>
</ul>
<p><strong>LB scores come from 5-fold testing x 8 Dihedral TTA</strong></p>
<p>Despite the difference in Public LB, all of these models have a similar local 5-fold CV around 0.933</p>
<p>↑ My FINAL submit v1 is simple avg these models.</p>
<p>And the FINAL submit v2 is based on v1, with the B2 & B3 models replaced by a B7 and a B5 that were trained on Ben's preprocessed data (image type, along with the other tricks, also applied).</p>
<ul>
<li>EfficientNet-B7 (224x224 Ben's) LB 0.834</li>
<li>EfficientNet-B5 (256x256 Ben's) LB 0.833</li>
</ul>
<p>local 5-fold CV of these 2 models is around 0.927</p>
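<p>For reference, the 8 Dihedral TTA used above corresponds to the symmetry group of the square (4 rotations x an optional horizontal flip). A minimal sketch, not the author's code:</p>
<p><code>
import torch

def dihedral_tta(model, batch):
    # average predictions over the 8 dihedral transforms of the input images
    preds = []
    for flip in (False, True):
        x = torch.flip(batch, dims=[-1]) if flip else batch
        for k in range(4):
            preds.append(model(torch.rot90(x, k, dims=[-2, -1])))
    return torch.stack(preds).mean(dim=0)
</code></p>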
<h1>Training Process</h1>
<p>Pretrain on 2015 full dataset (both train and test) for 25 epochs without validation. Then do 5-fold CV on 2019 dataset.</p>
<h1>Predicting Process</h1>
<p>Because of the 5-fold CV, I got 5 model files for each experiment.
For each experiment I do 5 models * 8 Dihedral TTA predictions.
<strong>I only do image preprocessing once for the 5*8=40 test passes, to save time.</strong> In this case the 40 test passes cost ~10 mins for predicting 1928 images.
With an ensemble of 6 models, the kernel took 50 mins for committing and 6 h for submitting.</p> |
APTOS 2019 Blindness Detection | 12th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to everyone, I am very happy to get my first gold medal! Here is my solution.</p>
<h2>models</h2>
<ul>
<li>efficientnet-b5</li>
<li>efficientnet-b6</li>
</ul>
<p>My final submission includes 80% regression models and 20% classification models.</p>
<h2>Data preprocessing and Augmentations</h2>
<p>I used two sets of data-crop augmentations:
- Crop the black background and resize to 320x320, keeping the aspect ratio.
- Randomly add black background to the image and resize to 288x288.</p>
<p>Both crop methods use: fliplr, flipud, random_angle_rotate, random_erase, random_blur, brightness_shift, color_shift,hue_shift and gaussian_noise.</p>
<h2>Training strategy</h2>
<ul>
<li>pretrain on 2015 data and finetune on 2019 data. Thanks to DrHB <a href="/drhabib">@drhabib</a>, I got a start from your methods. </li>
<li>5 fold CV(Select the two or three highest accuracy folds to submit)</li>
<li>Finetuning should preserve the optimizer's state from pretraining; it made a big difference for me.</li>
<li>TTA*2</li>
<li>threshold: [0.5, 1.5, 2.5, 3.5]</li>
<li>Classification models were ensembled by voting.</li>
</ul>
<h2>Pseudo labels</h2>
<p>Using pseudo labels gave me LB 0.840->0.842, but I was worried about overfitting, so I only gave a very low weight to the pseudo-label model. As it turned out, I was wrong.</p>
<h2>Conclusion</h2>
<ul>
<li>A good pretrain is helpful. </li>
<li>Pseudo labels are attractive and dangerous, but they're worth trying.</li>
<li>Hold out until the last minute!!!💪 </li>
</ul> |
APTOS 2019 Blindness Detection | 1st place solution summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to APTOS and Kaggle for hosting this interesting competition.
I also would like to thank those who generously contributed in the kernels and discussions in this competition, and to the top teams that shared solutions and findings in the 2015 competition. Those findings and solutions have greatly impacted my strategy.</p>
<p><strong>Validation Strategy</strong></p>
<p>One of the most popular topics in every competition are proper validation strategies. During the early stage, I tried using 2015 data (both train and test) as train set, and 2019 train data as validation set. Unfortunately, the validation results and public LB were very different, I was not able to make their performance correlate well. In some discussions and kernels, other participants were also reporting inconsistent performance between CV and LB. Because of this, I did not know how to move forward, so I shifted to some other competitions for a few weeks. When I came back to this competition, I made the decision to combine the whole 2015 and 2019 data as train set, and solely relied on public LB for validation.</p>
<p><strong>Preprocessing</strong></p>
<p>I don't think it's necessary to preprocess images to help with the modelling; the image quality is perfectly fine as input for deep neural networks. So, no special preprocessing, just plain resizing.</p>
<p><strong>Models and Input sizes</strong></p>
<p>My final submission was a simple average of the following eight models. Inceptions and ResNets usually blend well. If I could have two more weeks I would definitely add some EfficientNets. </p>
<p><code>
2 x inception_resnet_v2, input size 512
2 x inception_v4, input size 512
2 x seresnext50, input size 512
2 x seresnext101, input size 384
</code></p>
<p>The input size was mainly determined by observations in the 2015 competition that a larger input size brought better performance. Even though I did not find much benefit in going beyond 384 based on the public LB feedback, I still pushed the input size to the extreme because the private set might benefit.</p>
<p><strong>Loss, Augmentations, Pooling</strong></p>
<p>I used only <em>nn.SmoothL1Loss()</em> as the loss function. Other loss functions may work well too. I stuck to this single loss just to simplify the ensembling process.</p>
<p>For augmentations, the following were helpful</p>
<p><code>
contrast_range=0.2,
brightness_range=20.,
hue_range=10.,
saturation_range=20.,
blur_and_sharpen=True,
rotate_range=180.,
scale_range=0.2,
shear_range=0.2,
shift_range=0.2,
do_mirror=True,
</code></p>
<p>For the last pooling layer, I found the generalized mean pooling (<a href="https://arxiv.org/pdf/1711.02512.pdf">https://arxiv.org/pdf/1711.02512.pdf</a>) better than the original average pooling. Code copied from <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">https://github.com/filipradenovic/cnnimageretrieval-pytorch</a>.</p>
<p><code>
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.parameter import Parameter

def gem(x, p=3, eps=1e-6):
    return F.avg_pool2d(x.clamp(min=eps).pow(p), (x.size(-2), x.size(-1))).pow(1./p)

class GeM(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super(GeM, self).__init__()
        self.p = Parameter(torch.ones(1)*p)
        self.eps = eps
    def forward(self, x):
        return gem(x, p=self.p, eps=self.eps)
    def __repr__(self):
        return self.__class__.__name__ + '(' + 'p=' + '{:.4f}'.format(self.p.data.tolist()[0]) + ', ' + 'eps=' + str(self.eps) + ')'

model = se_resnet50(num_classes=1000, pretrained='imagenet')
model.avg_pool = GeM()
</code></p>
<p><strong>Training and Testing</strong></p>
<p>The training process can be divided into two stages. In the first stage, I routinely trained the eight models and validated each of them on the public LB. To get more stable results, models were evaluated in pairs (with different seeds); that's why I have 2x for each type of model. When probing the LB, I tried to reduce the degrees of freedom of the hyperparameters to alleviate overfitting; for example, to determine the best number of epochs for training I used a step size of five. The following are the optimized results after stage1 training:</p>
<p>inception_resnet_v2 public: 0.831 private: 0.927
inception_v4 public: 0.826 private: 0.924
seresnext50 public: 0.826 private: 0.931
seresnext101 public: 0.819 private: 0.923 (the 2nd best result, missing the best)
ensemble public: 0.844 private: 0.934</p>
<p>In the second stage of training, I added pseudo-labelled (soft version) public test data and two additional external datasets - Idrid and Messidor - to the stage1 trainset. The labels of Idrid also have five levels; to mitigate the labeling bias (my guess), the Idrid labels were smoothed by averaging the provided labels with the predicted labels from stage1 models. For the Messidor dataset, which only has four levels, I grouped the stage1 predicted soft labels by the provided groundtruth labels, calculated the mean of each group, and bounded the outliers using the mean values. For example, if the mean value of a group is 2.2, and within the same group an image has a stage1 prediction of 1.1, then the label is adjusted to 2.2-0.5=1.7. After the preparation of all the labels and data, each stage1 model was trained for 10 more epochs. Finally, the LB improved to public: 0.850 and private: 0.935. Hours before the deadline, I made the last shot by changing the qwk thresholds from [0.5, 1.5, 2.5, 3.5] to [0.7, 1.5, 2.5, 3.5] and private improved to 0.936 ...</p> |
The 3rd YouTube-8M Video Understanding Challenge | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks organizers for hosting this competition! Congratulations for all the top teams! I hope that you have enjoyed this challenge! 🎉🎉 🎉 With <a href="https://www.kaggle.com/daokouer">@daokouer </a> we had a great time exploring the interesting topic.</p>
<p>Here we will present briefly our solution for this challenge. For this temporal localization problem , we regarded it as video segment classification problem. We trained our models based on video segment feature with or without context information and predictions were made for each segments.</p>
<h2>Models</h2>
<p>We trained two types of the models: sequence model and frame level model. These two types of models made their decisions based on different parts of input.</p>
<h3>Sequence Modeling Model</h3>
<p>We used Transformer and BiGRU as our sequence models. The whole video feature was taken as input and predictions were made for each five frames. For sequence models we believed that they focused on long-term temporal dependency. </p>
<h3>Frame Level Model</h3>
<p>NeXtVLAD was used as the frame-level model. It took exactly five frames as input and output one prediction. We believed that with a limited receptive field, the NeXtVLAD focused more on the static features of the segment.</p>
<h2>Model Pre-train</h2>
<p>The models above had a great number of parameters to learn, but we only had a few segment-level labels. We thus wanted to take advantage of the huge training data with video-level labels. We used an EM-like process to make use of the training set. We initialized our model <code>f</code> using the same method as the training process in the official baseline code. During the E-step, we estimated the segment labels of the training set using model <code>f</code>. During the M-step, we trained a new model <code>f</code> using the generated labels and fine-tuned it on the segment labels. We ran two EM iterations in our experiments. <br>
We also tried several multi-instance learning methods to make use of the video labels, e.g. we performed max pooling over segment predictions and took it as the video prediction to calculate the loss. But we did not find a MIL method that outperformed the EM-like method. </p>
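<p>Schematically, the EM-like pre-training can be written as below (a sketch only; the training functions are placeholders for the authors' actual pipelines):</p>
<p><code>
def em_pretrain(train_baseline, predict_segment_labels, train_on_labels, finetune_on_segments,
                video_level_data, segment_level_data, num_iterations=2):
    # initialize f as in the official baseline training process
    model = train_baseline(video_level_data)
    for _ in range(num_iterations):
        # E-step: estimate segment labels of the training set with the current model f
        pseudo_labels = predict_segment_labels(model, video_level_data)
        # M-step: train a new f on the generated labels, then fine-tune on the real segment labels
        model = train_on_labels(pseudo_labels)
        model = finetune_on_segments(model, segment_level_data)
    return model
</code></p>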
<p>Please refer to workshop submission for detailed solution. </p> |
Recursion Cellular Image Classification | 10th place - Pure Magic Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners and thanks to the host and Kaggle hosted such an interesting competition. Huge thank to my teammates you are the greatest!</p>
<p><strong>Overview:</strong>
All our models were trained with 512x512x6 images. We sampled a random site for each sample per epoch. All our models were 2-headed.
First head: classification head had an output with 1139 neurons,
Second head: embeddings.</p>
<p><strong>Challenges:</strong>
Validation
Specific experiments like U2OS-4 failed </p>
<p><strong>Validation for CNNs:</strong>
We noticed that specific experiment types perform very differently from others. We chose the hardest experiment for our model and built the validation based on it. </p>
<p><strong>Training Augmentations:</strong>
- flips (horizontal, vertical)
- rot90
- transpose
- shift (0.25) with 101 reflections
- rot360 (-180, 180)
- cutout (32x32, 2 holes)
- noise (gaussian, localvar, poisson, salt&pepper, speckle)
- clahe
- gamma (0.9,1.1)</p>
<p><strong>Data Pre-Processing:</strong>
per channel (img - img.mean()) / (img.std() + 1e-6)
We noticed that due to heavy augmentations we had a true_division warning. This warning caused batchnorms to feel bad, so we did this trick to prevent zero division.</p>
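<p>A minimal sketch of the per-channel standardization above (assuming HxWx6 float images):</p>
<p><code>
import numpy as np

def normalize_per_channel(img, eps=1e-6):
    # standardize each of the 6 channels independently
    img = img.astype(np.float32)
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True)
    return (img - mean) / (std + eps)
</code></p>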
<p><strong>Model training:</strong>
We train our models with NVidia Apex and Pytorch 1.2 on 8x1080ti GPU server for 110 epochs. The training takes 1-3 days depending on the model.
We did not use oversample.
Because of the long iteration, we didn’t use any K-Fold training. We wanted to train it closer to the end of the competition, but we didn’t have much time. </p>
<p><strong>Optimizer:</strong> SGD</p>
<p><strong>Scheduler:</strong> init LR 0.1,
5 epochs warmup,
drop LR every 40 epochs with factor 0.1</p>
<p><strong>Loss Functions:</strong>
Smooth CrossEntropy for classification, Center Loss for embeddings. </p>
<p><strong>Model:</strong>
Our final model is Senet154 trained on train data + pseudo-labels.
To produce pseudo-labels we used an ensemble of SeNet154, SeResnext50, SeResnext101, Polynet, EfficientNet-b6, ResNeXt101-wsl. </p>
<p><strong>Test time augmentations:</strong>
output = (model(img1) + model(img2) + model(img1.flip(2)) + model(img1.flip(3)) + model(img2.flip(2)) + model(img2.flip(3)) / 6.</p>
<p><strong>Post-processing:</strong>
Hungry experiment re-calibration and plate re-calibration
<a href="https://www.kaggle.com/c/quickdraw-doodle-recognition/discussion/73803#latest-438270">https://www.kaggle.com/c/quickdraw-doodle-recognition/discussion/73803#latest-438270</a> </p>
<p><strong>What didn’t work or had about the same performance:</strong>
- Mix-up, manifold mixup
- ArcFace
- Last stride 1
- Focal Loss
- XGBoost ensembling
- Metric search based on the embeddings distances
- GeM
- Different optimizers like Adam, Radam, Ranger, etc.
- ResNeXt101-wsl performed worse than expected. </p> |
APTOS 2019 Blindness Detection | 7th place solution w/ code [Catalyst, Albumentations] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I joined this competition with the purpose of securing a solo gold medal, which is required to obtain a GM badge. Solving Kaggle challenges alone is hard, and I'm grateful to all who write post-mortem solutions. It's cool to learn others' approaches and realize we all came to more or less similar solutions. Here are my 5 cents on this competition:</p>
<p>Code is available here: <a href="https://github.com/BloodAxe/Kaggle-2019-Blindness-Detection">https://github.com/BloodAxe/Kaggle-2019-Blindness-Detection</a></p>
<p><strong>TLDR</strong>
0. SeResNext50, SeResNext101, InceptionV4
1. Pre-train models on data from 2015, validate on this competition's data and Idrid (test)
2. Train on this competition's data, Idrid (train) and Messidor
3. Pseudo-label test dataset, Messidor 2, previous competition's data
4. Fine-tune models
5. Predict with horizontal flip TTA</p>
<h2>Preprocessing</h2>
<p>I've tried different preprocessing methods, including Ben's and some in-house ones based on CLAHE, dropping the red channel and spherical image unwarping. <em>All of them failed to outperform using images as-is</em>. My explanation is that the models we all used here are way more complex (in number of parameters) and have much bigger capacity and generalization power, so preprocessing has a vanishing impact.</p>
<p>My preprocessing pipeline therefore was very simple:</p>
<ul>
<li>Crop image to get rid of black regions outside of eye's boundary</li>
<li>Pad image if needed to square size</li>
<li>Resize to 512x512 pixels (All models were trained in this resolution)</li>
</ul>
<h2>Models</h2>
<p>I had an impression that simple average pooling would not be enough to detect disease cases like "Has hemorrhages in all 4 quadrants". So I experimented with more advanced head designs:</p>
<ul>
<li>Global weighted average pooling</li>
<li>Concatenation of average and max pooling</li>
<li>Coord-Conv + FPN and max-pooling from each FPN layer</li>
<li>Coord-Conv + LSTM pooling </li>
</ul>
<p>Most heads performed equally or worse than global average pooling (GAP), so I used GAP in my final models ensemble.</p>
<h2>Task formulation</h2>
<p>I started with a classification problem, then switched to a classical regression problem and later ended up with an ordinal-like problem. Unlike some authors who suggested using encoded target vectors, my approach was a bit different - my final linear layer predicted a tensor of 4 elements with sigmoid activation followed by summation. By formulating the problem like this I was able to use MSE loss and still have predictions bounded at [0;4]. To me, this problem formulation worked best.</p>
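<p>A minimal sketch of such a head (assumed names; the backbone that produces the features is omitted):</p>
<p><code>
import torch
import torch.nn as nn

class OrdinalHead(nn.Module):
    # predict 4 sigmoid "steps" and sum them, so the output is bounded to [0, 4]
    def __init__(self, in_features):
        super().__init__()
        self.fc = nn.Linear(in_features, 4)

    def forward(self, features):
        return torch.sigmoid(self.fc(features)).sum(dim=1)

# trained with a regression loss (e.g. nn.MSELoss()) against the integer grades 0..4
</code></p>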
<h2>Loss functions</h2>
<p>For classification, I experimented with soft-CE (label smoothing), which was better than plain CE, and focal + kappa, which was better than soft-CE.
For regression I started with MSE, then switched to WingLoss.
For ordinal regression I used Huber loss (aka smooth L1 loss) but in the end used Cauchy loss.</p>
<h2>Training</h2>
<p>To increase batch size I used mixed-precision training in fp16 using Apex and 3x1080Ti. </p>
<p>To prevent over-fitting of my models, my augmentation pipeline was quite a heavy one. Thanks to the <a href="https://github.com/albu/albumentations">Albumentations</a> package, it was extremely easy to combine augmentations and use them in my training loop:</p>
<p><code>
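# assumes the usual imports: import cv2; import albumentations as A; image_size is a (height, width) tuple
# ZeroTopAndBottom, IndependentRandomBrightnessContrast, FancyPCA and ChannelIndependentCLAHE
# appear to be custom transforms from the author's repository rather than stock Albumentations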
return A.Compose([
A.OneOf([
A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.1,
rotate_limit=15,
border_mode=cv2.BORDER_CONSTANT, value=0),
A.OpticalDistortion(distort_limit=0.11, shift_limit=0.15,
border_mode=cv2.BORDER_CONSTANT,
value=0),
A.NoOp()
]),
ZeroTopAndBottom(p=0.3),
A.RandomSizedCrop(min_max_height=(int(image_size[0] * 0.75), image_size[0]),
height=image_size[0],
width=image_size[1], p=0.3),
A.OneOf([
A.RandomBrightnessContrast(brightness_limit=0.5,
contrast_limit=0.4),
IndependentRandomBrightnessContrast(brightness_limit=0.25,
contrast_limit=0.24),
A.RandomGamma(gamma_limit=(50, 150)),
A.NoOp()
]),
A.OneOf([
FancyPCA(alpha_std=4),
A.RGBShift(r_shift_limit=20, b_shift_limit=15, g_shift_limit=15),
A.HueSaturationValue(hue_shift_limit=5,
sat_shift_limit=5),
A.NoOp()
]),
A.OneOf([
ChannelIndependentCLAHE(p=0.5),
A.CLAHE(),
A.NoOp()
]),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5)
])
</code></p>
<p>For the training, I used the PyTorch framework and the <a href="https://github.com/catalyst-team/catalyst">Catalyst</a> DL framework, with just a bit of extra code for adding Kappa metrics.
It may look like a promotion of these libraries and, in fact, it is, since I contribute to both ;) No, really. I strongly recommend you try them out. There are many alternatives to fast.ai )</p>
<h2>What didn't work</h2>
<ul>
<li>I had really high hopes for Unsupervised Data Augmentation (<a href="https://arxiv.org/abs/1904.12848">https://arxiv.org/abs/1904.12848</a>). I even extended their approach for the regression problem. For me it didn't work in terms of LB score. However it may be a good contribution to Catalyst.</li>
<li>Disease augmentation. I tried to artificially generate aneurysms, cotton wool spots and hemorrhages and change the diagnosis for the augmented image. This didn't work at all. Probably my fakes were too obvious.</li>
<li>Mixup / MixMatch. Did not work at all</li>
<li>EfficientNet. Wasted a couple of days on training it from scratch, but without success.</li>
<li>Heavy models (Senet154, Resnext152, Densenet201). Even with 50% dropout and L2 regularization of 1e-3, these monsters were over-fitting badly.</li>
<li>Stacking. None of KNN, LogisticRegression, or Trees worked for me at all. Simple averaging of regression scores worked best.</li>
</ul>
<h2>What did work</h2>
<ul>
<li>Pre-training on 2015 data</li>
<li>Heavy augmentations</li>
<li>Pseudo-labeling AND fine-tuning of head ONLY.</li>
<li>RAdam. This became my default optimizer.</li>
</ul>
<h2>Summary</h2>
<p>The final ensemble was an average of the predictions of 4 models: SeResNext50, SeResNext101, InceptionV4 + InceptionV4 (fine-tuned on pseudo-labeled data)
After the private LB was revealed, I realized I had a submission with optimized thresholds that scored at Top 4. Am I disappointed? No. At the time of choosing final submissions I did not pick it, considering it too risky.</p>
<p>Am I satisfied with a results? Yes.
Can I do better? Yes.</p>
<p>So see you in the next competitions.</p> |
APTOS 2019 Blindness Detection | 142nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to all the winners. I am gonna describe all the things that we tried and what worked for us and what didn't. </p>
<p><code>What didn't work in our case?</code>
1. We experimented with a lot of different models. Starting from ResNet-50 to get a baseline, we tried all variants of ResNets including ResNeXt but none of them gave us a good score on the leaderboard.
2. We approached the problem from both ends. I tried solving it as a classification problem while <a href="https://www.kaggle.com/konradb">Konrad</a> and <a href="https://www.kaggle.com/abhishek">Abhishek</a> approached it as a regression problem. After a while, I realized that treating it as a classification problem isn't providing a boost to the score.
3. Ben's preprocessing: In fact, all models where we used this resulted in a very low score.
4. Label smoothing didn't work very well for classification models</p>
<p><code>What worked for us?</code>
1. For preprocessing, we found out that cropping out the black area in the original images is good enough.
2. Heavy augmentation: Random horizontal and vertical flips weren't enough.
3. <code>EfficientNets</code>, especially <code>B4</code> and <code>B5</code>, proved more effective than anything else. We ended up using <code>B4</code> only
4. We used both <code>old competition</code> data as well as <code>new competition</code> data but not blindly. Both the datasets have huge <code>class imbalance</code> with classes <code>0</code> and <code>2</code> as majority classes. We tried to balance the classes as much as possible. </p>
<p><strong>One crazy idea</strong>
For the last attempt, we tried a crazy idea regarding <code>sampling</code> of the classes from the old data. We felt positive that it would work, but we were never very sure. We randomly sampled the old dataset in such a fashion that its class distribution looked like this:
<code>
3 2087
4 1914
2 1000
1 1000
0 1000
</code>
We didn't want to throw away any data point in the new dataset. So, we <code>undersampled</code> the classes that are in the <code>majority</code> in the new dataset. Doing this induces a bias in the model during the <code>pretraining</code> phase towards the classes <code>3</code> and <code>4</code>. Once we switch to <code>fine-tuning</code> the model on the new dataset, the distribution is reversed and the model should be able to correct the bias. With this distribution and heavy augmentation, our best model, <code>EfficientNet-B4</code>, with 5-fold TTA scored <code>0.920</code> on the final leaderboard. </p>
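<p>A small sketch of this resampling of the old data (the column name and seed are assumptions):</p>
<p><code>
import pandas as pd

def resample_old_data(old_df, label_col="diagnosis", seed=42):
    # undersample the 2015 data to the class counts listed above
    target_counts = {3: 2087, 4: 1914, 2: 1000, 1: 1000, 0: 1000}
    parts = []
    for label, n in target_counts.items():
        subset = old_df[old_df[label_col] == label]
        parts.append(subset.sample(n=min(n, len(subset)), random_state=seed))
    return pd.concat(parts).reset_index(drop=True)
</code></p>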
<p><code>What we didn't try?</code>
1. We didn't try pseudo labelling.
2. We didn't ensemble our models, especially all variants of <code>EfficientNets</code>.</p>
<p>We didn't try these things because during the last month all three of us had too much work on our end. I was diagnosed with <code>dengue</code> and couldn't contribute fully in between. Nevertheless, we are happy that with a <code>single model</code> we made a jump of <strong>655</strong> positions on the leaderboard and didn't end badly. Thanks to my amazing teammates @konradb and @abhishek </p> |
APTOS 2019 Blindness Detection | 76nd place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>Team</h3>
<p>Mamat Shamshiev (<a href="/mamatml">@mamatml</a>), Insaf Ashrapov (<a href="/insaff">@insaff</a>), Mishunyayev Nikita (<a href="/mnikita">@mnikita</a>)</p>
<h3>Data</h3>
<p>2015 competition data was used for pretraining all our models. Without it our models performed much worse. We used different techniques: one was to first train on the old data, then finetune on the new train set; another was to train on both datasets, then finetune on the new train data. Besides, starting finetuning by freezing all layers and training only the last FC layer gave us more stable results.</p>
<h3>Models and Preprocessing</h3>
<p>From the beginning, EfficientNet outperformed other models. Using fp16 (available in Kaggle kernels) allowed us to use a bigger batch size, which sped up training and inference.
Models used in the final submission:</p>
<p>1) EfficientNet-B5 (best single model): 224x224 (tta with Hflip, preprocessing - crop_from_gray, circle_crop, ben_preprocess=10)
2) EfficientNet-B4: 256x256 (tta with Hflip, preprocessing - crop_from_gray, circle_crop, ben_preprocess=20)
3) EfficientNet-B5: 256x256 (tta with Hflip, preprocessing - crop_from_gray, circle_crop, ben_preprocess=30)
4) EfficientNet-B5: (256x256) without specific preprocess, two models with different augmentations. </p>
<p>We tried bigger image sizes but it gave worse results. EfficientNet-B2, B3, B6 gave worse results as well or didn't improve our blend.</p>
<h3>Augmentations</h3>
<p>From <a href="https://github.com/albu/albumentations">Albumentations </a>library:
<em>Hflip, VFlip, RandomScale, CenterCrop, RandomBrightnessContrast, ShiftScaleRotate, RandomGamma, RandomGamma, JpegCompression, HueSaturationValue, RGBShift, ChannelShuffle, ToGray, Cutout</em></p>
<h3>Training</h3>
<ul>
<li>First 3 models were trained using <a href="https://github.com/catalyst-team/catalyst">Catalyst</a> library and the last one with FastAi, both of them work on top of Pytorch.</li>
<li>We used both ordinal regression and regression. Models with classification tasks weren't good enough to use.</li>
<li>Adam with OneCycle was used for training. WarmUp helped to get more stable results. RAdam, Label smoothing didn't help to improve the score.</li>
<li>We tried to use the leak investigated <a href="https://www.kaggle.com/miklgr500/leakage-detection-about-8-test-dataset">here</a> and <a href="https://www.kaggle.com/konradb/adversarial-validation-quick-fast-ai-approach">here</a> by fixing the output results. Almost 10% of the public test data was part of the train set. Results dropped significantly, which means the training data annotations were pretty bad.</li>
<li>We tried kappa coefficient optimization; it didn't give a reliable improvement on public LB, but it would have helped us on private by almost +0.003.</li>
</ul>
<h3>Hardware</h3>
<p>We used 1x*2080, 1x* Tesla v40, 1x*1070ti, kaggle kernels</p>
<h3>Ensembling</h3>
<p>A weighted average based on public LB gave .823 on the private LB.
Using more models with different image sizes and augmentations gave a lower result on public LB, but would have pushed us up more than 40 places. Unfortunately, we didn't pick it as one of our two final submissions.</p>
<h3>Link to solution</h3>
<p><a href="https://github.com/MamatShamshiev/Kaggle-APTOS-2019-Blindness-Detection">https://github.com/MamatShamshiev/Kaggle-APTOS-2019-Blindness-Detection</a>
<a href="https://github.com/Mishunyayev-Nikita/Kaggle-Aptos-Blindness-Detection-2019-solution">https://github.com/Mishunyayev-Nikita/Kaggle-Aptos-Blindness-Detection-2019-solution</a></p> |
APTOS 2019 Blindness Detection | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to everyone, Here is my solution, not complicated.</p>
<h2>models</h2>
<ul>
<li>efficientnet-b0</li>
<li>efficientnet-b1</li>
<li>efficientnet-b2</li>
<li>efficientnet-b3</li>
</ul>
<p>I just combined the classification and regression tasks together; a multi-task loss is maybe harder to overfit. But the test phase uses only the regression prediction; the code looks like this:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1270655%2F600b35c1bac596b5636e9d40c8b2c8cd%2Fcode.jpg?generation=1567923622631376&alt=media" alt=""></p>
<h2>Data preprocessing and Augmentations</h2>
<ul>
<li>No special preprocessing method; I just cropped the black background from the images.</li>
<li>image_size: 384x384</li>
<li>Just Fliplr, Flipud, Randomrotate(0, 360), zoom(1.0, 1.35) and a self-designed perspective transformation method.</li>
</ul>
<h2>Training strategy</h2>
<ul>
<li>5 fold CV</li>
<li>Based on the better public LB, I did not pretrain on 2015 data and then finetune on 2019 data; I just combined the 2015 and 2019 data together and trained the model on all of it, validating on the 2015 private test data and the 2019 validation data.</li>
<li>lr_schedule: adam optimizer, lrs=[5e-4, 1e-4, 1e-5, 1e-6], and the position of lr decays is [10, 16, 22], train 25 epochs.</li>
<li>2*TTA</li>
<li>threshold: [0.5, 1.5, 2.5, 3.5]</li>
</ul>
<h2>Pseudo labels</h2>
<ul>
<li>I think pseudo labels were the most important key to winning this competition. My thinking was: because we don't know the distribution of the 2019 private test set, it might be similar to the 2015 data, the 2019 training data, or the 2019 public data. So I could combine data from three different distributions to train the model, and the model can then recognize all three distributions.</li>
<li>I didn't use all the 2019 public data as pseudo-label samples; I chose the higher-confidence samples, with code like this (see also the sketch after this list):
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1270655%2F8579cf548aeb7a64be574dc56236bf9e%2Fpseudo_labels.jpg?generation=1567924747948513&alt=media" alt=""></li>
<li>Pseudo labels helped me improve the local MSE loss and QWK.</li>
</ul>
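<p>The exact selection rule is in the screenshot above; one plausible sketch of picking high-confidence pseudo-labels from a regression model (an assumption, not necessarily the author's criterion) is to keep samples whose prediction lies close to an integer grade:</p>
<p><code>
import numpy as np

def select_confident(image_ids, regression_preds, margin=0.25):
    # keep samples whose averaged regression output is within `margin` of an integer grade
    preds = np.asarray(regression_preds, dtype=np.float32)
    rounded = np.rint(preds)
    keep = np.abs(preds - rounded) < margin
    return [(img_id, int(label)) for img_id, label, ok in zip(image_ids, rounded, keep) if ok]
</code></p>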
<h2>Conclusion</h2>
<ul>
<li>The training data is strange and most Kagglers met some strange problems in this competition, like an unstable LB and no clear relation between local CV and public LB, and so on...</li>
<li>But the final results show that we can still trust our local CV.</li>
</ul> |
APTOS 2019 Blindness Detection | 58 place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everybody and congratulations to all of the participants!</p>
<p><strong>Data</strong>
We used resized images from 2015 competition to pretrain our models.
Also, we added IDRID and MESSIDOR data to cross-validation while finetuning.</p>
<p><strong>Models</strong>
We tried a lot of models from the start of the competition, but had no luck with either small (ResNet34/DenseNet121) or very large models (EfficientNet-B6/B7, ResNe(x)t101, DenseNet201)</p>
<p>Best scoring models were of medium size and medium resolution:
EfficientNet-B4: 380x380
EfficientNet-B5: 456x456
SE-ResNeXt50: 512x512
SE-ResNeXt50: 380x380
We got no improvement after 512x512 images.</p>
<p><strong>Preprocessing</strong>
At first, we used circle cropping and Ben's preprocessing for a while, but then started to experiment with tight cropping and corner masking, which was worse on every model we tried (both CV and LB).
A week before the competition deadline we realized that we were losing too much information, and switched to no preprocessing and hard augmentations instead.</p>
<p><strong>Augmentations</strong>
We used a lot of augmentations, all from Albumentations library:</p>
<p>OpticalDistortion, GridDistortion, PiecewiseAffine, CLAHE, HorizontalFlip, VerticalFlip, RandomRotate90, ShiftScaleRotate, RGBShift, RandomBrightnessContrast, AdditiveGaussianNoise, GaussNoise, MotionBlur, MedianBlur, Blur, Sharpen, Emboss, RandomGamma, ToGray, CoarseDropout, Cutout.</p>
<p><strong>Training</strong>
We used multi-target learning (classification, ordinal regression and regression), and their result was transformed to regression combined with a linear layer (initialized with 0.3333 weights) to get regression output.</p>
<p>At first, we trained only on the current data, but then switched to pretraining on 2015 dataset.</p>
<p>The flow is as follows:
Pretrain on 2015 dataset (20 epochs, SGD+Nesterov+CosineLR, Label smoothing)
Train 5-fold CV on 2019 data, IDRID and MESSIDOR combined (75 epochs, RAdam, CosineLR, Label smoothing)</p>
<p>Every training stage was made of three substages:
- Freeze model body and head combiner, train for 5 epochs.
- Unfreeze body, train N epochs.
- Freeze everything, unfreeze head combiner, train for 5 epochs.
If we trained everything together, we always got uneven training of the heads.</p>
<p>Also, we added random noise in the range (-0.3, 0.3) to the regression labels, which greatly reduced overfitting.</p>
<p><strong>Hardware</strong>
We used server from FastGPU.net with 4xV100, that greatly reduced our experiment cycle length. </p>
<p><strong>Ensembling</strong>
We picked model by their performance on CV and holdout and did not orient on LB at all.
Our best performing solution was an ensemble of 20 models (4 models x 5 folds) with 10xTTA (hflip, vflip, transpose, rotate, zoom) combined with 0.25-trimmed mean. This ensemble performed worse both on holdout and LB, but we decided to choose with hope that it will generalize better (what it did!). </p> |
Recursion Cellular Image Classification | [Boring] 131 place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone and congratulations to all the participants!
Thanks organizer for a great opportunity to learn!</p>
<p>We rolled into this competition very late (6 days before the competition end) and tried to squeeze something from the pipeline we used in APTOS 2019 Blindness Detection.
<a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/108007">https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/108007</a></p>
<p>Actually, this performed not as bad as we expected!</p>
<p><strong>Models</strong>
We had time to train only two models, they were:
- 1-fold ResNet34, 512x512
- 3-fold SEResNeXt50, 320x320</p>
<p>Single ResNet34 was better than ensemble of SEResNeXt50s. Probably, due to the higher resolution.
As final solution, we ensembled all 4 models. </p>
<p><strong>Preprocessing</strong>
We used 6 channel images, normalized by channel with mean and std of non-black parts.
To make them compatible with an ImageNet-pretrained model, we just added a single convolution layer mapping 6 channels to 3.</p>
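<p>In PyTorch terms this amounts to something like the following; the 1x1 kernel and the ResNet34 backbone are illustrative assumptions, the post only mentions a single 6-to-3-channel convolution:
<code>python
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet34(pretrained=True)       # expects 3-channel input
model = nn.Sequential(
    nn.Conv2d(6, 3, kernel_size=1, bias=False),   # learnable mix of the 6 channels
    backbone,
)
</code></p>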
<p><strong>Augmentations</strong>
We used standard augmentation, not that hard. All from Albumentations library:
CLAHE, HorizontalFlip, VerticalFlip, RandomRotate90, ShiftScaleRotate, RGBShift, RandomBrightnessContrast, Blur, Sharpen, RandomGamma.</p>
<p><strong>Training</strong>
Just usual finetuning from ImageNet.
SGD+Nesterov+Cosine Annealing LR for 70 epochs.</p>
<p>For folds, we used just random split with no stratification.
In hindsight, we should have split by experiment...</p>
<p><strong>Postprocessing</strong>
We used only the leak with 277 siRNA per plate, thanks <a href="/zaharch">@zaharch</a> for his awesome kernel!
<a href="https://www.kaggle.com/zaharch/keras-model-boosted-with-plates-leak/">https://www.kaggle.com/zaharch/keras-model-boosted-with-plates-leak/</a></p>
<p><strong>Hardware</strong>
This time we used a server with a single V100 from FastGPU.net. Thanks to them!</p>
<p><strong>Conclusion</strong>
Overall, I'm glad that we tried. Probably, with more time and dedication, we could have achieved better results. However, it was cool to see that a pipeline from a different competition can score this high with almost no understanding of the subject area and very little understanding of the competition itself.</p>
<p>Happy kaggling!</p> |
SIIM-ACR Pneumothorax Segmentation | [solution] 10-th place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The core of the pipeline is still quite similar to my <a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/99440">previous post</a>.</p>
<p>One major change is the split of the problem into segmentation and classification. Also, after Konstantin joined we tried more complicated models, and a lot of them worked better than ResNet34.</p>
<p>Now, when I see that most of the teams tackled the problem jointly, I start to believe that having an ensemble of both could significantly increase the score.</p>
<p><strong>Common:</strong>
Data split: CV5
Optimizer: Adam
Scheduler: Reduce lr on plateau
Augmentations: relatively aggressive: ShiftScaleRotate, Grid- and Elastic-transformation, GaussianNoise</p>
<p><strong>Classification:</strong>
Models: se_resnext101 (2 snapshots), senet154
Resolution: 768x768
Loss: BCE
Additional features: TTAx2(hlip), pseudo labeling, gradient accumulation (bs = 100-200)</p>
<p>The key here is the model selection. Instead of focusing on accuracy, I focused on the F0.5 metric with a fixed threshold of 1.0. The motivation is quite simple: the average dice of the segmentation model is around 0.58. It means that a correctly guessed negative instance contributes 1 to the score, while a correctly guessed positive instance contributes only about 0.58.</p>
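<p>A sketch of this selection criterion; here the fixed threshold of 1.0 is assumed to be applied to the classifier's raw logit, which is an assumption on the exact setup:
<code>python
import numpy as np
from sklearn.metrics import fbeta_score

def selection_score(y_true, logits, threshold=1.0):
    """F0.5 favors precision over recall on the positive (pneumothorax) class."""
    y_pred = (np.asarray(logits) >= threshold).astype(int)
    return fbeta_score(y_true, y_pred, beta=0.5)
</code></p>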
<p><strong>Segmentation:</strong>
Models: Unets: dpn98, se_resnet101, se_densenet121 (2 snapshots of each)</p>
<p>The train process consist of 4 stages:
1. Loss: BCE + dice; size: 512x512
2. Loss: BCE + dice; size: 1024x1024
3. Loss: BCE; with soft pseudo labels; size: 1024x1024
4. Loss: symmetric lovasz; size: 1024x1024</p>
<p>Additional features: TTAx6(hlip + rescale), gradient accumulation (bs = 50-100)</p>
<p><strong>What did not work:</strong>
- Unets with other encoders worked worse (resnet34, resnet50, dpn107, dpn131, se_resnext50_32x4d, se_resnext101_32x4d, senet154). senet154 worked much better than other models on size 512x512, but completely failed during scaling to 1024x1024.
- Lookahead optimizer didn't improve the optimization process (<a href="https://arxiv.org/abs/1907.08610">https://arxiv.org/abs/1907.08610</a>)
- Other losses for classification: FocalLoss, SoftF1Loss. The accuracy with FocalLoss was better than with BCE, but the F0.5 metric was worse. </p>
<p>The code: <a href="https://github.com/SgnJp/siim_acr_pneumothorax">https://github.com/SgnJp/siim_acr_pneumothorax</a></p> |
APTOS 2019 Blindness Detection | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the competitors!</p>
<p>local code: <a href="https://github.com/4uiiurz1/kaggle-aptos2019-blindness-detection">https://github.com/4uiiurz1/kaggle-aptos2019-blindness-detection</a>
kernel: <a href="https://www.kaggle.com/uiiurz1/aptos-2019-14th-place-solution">https://www.kaggle.com/uiiurz1/aptos-2019-14th-place-solution</a></p>
<h2>Preprocessing</h2>
<p>I used only Ben's crop (scale radius).</p>
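<p>For reference, the widely shared "scale radius" preprocessing from Ben Graham's solution looks roughly like this; the target radius value is an assumption:
<code>python
import cv2

def scale_radius(img, target_radius=300):
    """Estimate the fundus radius from the middle row of the image and rescale
    so that every eye ends up with roughly the same radius."""
    mid_row = img[img.shape[0] // 2, :, :].sum(axis=1)
    radius = (mid_row > mid_row.mean() / 10).sum() / 2
    scale = target_radius / radius
    return cv2.resize(img, None, fx=scale, fy=scale)
</code></p>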
<h2>Augmentation</h2>
<p>Output image size is 256x256.
<code>python
train_transform = transforms.Compose([
transforms.Resize((288, 288)),
transforms.RandomAffine(
degrees=(-180, 180),
scale=(0.8889, 1.0),
shear=(-36, 36)),
transforms.CenterCrop(256),
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomVerticalFlip(p=0.5),
transforms.ColorJitter(contrast=(0.9, 1.1)),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
</code></p>
<h2>1st-level models (run on local)</h2>
<ul>
<li>Models: SE-ResNeXt50_32x4d, SE-ResNeXt101_32x4d, SENet154</li>
<li>Loss: MSE</li>
<li>Optimizer: SGD (momentum=0.9)</li>
<li>LR scheduler: CosineAnnealingLR (lr=1e-3 -> 1e-5)</li>
<li>30 epochs</li>
<li>Dataset: 2019 train dataset (5-folds cv) + 2015 dataset (like <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/97860#581042">https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/97860#581042</a>)</li>
</ul>
<h2>2nd-level models (run on <a href="https://www.kaggle.com/uiiurz1/aptos-2019-14th-place-solution">kernel</a>)</h2>
<ul>
<li>Models: SE-ResNeXt50_32x4d, SE-ResNeXt101_32x4d (1st-level models' weights)</li>
<li>Loss: MSE</li>
<li>Optimizer: RAdam</li>
<li>LR scheduler: CosineAnnealingLR (lr=1e-3 -> 1e-5)</li>
<li>10 epochs</li>
<li>Dataset: 2019 train dataset (5-folds cv) + 2019 test dataset (<strong>public + private</strong>, divided into 5 and used different data each fold. )</li>
<li>Pseudo labels: weighted average of 1st-level models</li>
</ul>
<h2>Ensemble</h2>
<p>Finally, averaged 2nd-level models' predictions.</p>
<ul>
<li>PublicLB: 0.826</li>
<li>PrivateLB: 0.930</li>
</ul>
<p>Fortunately, I selected the best submission.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F849476%2Ff697521599e135faf71027923c90b0f2%2Fmy_submissions.png?generation=1568189226782044&alt=media" alt=""></p> |
APTOS 2019 Blindness Detection | Congrats to the winners ! 137th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners! Actually I expected a bigger shakeup in the private LB, but it seems that trusting the public LB was a good idea. I selected my top 2 public LB submissions and it worked:
- 0.82 LB and in the private it gave 0.92 kappa
- 0.817 LB and 0.92 in the private too
Both of these kernels are an ensemble of 4 models:
- EfficientNet b0 , size=416 (btw this single model gave 0.806 in LB and 0.92 in the private)
- EfficientNet b3 , size=288 (0.805 LB vs 0.913 private)
- EfficientNet b4 , size=288 (did not submit it)
- EfficientNet b5 , size=256 (0.797 LB vs 0.914 private)
The only difference between my top 2 kernels was the TTA:
I just used a simple 5-TTA with these augmentations :
<code>
albumentations.Compose([
albumentations.Resize(S, S),
albumentations.ShiftScaleRotate(shift_limit=0.08, scale_limit=0.15, rotate_limit=30, p=0.5),
albumentations.Normalize(),
AT.ToTensor(),
])
</code></p>
<p>but gave me worse public LB. I thought TTA would help in the private LB but it seems that in all my TTA kernels scores went down even in private LB.
Another thing to add: all my models were trained on 80% of the data. I did not use K-folds since K-folds gave me bad results, and I think I happened to choose a golden fold that scored about 0.02 higher than the other folds. That's why ensembling didn't give me a huge boost, since I was only validating my models on ~700 images.
Thank you for everyone who shared a kernel or an idea or made a comment or a post in the discussion section.
Thank you Kaggle and APTOS for this amazing competition.</p>
<p>[Update : I trained every model on the old data of the previous competition (train+test) using the current data as validation. All my models converge to 0.88 kappa with pretrained weights]</p> |
APTOS 2019 Blindness Detection | 46th place with classification and barely using external data. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners and those who learned something new! I certainly did.</p>
<p>I wanted to experiment with fastai and I entered this competition because I thought the dataset is small. Oh boy, how wrong I was. </p>
<p><strong>Software:</strong> Fastai</p>
<p><strong>Resources:</strong> Kaggle Kernels only</p>
<p><strong>Networks</strong> : EfficientNet 2xB3 , 2xB5 and 1x seresnext101 each with 5 folds</p>
<p><strong>Preprocessing</strong>: some used cropping and other models no preprocessing.</p>
<p><strong>Augmentation</strong>: train transform = brightness, contrast, hflip(), vflip() (some models), and two custom processes: random zoom with different ratios from 1:1 to 4:3, and sometimes a crop to a 4:3 ratio.</p>
<p><strong>Optimizer</strong> : Adam</p>
<p><strong>Loss</strong> : CE with label smoothing</p>
<p><strong>Learning Rate Scheduler</strong>:one_cycle with max 6e-4</p>
<p><strong>Validation</strong>: 5-fold stratified on the official data; if external data was used, it was added only to train, not to the holdout </p>
<p><strong>Score</strong>. I did fast submission so only Public LB known:</p>
<p>only official data
B3 0.813 (448px )
B5 0.817 (456px )
seresnext101 0.799 (416px) </p>
<p>official + external train clipped at 2000 samples per class:
B3 0.822 (456px)
B5: no clue about the public score, but CV improved to 0.94+ so I added it (512px); here I trusted my gut feeling to trust the CV</p>
<p><strong>Final Submission</strong>:
0. Combining all EfficientNet outputs together, no TTA (Public LB: 0.831, Private: 0.925)
1. Combining all EfficientNet outputs together + h_flip (Public LB: 0.828, Private: 0.925)
2. Combining all outputs together + h_flip (Public LB: 0.829, Private: 0.926)
- excluded sub1 because I thought that with h_flip it would be more robust. Good decision
- In some previous competitions I would ensemble only the models with the best LB scores, but here I only used those with correlation <0.9
- I added seresnext on the last day although it didn't seem to work as well on the Public LB, but it turned out it actually did pretty well on Private. A few of my early trials with seresnext scored ~0.79 Public, which transferred to ~0.92 Private </p>
<p><strong>CONCLUSION</strong>: I am very satisfied given
- I treated this as classification and not regression, which would have been more appropriate
- I barely used external data (GPU limit killed my primary plan and I lost the motivation first time)
- I used fastai which is easy to get you going but I found it not so straightforward to implement your own ideas such as multiheaded outputs and combination of different losses. (so I lost the motivation for the second time)
- I did not use pseudolabels
- And I also started sharing solutions and discussing topics</p>
<p><strong>Fun fact</strong>: I am a veterinarian on parental leave and I did most of my experimenting via phone</p> |
APTOS 2019 Blindness Detection | 37th Place Solution(And Let's talk about CV and private score) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Since some of the top place solutions are already available, I'm not sure if this note is going to add anything new. Just sharing my experiences. :)</p>
<p>My final LB score is an ensemble of 5 Efficientnet models. Here is the overview:</p>
<ul>
<li><p><strong>Preprocessing</strong>: Only Circle Cropping from <a href="https://www.kaggle.com/tanlikesmath/diabetic-retinopathy-resnet50-binary-cropped">this</a> kernel by <a href="/tanlikesmath">@tanlikesmath</a> </p></li>
<li><p><strong>Augmentations</strong>: Horizontal and Vertical Flip, 360 rotation, zoom 20%-25%, lighting 50%. Used Fast.ai transformations.</p></li>
<li><p><strong>Training</strong>: Mostly followed <a href="/drhabib">@drhabib</a>'s <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/100815#latest-582005">method</a>. Did 5+10 training using <code>2015</code> data as train and <code>2019</code> as validation set. Then I split the <code>2019</code> data into 80:20 ratio using <code>sklearn</code>'s Stratified Split function and finetuned the previously trained model over it for 15 epochs. I took the best model based on validation loss which didn't seem to be overfitted and used it for submission. I used a fixed seed so that I can compare different model performances over the same val set. Didn't use CV or TTA.</p></li>
</ul>
<p>As <a href="/rishabhiitbhu">@rishabhiitbhu</a> <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/106844#latest-620958">showed</a>, the class distribution is quite similar across the different datasets, so I had a hunch that it wouldn't be that much different for our large test set. This is why I trusted my stratified validation data (can't even imagine what would have happened if I were wrong). </p>
<p>My Validation and Public scores didn't seem to be correlated but now the private scores resemble my val scores. Here is a summary of the models I used:</p>
<p>| Model | Image Size | Val Kappa | Public Kappa | Private Kappa |
|-----------------|------------|-----------|--------------|---------------|
| Efficientnet B2 | 256 | 0.922 | 0.807 | 0.918 |
| Efficientnet B1 | 256 | 0.926 | 0.804 | 0.921 |
| Efficientnet B0 | 256 | 0.919 | 0.816 | 0.914 |
| Efficientnet B3 | 256 | 0.921 | 0.812 | 0.917 |
| Efficientnet B5 | 300 | 0.920 | 0.802 | 0.916 |
| Ensemble | | 0.932 | 0.826 | 0.926 |</p>
<p>Please share and let us know about your CV and Private scores. Also congratulations to the ones who survived the LB shakeup. :) </p> |
Generative Dog Images | Journey to LB 44.69 (ResNet DCGAN+Conditional BN) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi All
First, I want to thank Kaggle for holding this great competition, the first GAN competition. I really enjoyed it and learned a lot. Thanks everyone for the great discussions and great kernels, and thanks <a href="/cdeotte">@cdeotte</a> for the detailed explanation of GANs and the good kernel shares.</p>
<p>Second, many thanks for the great RaLSGAN kernel from <a href="/sakami">@sakami</a>. I forked it and ran some experiments to get my final solution, LB 44.69. Below is my final submission kernel: <a href="https://www.kaggle.com/super13579/ralsgan-dogs-resnet-cbn?scriptVersionId=18722384">https://www.kaggle.com/super13579/ralsgan-dogs-resnet-cbn?scriptVersionId=18722384</a></p>
<h2>Journey to LB 44.69</h2>
<h3>1. Increase the convolution output channel count of the Discriminator, LB 69 to 54</h3>
<p>Tripling the convolution output channel count of the Discriminator took LB 69 (original kernel) to LB 54.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F316036%2Fc971fbc7413bc7161bca99fd04a0c3bf%2Fexperience.JPG?generation=1566137754427868&alt=media" alt=""></p>
<h3>2. Add pixel normalization and spectral normalization to the Generator, LB 54 to 52</h3>
<p>Adding pixel normalization and spectral normalization to the Generator took LB 54 to LB 52.</p>
<h3>3. Using Breed Label, LB 52 to 49</h3>
<p>I added an auxiliary classification layer at the end of the discriminator and chose a good weight to merge it into the loss function (tuned manually), LB 52 to LB 49.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F316036%2F99b6b3274307a43255f302cf23b53c45%2Fexperience1.JPG?generation=1566138320702129&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F316036%2F8280dbba30b3340ab7d0259960c1d2da%2Fexperience2.JPG?generation=1566138413927201&alt=media" alt=""></p>
<h3>4. Add ResNet Blocks and Conditional Batch Normalization, LB 49 to 44</h3>
<p>I saw that recent GAN papers all use ResNet blocks and conditional batch normalization, so I tried to implement them: ResNet blocks for the Generator and Discriminator, CBN for the Generator, LB 49 to LB 44. You can refer to my final kernel to see how to implement them.</p>
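<p>Conditional batch normalization lets the breed label produce the scale and shift of a BatchNorm layer; a minimal version looks like this (a sketch, not necessarily identical to my kernel's code):
<code>python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """BatchNorm whose gamma/beta are produced from the class (breed) label."""
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.embed = nn.Embedding(num_classes, num_features * 2)
        self.embed.weight.data[:, :num_features].fill_(1.0)  # gamma starts at 1
        self.embed.weight.data[:, num_features:].zero_()      # beta starts at 0

    def forward(self, x, y):
        gamma, beta = self.embed(y).chunk(2, dim=1)
        out = self.bn(x)
        return gamma[:, :, None, None] * out + beta[:, :, None, None]
</code></p>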
<p>These are dog images from my best scoring kernel; some heads are missing XD
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F316036%2F4cc0a8e2c2d0cb91924ce42ecc25dc00%2FDog%20image.JPG?generation=1566139332159491&alt=media" alt=""></p>
<h2>Summary</h2>
<p>I think BigGAN is the biggest winner in this competition. I saw some papers mention it, but I thought it would exceed the time limit (9 hours), because it looks like a "Big" model, so I didn't implement it, only the ResNet blocks and CBN. I learned a lot about GANs from this competition, thanks everyone for the great discussions and great kernels! GANs are very interesting!</p>
APTOS 2019 Blindness Detection | 112th place solution (Fastai student) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Introduction</strong>: It's my first competition after finishing the fastai class. As there are lots of great solutions posted, I will keep mine simple and short.</p>
<p><strong>Validation Construction (didn't work out)</strong></p>
<p>Model used: Resnet18, Resnet34, Resnet50
Validation set: random split, stratified split, and mixing test and train to carefully select train images that are close to the test set (threshold 0.65: predictions with confidence lower than 0.65 were considered similar to test)
<p>The idea is based on Fastai ML class, use three different models, create different validation set, use validation score and LB score to plot to see if it makes sense.</p>
<p>The goal was to find a validation set whose local score was consistent with the LB score (scaling linearly as model complexity went up).</p>
<p>The result was not pretty: the gap between the validation set and the LB was still huge. But besides the later random seed selection, I also trained some models based on the validation set I created. </p>
<p><strong>Models</strong>:
Using the regression idea: if the target is 4, predicting 1 should be punished more than predicting 3. I think regression + clipping works fine for this case. Also thanks to <a href="/abhishek">@abhishek</a> for his great kernel on clipping.
<a href="https://www.kaggle.com/abhishek/very-simple-pytorch-training-0-59">https://www.kaggle.com/abhishek/very-simple-pytorch-training-0-59</a></p>
<p>Model used are:
Resnet 50, 101
ResNext 101 32*16d
Efficient Net B0-B4</p>
<p>Based on my test results, EfficientNet B2 and B4 gave the best results. I was able to reach an LB around 0.79+ with the baseline model (fastai default augmentation, lr=2e-4, seed=42, split = 0.1)</p>
<p>I started to apply progressive image resizing. Surprisingly, it didn't improve the score much (size 128 -> 244). Therefore, based on the <a href="/drhabib">@drhabib</a> and <a href="/taindow">@taindow</a> discussion threads, I set my image size = (256,256). In fastai, if you pass a tuple for the image size, it will use Squish instead of Crop.</p>
<p>I started to pre-train on the 2015 data, used ImageNet and Instagram pretrained weights, and used the 2019 data as a validation set to monitor model performance. I also used Ben's cropping method, based on the great kernel <a href="https://www.kaggle.com/ratthachat/aptos-eye-preprocessing-in-diabetic-retinopathy">https://www.kaggle.com/ratthachat/aptos-eye-preprocessing-in-diabetic-retinopathy</a></p>
<p>However, I kept two pretrained models, one with cropping, one without cropping. </p>
<p>From here, I could get a single-fold EfficientNet to score LB 0.801 in 8 epochs. </p>
<p>I started to test RAdam. Unlike what others posted in the discussion thread, RAdam with fastai lr_find(), using a suggested lr 10 times smaller, gave me a single pre-trained model with a 0.800 LB score (and it is very solid).</p>
<p>I also tried splitting the EfficientNet model at the ._fc layer, the ._conv_head layer, and a middle layer where the filter count changes, to make 3 layer groups and then apply discriminative learning rates. </p>
<p>It didn't work out. </p>
<p>The final thing I tried was removing ._conv_head and sticking fastai concat pooling on top of the EfficientNet body. The model performed in a very strange way: without training the body, the concat pooling head alone gave me 0.78+ LB, but if I fine-tuned the entire model, the LB dropped to 0.74+. Therefore I decided to use a consistent lr for all layers.</p>
<p><strong>Ensemble</strong></p>
<p>5 fold Efficient Net b4 gives me 0.804 LB score.</p>
<p>I finally ensembled all my 0.8+ LB models, which boosted the score to 0.809.
Carefully selecting some high-scoring models based on different cropping and complexity finally got me to 0.814 LB.</p>
<p>It was a fun competition overall, and I have learned a lot from the community. It is my first competition after 9 months of studying deep learning. I can't imagine that I started in December 2018 with zero experience in ML/DL and zero experience in Python; 9 months later I got my first silver at Kaggle.</p>
<p>Special thanks for the kaggle / fastai community! </p>
<p>All the best, </p> |
Predicting Molecular Properties | #1 Solution - hybrid | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone,
Here's a brief writeup of the method we used for the #1 entry. I was hoping to post this a bit sooner, and apologies for the delay.</p>
<h2>Update: 9/13/19</h2>
<p>We have posted the code for our method, all of which is available under MIT license at:
<a href="https://github.com/boschresearch/BCAI_kaggle_CHAMPS">https://github.com/boschresearch/BCAI_kaggle_CHAMPS</a>
The main code for the model is available in the <code>src/</code> directory, though for those interested, the <code>models/</code> directory contains slight variants on this code that were used in the ensemble (mainly earlier versions of the same architecture) so that you can recreate the predictions exactly.</p>
<h2>Introduction</h2>
<p>First, a little bit of background on the team. This project was done at Bosch Research, specifically as collaboration between two groups, one at Bosch Corporate Research, and one at the Bosch Center for AI (BCAI). Our team consisted of both some ML experts and domain experts. To introduce our team:</p>
<ul>
<li><p>Jonathan Mailoa and Mordechai Kornbluth are both Research Engineers working out of the Boston lab of Bosch Research. They are domain experts on DFT and ML approach to molecular simulation, and have worked a great deal on molecular modeling, including lately some work with GNNs.</p></li>
<li><p>Myself (Zico Kolter, I'm a faculty member working in machine learning at CMU, but work in industry one day a week at BCAI in Pittsburgh), Devin Willmott (Research Scientist at BCAI), and Shaojie Bai (my student at CMU, but doing this while interning at BCAI) were all coming to the competition from the ML side. I had actually done a bit of (pre-deep-learning, so ancient history) work in ML for molecular modeling, though we didn't end up using many of those methods.</p></li>
</ul>
<h2>Overall architecture</h2>
<p>Our overall approach is what I would call a kind of "soft" graph transformer. We wrote the model all from scratch for this work, instead of building upon any existing code. The model processes an entire molecule at once, simultaneously making a prediction for each of the scalar couplings in the molecule (we hadn't considered the per-atom approach that Quantum Uncertainty used, and frankly it sounds like that may be a pretty competitive approach, given that they did nearly as well with much less physical information).</p>
<p>Unlike a traditional graph model, though, we're really processing the data as more of a "meta-graph". In contrast to most graph methods for molecules, where atoms are nodes and bonds are edges, in our graph each atom, bond (both chemical bonds and non-chemical bonds, i.e., arbitrary pairs of atoms are included in the model), and even triplet or quad, if desired, all become nodes for the graph transformer. This means that each molecule has on the order of ~500 nodes (depending on whether we include all the bonds or not, or whether we include triplets or quads, which would only be included for chemical bonds). At each layer of the network, we maintain an embedding for each node in the graph, of dimension d ~= 600-750 in most of our models.</p>
<p>Following standard transformer architectures, at each layer of the network we use a self-attention layer that mixes the embeddings between the nodes. The "standard" scaled self-attention layer from the transformer paper would be something like (forgive the latex-esque notation formatted as code ... I'm entirely unprepared to describe model architectures without being able to write some form of equation):</p>
<p><code>Z' = W_1 Z softmax(Z^T W_2^T W_3 Z)</code></p>
<p>where W_1, W_2, and W_3 are weights of the layer. However, following the general practice of graph transformer architectures, we instead use a term</p>
<p><code>Z' = W_1 Z softmax(Z^T W_2^T W_3 Z - gamma*D)</code></p>
<p>where D is a distance matrix defined by the graph. For a "hard" graph transformer, this will work like the mask in normal self-attention layers, and be infinite for nodes that are not connected, and zero for nodes that are connected (and the gamma term would be fixed to one, say). In our "soft" version of the graph transformer, however, D was just the squared distance matrix between nodes in the graph, and gamma was a learnable parameters: as gamma went to zero, this would become a standard transformer with no notion of distance between objects, whereas as gamma went to infinity, it would become a hard graph transformer. To be even more precise, in the final architecture we used a multi-head version of this self-attention layers, as is also common in transformer models.</p>
<p>As a final note, for this to work, we needed to define a distance measure between all the nodes in the graph. For, e.g., atom-to-atom distances, we just used the actual distance between atoms; for atom-to-bond distances, we would use the minimum distance from the atom to the two atoms in the bond, with similar extensions for triplets, quads, etc.</p>
<p>After the self-attention layer, we used the normal fully-connected and layer-norm layers standard to transformer architectures, and used models of depth ranging from 14-16 (depending on available memory). After the final embeddings, we had separate heads that would predict the final scalar coupling for the nodes that corresponded to pairs for which we needed the coupling value, using a simple two-layer MLP for each type (or actually, for several sub-types of the bonds, which we'll mention below).</p>
<h2>Input features and embeddings</h2>
<p>As our input representation (i.e., the first-layer embeddings for all nodes in the network), for each type of node we would include a kind of hierarchical embedding, where we had different levels of specificity for the different atoms, bonds, etc.</p>
<p>As an example, for each bond (again, really meaning just a pair of atoms ... I'm referring to pairs generally as bonds even if they are not chemical bonds in the molecule), we described it in terms of the two atoms belonging to the bond, but also in terms of the number of bonds that each atom would have. Thus, each bond could be described by multiple types and subtypes: first by just the type of atoms in the bond, then by the type and total number of bonds that each atom had, and then by a few additional properties such as the bond order, etc.</p>
<p>This led to substantially more coupling "types" than just the 8 that were used in the competition, and we actually had separate final layers for 33 different types of bonds, rather than the 8 in the competition (for instance, the 1JCH had very different properties depending on the number of bonds the C atom had), which definitely improved our predictions slightly.</p>
<p>In addition to the "discrete" embedding, each node type would have associated with it one or two scalar constants that we would embed with a Fourier encoding, much like positional encoding in a standard sequential Transformer model. For atoms, this consisted of the partial charge of the atom, as given by the OpenBabel library (*correction: original post said this was from RDKit, but RDKit was used for bond orders and connections, whereas OpenBabel was used for charges), just using some simple rules based upon graph structure; for bonds it was the distance between the two atoms; for triplets the angle between the center atom and the two other atoms; and for quads the dihedral angle between the two planes formed by the center bond and the two other bonds (quads didn't end up helping too much for this particular task, though, so were left out of most of our final models).</p>
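<p>A sketch of such a Fourier/positional-style encoding of a scalar feature (the frequency range and dimensionality here are illustrative, not necessarily the values used in our models):
<code>python
import math
import torch

def fourier_encode(x, dim=16, max_freq=100.0):
    """x: (n,) tensor of scalars (distance, angle, partial charge, ...).
    Returns (n, dim) sin/cos features, like a transformer positional encoding."""
    freqs = torch.exp(torch.linspace(0.0, math.log(max_freq), dim // 2))
    angles = x[:, None] * freqs[None, :]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
</code></p>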
<h2>Ensembling</h2>
<p>In the end, we trained 13 models that we used for the final ensemble, which basically just corresponded to different iterations and versions of the same basic structure (at times we also included a few models based upon a more standard graph neural network approach from the PyTorch Geometric library, though they weren't included in the final ensemble). We timed about 4 final models to complete on the final day of competition, including the model which eventually got the best performance, which is why we managed to sneak into the top spot on the very last day. As I had mentioned in my last post, there really wasn't anything that much happening during the last ~4 days where we moved up the rankings, nor were we "holding anything back": our models simply kept improving each day, and we'd submit our best version of the ensemble, which kept bumping us up day by day.</p>
<p>Our best single model got about -3.08 on the public leaderboard, which actually surprised me, given that Quantum Uncertainty's best model was substantially better. But I think the fact that we predicted entire molecules at once may have increased the variance of predictions across all molecules, and therefore also seemingly made ensembling several different models work much better. By taking a straight median across predictions from the best models, for instance, we could get to the ~-3.22 range, and with a slightly more involved blending scheme (using the median of all 13 models to determine which 9 models seemed best, then taking the mean of a few different medians of the different model predictions), we were able to achieve our score of -3.245 on the private leaderboard.</p>
<h2>Other random notes</h2>
<ul>
<li>We used small amounts of dropout at each layer, as in standard transformer models, though found that it was best to use a very small amount of dropout.</li>
<li>At the very end of the competition, we did find that for our model a kind of cutout procedure (where we would randomly drop out two atoms from the network, plus all bonds, triplets, etc, that contained this atom), worked as a very effective regularizer.</li>
<li>We didn't use QM7/QM9 or in fact any of the extra data that was included in the competition besides the structures and train/test files (so just atoms and bonds).</li>
<li>We used RDKit/xyz2mol and a few other packages to parse the atomic structure to e.g., the bond and neighbor configuration feautres. Jonathan had a post about this earlier listing the packages we used.</li>
<li>I'm not even going to attempt to list all the things we tried that didn't work, but there was lots :-). Quad / dihedral angle information, for instance, actually seemed to <em>hurt</em> generalization performance, as did including simple Coulomb forces at the bond level.</li>
<li>As you'd expect, we had a fair amount of compute resources for the work. Most of the models were trained on 4x RTX2080 Ti systems (we had 5 of these available through the month we were working on the project), with a handful also trained on six single V100s we got access to in the last week.</li>
</ul>
<h2>Final thoughts</h2>
<p>I want to thank the CHAMPS team for putting on an amazing competition. As many others have pointed out, the stability between the private/public leaderboards demonstrates an understanding of how to run a machine learning contest that sadly seems to be missing from many of the other contests I looked over on Kaggle previously.</p>
<p>I also again want to thank the Quantum Uncertainty team, who as I mentioned before, were our goalposts the entire competition. After reading their solution I'm coming away more convinced about Transformers as the architecture that's going to be dominant across many different domains, not just sequence models (despite the fact that I will never, ever, forgive the original paper for the monstrosity that is the "query", "key", and "value" terminology for self-attention layers ;-) ). I also think their per-atom transformer is an awesome idea, and something I wish we had thought of ... I think most likely it took us using a <em>lot</em> of domain knowledge and engineering to make back up the difference that their per-atom approach got. And while it's wild to me that a non-rotationally invariant model would do so well (since we only used distance as a feature at the bond level, our model is rotationally invariant), it's impossible to argue with results. Their model is excellent, and I think it actually goes to show there is substantial room for improvement still in the performance we can get on this task.</p>
<p>Thanks again,
Zico, Devin, Shaojie, Jonathan, and Mordechai</p> |
APTOS 2019 Blindness Detection | 17th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners. Here is my solution. </p>
<p>My solution is pretty straight forward. I treat it as a regression problem. It is a linear stacking of
- efficientnet-b0
- efficientnet-b3
- efficientnet-b4
- efficientnet-b5</p>
<p>Other important features: <br>
- Image size: 380
- Pretrain Data: 2015 train data, valid on 2015 public test and test on 2015 private test.
- Finetune Data: 2019 train data (5-fold CV).
- Augmentation: H/V flip, rotation(0-360), zoom(0 - 0.1), contrast(0-0.2) and brightness(0-0.2).
- TTA: 8x(H/V flip + zoom(0, 0.1)</p>
<p>The only thing can be considered as trick is the cropping applied:
- 2015 data is of high quality, so I decide to do center square crop on all of them.
- For 2019 data, all the 1:1 images are zoom crop (randomly zoom in 0 - 0.18) using 4:3 aspect ratio. </p>
<p>Why different treatments? This is because I found there are only two aspect ratios (1:1 and 4:3) in 2019 data after removing all the black region, and ~99% of 1:1 images are Class 1. When zoomed in at 0.18, the remaining black region in the 1:1 images look exactly the same as the rest of 4:3 images. I hope after this treatment, the model would classify Class 1 images based on the information in the center region, rather than the black region ratio/shape or aspect ratio. </p>
<p>I think the 4:3 images are enlarged images from 1:1 images because when the doctors found potential markers for the illness in one image, they would zoom in to verify their findings. After zooming, they simply saved the enlarged images. Why 4:3? I guess this is the way most software UI are designed. </p>
<p>It is kind of hard for me to accept that I am so close to gold zone, but this is how kaggle works. I guess I need to work harder for my solo gold. </p>
<p>BTW, CV qwk is aligned with private LB perfectly. For my best solution (QWK 0.930), the local CV (QWK) is 0.93456. The golden rule "Trust local CV" applies again! </p> |
Recursion Cellular Image Classification | 11th place solution - AttentionHeads | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>AttentionHeads</h1>
<p>First of all, many thanks to my teammates, <a href="/dempton">@dempton</a>, <a href="/ddanevskyi">@ddanevskyi</a>, <a href="/orgunova">@orgunova</a> and <a href="/cutlass90">@cutlass90</a>. Also our congratulations to <a href="/dempton">@dempton</a> for getting his Grandmaster badge.
So, here is our (pretty simple) approach.</p>
<h1>Model</h1>
<ul>
<li>Ensemble of 2 <strong>EfficientNets</strong>: B0, B5</li>
<li><strong>6 channel</strong> input with first Conv layer replaced with 6 channel version</li>
<li><strong>Batch Normalization</strong> before first Conv layer</li>
<li>We normalize each image using mean/std <strong>computed on all images in experiment</strong></li>
</ul>
<h1>Training</h1>
<ul>
<li>Simplest approach possible: <strong>3-fold split, stratified by cell type</strong>, as well as by <strong>visual appearance</strong>: We made simple visualizations of experiments within cell type and tried to split similarly looking experiments into different folds, so to have all kinds of images in every fold.</li>
<li>No smart sampling or special losses, just basic <strong>Cross Entropy</strong>, without label smoothing or metric learning</li>
<li><strong>SGD</strong> with <a href="https://arxiv.org/abs/1907.08610">LookAhead optimizer</a> (a=0.5, k=5) and 4-step <strong>gradient accumulation</strong></li>
<li><a href="https://sgugger.github.io/the-1cycle-policy.html">1cycle policy and LR range test</a> to find best learning rate for training</li>
<li><strong>Polyak averaging</strong> (exponentially weighted version). When evaluating the model we used an exponentially weighted average of model parameters instead of the last/best parameters. This does not give any performance improvement for the final model, but has a very pleasant <strong>smoothing effect on metric/loss curves</strong> and gives huge performance improvements at almost all training steps except for the very last, where it reaches the same performance as the model without averaging (a minimal sketch is shown right after this list)
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F726978%2F6d0d5effe9c9f35f100cde2c0ec5bf2e%2FScreen%20Shot%202019-09-27%20at%2012.44.21%20PM.png?generation=1569577526910469&alt=media" alt="polyak averaging effect"></li>
<li>Augmentations: randomly sample site and apply flip and transpose, as well as random <strong>channel reweighting</strong> (just multiply each channel by some positive values with restriction that they should sum to 6)</li>
<li><strong>Progressive resize</strong> during training: starting from random crops of size 224 we linearly scale crop size to 512. This gives <strong>2x faster training</strong> without drop in performance</li>
<li>1 round of <strong>pseudo labeling</strong>. We just selected top-K samples with most confident predictions, added them to each fold's train set and finetuned our models for several additional epochs</li>
</ul>
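<p>A minimal sketch of this kind of parameter averaging (the decay value is illustrative, and a complete version would also track batch-norm buffers):
<code>python
import copy
import torch

class AveragedModel:
    """Exponentially weighted average of model parameters, used only for evaluation."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        # call after every optimizer step
        for avg_p, p in zip(self.shadow.parameters(), model.parameters()):
            avg_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
</code></p>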
<h1>Post-processing</h1>
<ul>
<li><strong>LAP solver</strong> within each experiment to assign classes to images</li>
<li><strong>Softmax Temperature</strong>. Just multiply logits by some positive number before taking softmax, <code>(logits * t).softmax()</code>; this sharpens or softens the distribution, which has a huge impact on performance when used with LAP. The temperature value can be picked on the validation set (see the sketch after this list).</li>
<li>The "277 classes per plate" trick. It might be different from what other participant were doing, but basically we did the following:
<ol><li>Search for best <strong>temperature</strong> that maximizes metric.</li>
<li>Use <strong>LAP</strong> within experiment to get model predictions.</li>
<li>Now we need to decide: within each experiment, what sirna groups should be assigned to each plate. The next step if to just <strong>zero out probabilities</strong> of classes which <strong>does not belong to this group</strong>.</li>
<li>Run <strong>LAP</strong> again.</li></ol></li>
</ul>
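<p>A sketch of the temperature + assignment step for a single experiment, using SciPy's Hungarian solver as a stand-in for the LAP package:
<code>python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_sirnas(logits, temperature):
    """logits: (n_images, n_classes) float array for one experiment.
    Returns one predicted class per image, each class used at most once."""
    z = logits * temperature
    z = z - z.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    cost = -np.log(probs + 1e-12)
    rows, cols = linear_sum_assignment(cost)
    return cols[np.argsort(rows)]
</code></p>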
<h1>TTA</h1>
<ul>
<li><strong>None</strong>, just average of 2 sites</li>
</ul>
<h1>Tools and hardware</h1>
<ul>
<li><strong>PyTorch</strong></li>
<li>1-2 1080ti most of the time, and about 8 GPUs in last 2-3 days.</li>
<li><a href="https://github.com/gatagat/lap">LAP solver</a></li>
</ul> |
Generative Dog Images | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the Kaggle team for holding such a novel competition, and congrats to all the winners! And special thanks to <a href="/cdeotte">@cdeotte</a>, <a href="/dvorobiev">@dvorobiev</a>, <a href="/jesucristo">@jesucristo</a>, who made this competition more interesting. I have learned many things about GANs through this competition. </p>
<p>To be honest, I’m very surprised at my final position. I thought I could not be in the gold medal zone, since my public score was not under 30.</p>
<p>Here is my solution(Public 31.09602, Private 55.42142).</p>
<p>・Preprocessing&Augmentations
- exclude images with extreme aspect ratios (y/x < 0.2 or y/x > 4.0)
- exclude images with intruders (thanks <a href="/korovai">@korovai</a> for sharing the kernel <a href="https://www.kaggle.com/korovai/dogs-images-intruders-extraction-tf-gan">https://www.kaggle.com/korovai/dogs-images-intruders-extraction-tf-gan</a>)
- use BoundingBox (no modification)
- Resize 64 (one side) and then RandomCrop to image size (64,64)
- HorizontalFlip(p=0.5)</p>
<p>・Model: BigGAN
- number of parameters G:10M, D:8M
- input noise from normal distribution (nz=120)
- use LeakyReLU
- attention on size 32 feature map
- use the truncation trick (threshold=0.8; a small sketch follows this list)
- no EMA</p>
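<p>The truncation trick just resamples latent entries whose magnitude exceeds the threshold; a simple version looks like this (resampling in a loop is one common way to implement it):
<code>python
import torch

def truncated_noise(batch_size, nz=120, threshold=0.8):
    z = torch.randn(batch_size, nz)
    while True:
        outliers = z.abs() > threshold
        if not outliers.any():
            return z
        z[outliers] = torch.randn(int(outliers.sum()))
</code></p>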
<p>・Loss: BCE loss
・Optimizer: Adam (lrG=3e-4, lrD=3e-4, beta1=0.0, beta2=0.999)
・Batch Size: 32
・Epochs: 130 (maximum kernel time limit, i.e. 32400sec)
・Others
- label smoothing 0.9 </p>
<p>At the beginning of this competition, I played with standard DCGANs, which got Public scores around 55. Then I used AC-GAN and got around 50. With ACGAN-projection, I could get around 39. Then I switched to BigGAN and jumped to around 31. But after that, I struggled to improve my score and could not get under 30. I have checked all of my submitted scores and found that there is some randomness. So I'm lucky to be in the final position. Anyway, I enjoyed this competition very much. Thank you!</p>
<hr>
<p>EDIT:
I made my kernel public <a href="https://www.kaggle.com/tikutiku/gan-dogs-starter-biggan">https://www.kaggle.com/tikutiku/gan-dogs-starter-biggan</a>
version32 is the 1st place solution</p> |
Generative Dog Images | My DCGAN solution LB 30.65 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi there!</p>
<p>The notebook for my solution is available here : <a href="https://www.kaggle.com/alvaroma/dcgan-lb-30-65">https://www.kaggle.com/alvaroma/dcgan-lb-30-65</a></p>
<p>Keypoints:</p>
<ul>
<li>Pytorch DCGan</li>
<li>NZ 64, Batch 32, LRG 0.0004, LRD 0.0008</li>
<li>Cropped images, selecting only those with ratio 1:50</li>
<li>Transformations: Flip Horizontal, Random Crop</li>
<li>Early stop at best FID</li>
<li>Select best random images</li>
</ul>
<p>Good luck for all participants!</p> |
Generative Dog Images | Public LB 15.81605 solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is one of my final submitted kernels (I selected quite similar two kernels for final submission). </p>
<p><a href="https://www.kaggle.com/yukia18/sub-rals-ac-biggan-with-minibatchstddev">[Sub] RaLS AC-BigGAN with MinibatchStddev</a>
Version 3 got public LB 15.81605.</p>
<p>FID emphasizes sample diversity too much. My images, generated from loosely thresholded (2~) truncated latents, are not good in terms of quality but get a good score. Once you download images.zip from my kernel, you can see that my generated images are of crappy quality but diverse.</p>
<p><strong>Key points:</strong></p>
<ul>
<li>Model: based on BigGAN
<ul><li>shared embedding of dog breed labels</li>
<li>hierarchical latent noise</li>
<li>projection</li>
<li>auxiliary classifier (ACGAN)</li></ul></li>
<li>Loss: RaLS with weighted auxiliary classification loss</li>
<li>Batch size: 128</li>
<li>Exponential Moving Average of generator weights </li>
<li>Uniform input noise on real images</li>
</ul> |
Generative Dog Images | My Solution with Code (Public LB 45.6) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to all the participants in this competition.
Here is my solution, which gets 45.6 on the public LB.</p>
<h2>Solution</h2>
<p>My refactored code is available at <a href="https://github.com/fujibo/kaggle-dog">https://github.com/fujibo/kaggle-dog</a>.</p>
<p>My code is based on SN-GAN with projection discriminator <a href="https://github.com/pfnet-research/sngan_projection">https://github.com/pfnet-research/sngan_projection</a>.
I added the following modifications to the base code.
- Apply spectral normalization to the generator (a PyTorch sketch follows this paragraph)
- Increase the learning rate and reduce the number of discriminator (critic) updates per generator step
These techniques are used in SA-GAN, which outperforms SN-GAN.
I did not use the original SA-GAN code because I am not good at tensorflow.</p>
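<p>The base repository above is a Chainer implementation; as an illustration only, the equivalent way to add spectral normalization to a generator in PyTorch would look roughly like this:
<code>python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def add_spectral_norm(module):
    """Recursively wrap conv/linear layers with spectral normalization."""
    for name, child in list(module.named_children()):
        if isinstance(child, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            setattr(module, name, spectral_norm(child))
        else:
            add_spectral_norm(child)
    return module
</code></p>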
<p>Moreover, I tuned hyper-parameters as follows.
- change <code>batch size</code> from 64 to 128
- change <code>iteration_decay_start</code> from 0 to 18000
- change <code>iteration</code> to 24000</p>
<h2>Insights</h2>
<ul>
<li>For reducing training time, I produced dog images with 32x32 pixels followed by bilinear resizing, but obtained a worse score.</li>
<li>Different trials somehow had different training times; that is, trial A could not finish 27000 iterations, while trial B could finish 28000 iterations.</li>
</ul> |
Generative Dog Images | 8th place solution(private 89) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>code</h1>
<ul>
<li><p>kernel
<a href="https://www.kaggle.com/katotetsuro/stylegan-private-89?scriptVersionId=18524688">https://www.kaggle.com/katotetsuro/stylegan-private-89?scriptVersionId=18524688</a></p></li>
<li><p>github
<a href="https://github.com/katotetsuro/chainer-stylegan/tree/ac">https://github.com/katotetsuro/chainer-stylegan/tree/ac</a></p></li>
</ul>
<p>FYI: This scripting style is borrowed from this:
<a href="https://github.com/lopuhin/kaggle-script-template">https://github.com/lopuhin/kaggle-script-template</a></p>
<p>I prefer script style, which is easy to manage by git.</p>
<h1>summary</h1>
<p>To be honest, there is no special technique in my solution.
I chose stylegan, then looked for good parameters.
My StyleGAN started with a public score of 140; then, by tuning parameters, I got to 35.</p>
<ul>
<li>for preprocessing: crop image by bounding box, random horizontal flip</li>
<li>reduce input channel size of style generator to 256 because of resource limitation.</li>
</ul>
<p>all hyper parameters are shown in below.
<a href="https://github.com/katotetsuro/chainer-stylegan/blob/ac/src/stylegan/config.py">https://github.com/katotetsuro/chainer-stylegan/blob/ac/src/stylegan/config.py</a>
<a href="https://github.com/katotetsuro/chainer-stylegan/blob/ac/script_template.py">https://github.com/katotetsuro/chainer-stylegan/blob/ac/script_template.py</a></p>
<h1>trials that didn't work for me</h1>
<ul>
<li>Truncation trick from stylegan paper. I tried various truncation strength, but without truncation was the best.</li>
<li>Auxiliary Classifier</li>
<li>Data cleaning. I thought training data quality was quite important, so I tried some procedures (dog face detection, manually picking good images...) but nothing worked.
Today, I read some kernels and learned there are some good techniques.
This is one area for improvement in my solution, I guess.</li>
</ul>
<p>It was really fun competition! Thanks for kaggle team!!</p> |
Recursion Cellular Image Classification | 7th place solution summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Recursion and Kaggle for organizing this unique and challenging competition! When I decided to participate in it, I was impressed by the quality of data and organization, <a href="https://www.rxrx.ai/">this beautiful site</a> alone gave me a lot of motivation. GCP and TPU credits were also indispensable for me, hope to see more in the future. The leaks that were found were handled smoothly by Kaggle, specifically thanks to <a href="https://www.kaggle.com/sohier">Sohier Dane</a>. And credits to <a href="https://www.kaggle.com/giuliasavorgnan">Giulia Savorgnan</a> for reporting the second leak. </p>
<p>I was very fortunate to team up with Yuval on this one. His models scored and maintained top 10 for a very long time, and I was able to contribute meaningfully only much later in the competition. I still can't wrap my head around how it was possible for him to put so much work into this competition, work full-time at a day job, and also prepare for and run the UTMB 171k ultra-marathon race at the end of August. He stopped after 50k due to injury, but in my eyes it is already a super-human level of toughness. </p>
<h1>Setup</h1>
<p>All my models I trained with pytorch and TPUs, and used exactly all 600$ of the free GCP credit that I had. My setup I described <a href="https://towardsdatascience.com/running-pytorch-on-tpu-a-bag-of-tricks-b6d0130bddd4">in this post</a>. Despite being able to get overall good training speeds, the experience using TPUs with pytorch was rough. The main problem was that it hangs unexpectedly after a few hours of training. At times it was so annoying that I questioned myself if I lived this life correctly. Pytorch/XLA guys tried to help on the forum, but at the moment I think Pytorch/XLA is just not quite production ready (but I will still be happy to receive free TPUs for the next competition!).</p>
<h1>The model</h1>
<p>From the beginning I started with resnet50 and didn’t have an opportunity to successfully try anything else. In hindsight, it was a good choice for this competition. Yuval has used other backbones, I believe he will describe his work in a separate post. </p>
<ul>
<li>Resnet50 backbone</li>
<li>ArcFace (m=0.2, s=30, 512 features), used “as is” from the beginning of training, no adjustments. A vanilla implementation <a href="https://arxiv.org/abs/1801.07698">from the original article</a>.</li>
<li>384x384 images cropped from original 512x512 with random shifts, flips and all angles rotations (albumentations package)</li>
<li>Normalization of images per experiment and channel, with small randomization</li>
<li>Not using well, plate and experiment meta info.</li>
<li>3 folds each containing full experiments</li>
<li>Training for about 10 epochs together and then a separate model per cell type.</li>
<li>Each site is treated as a separate sample</li>
<li>Adam optimizer, decreasing learning rate</li>
<li>HUVEC-05 is removed, as it is too different from HUVEC test experiments</li>
<li>HUVEC-18 is moved to the train set</li>
</ul>
<p>Besides this standard structure described above I had 3 special features in the model. I have not done any serious comparisons with and without them, but my feeling is that the first gives me a major boost, and second and third some additional smaller improvements (but a fair comparison vs regular pseudo-labeling is definitely missing).</p>
<ol>
<li><strong>Training with test</strong>. The model is trained on all train, test and control images together, predicting 1139 classes (1108 + 31 controls). For the test images I took their softmax output as a target for the log-loss. In this case it can be shown that the gradient of the features before the softmax is zero, that is, these samples have no effect. It makes sense intuitively, I take the predicted probability distribution as a target, - this is already the perfect prediction according to log-loss. But then I modify this target probabilities vector for each sample according to the rich structural information that we hold, - and this is quite unique for this competition. First, - <a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/102905">the plates leak</a>, all components besides 277 are zeroed out. Second, - each set of 277 samples contains every sirna only once. This second one was not easy to enforce, but as a soft constraint in the loss it worked reasonably well. Additional note here, - validation set data was in the train as well, with hidden sirnas, to simulate similar setting to test. One more thing, - ArcFace didn’t work well here, so it was turned off for test/validation samples. And the last point, - I think this approach in general can be thought of as a dynamic type of pseudo-labelling.</li>
<li><strong>Mean normalization of ArcFace features</strong>, kudos to Yuval who pioneered this idea. I save an EMA of the 512 features before ArcFace for each sample, calculate their mean per experiment and the overall mean (vectors of length 512 again), and on the forward pass I subtract each sample's experiment mean from its features and add the overall mean. This way the features entering ArcFace don't have a per-experiment bias and we improve this side of the domain adaptation problem. Theoretically. (See the sketch after this list.)</li>
<li><strong>Incremental hard pseudo-labelling (PL)</strong>. The PL in point 1 above is a probabilistic PL, in the sense that the target is a probabilities vector of size 277, but not forced to be a specific value, anything is good as long as it is in the set. In this addition to the model at the end of the training I try to push all reasonably confident test and validation samples to lock into one sirna, which becomes its target. I can then zero this sirna for all other samples in the 277-set, which helps them to converge. Once locked, this sample also gets into ArcFace (remember, it is turned off for the test). In practice, judging by validation experiments, first 200 samples out of 277 can be easily locked this way without a single error for all cell types. They were confident anyway, so probably not a huge benefit, but helpful. </li>
</ol>
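<p>As a rough illustration of the feature mean normalization in point 2 above, here is a minimal PyTorch-style sketch (function and variable names are mine, not from the actual code):</p>

```python
import torch

def debias_features(feats, exp_ids, ema_feats, ema_exp_ids):
    # feats: (batch, 512) features entering ArcFace for the current batch
    # ema_feats: (n_samples, 512) running EMA of these features over the whole dataset
    overall_mean = ema_feats.mean(dim=0)                    # (512,)
    out = feats.clone()
    for e in torch.unique(exp_ids):
        exp_mean = ema_feats[ema_exp_ids == e].mean(dim=0)  # per-experiment mean
        mask = exp_ids == e
        out[mask] = feats[mask] - exp_mean + overall_mean   # remove the experiment bias
    return out
```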
<h1>Test-time-augmentation (TTA) and ensemble</h1>
<p>For each sample and each site I ran 16 predefined transformations, aggregated those 16 by quantile 75, and then aggregated by gmean between the sites. I then also aggregated by gmean between the folds, and again by gmean between different runs (both Yuval and I had 2 runs). Guess what aggregation function we used to merge my probabilities with Yuval's? That's right, gmean again. It just worked very well everywhere, despite my trying 7-8 alternatives. We selected the aggregation weights for all of this by CV and public LB, different per cell type.</p>
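<p>A minimal sketch of this aggregation chain (the array layout and the absence of per-cell-type weights are my simplifications):</p>

```python
import numpy as np
from scipy.stats import gmean

# preds: (16 TTA, 2 sites, 3 folds, 2 runs, n_samples, n_classes) softmax outputs
p = np.quantile(preds, 0.75, axis=0)  # aggregate the 16 TTA transforms by quantile 75
p = gmean(p, axis=0)                  # geometric mean over the two sites
p = gmean(p, axis=0)                  # ... over the folds
p = gmean(p, axis=0)                  # ... over the runs
```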
<h1>Mixed integer programming (MIP)</h1>
<p>The output of the above procedure is a set of <code>18*4=72</code> matrices 277 by 277 where each row sums up to 1 (samples) but column sums range from 0.5 to 3 (sirnas). We want columns to sum up to one as well, as we know that each set contains every sirna only once. To achieve this we divided the matrices iteratively by sum of columns and then sum of rows several times (e.g. 10), to force it into desired <a href="https://en.wikipedia.org/wiki/Stochastic_matrix">stochastic matrix structure</a>. This procedure does reliably improve score. It doesn’t sound ideal, but we couldn’t do any better. </p>
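<p>A minimal sketch of this iterative rescaling (essentially Sinkhorn-style normalization; the function name is mine):</p>

```python
import numpy as np

def balance(mat, n_iter=10):
    # mat: 277x277, rows = samples (sum to 1), columns = sirnas (sums range 0.5..3)
    m = mat.copy()
    for _ in range(n_iter):
        m /= m.sum(axis=0, keepdims=True)  # push column sums towards 1
        m /= m.sum(axis=1, keepdims=True)  # push row sums back towards 1
    return m
```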
<p>And then the Hungarian algorithm, to get the solution. The name Hungarian algorithm was introduced to us by <a href="/christopherberner">Christopher Berner</a> in <a href="https://www.kaggle.com/christopherberner/hungarian-algorithm-to-optimize-sirna-prediction">this great kernel</a>. But we solved this matching problem with the pulp package and the CPLEX solver, with a mixed integer programming formulation so short that I can paste it here. CPLEX solves each such matrix in a couple of seconds.</p>
```python
from pulp import LpProblem, LpMaximize, LpVariable, LpInteger, lpSum

prob = LpProblem("Recursion", LpMaximize)
p_vars = LpVariable.dicts("match", (range(L), range(L)), 0, 1, LpInteger)

# objective: maximize the total matching probability
prob += lpSum(lpSum(p_vars[d][i] * mat[d, i] for i in range(L)) for d in range(L))

# constraints: each sample gets exactly one sirna, each sirna is used at most once
for d in range(L):
    prob += lpSum(p_vars[d][i] for i in range(L)) == 1, "OneSelected_%i" % d
for i in range(L):
    prob += lpSum(p_vars[d][i] for d in range(L)) <= 1, "NoDuplication_%i" % i
```
<h1>LB probing (1st place public explained)</h1>
<p>The public set is comprised of experiments HUVEC-17, HEPG2-08, RPE-08, U2OS-04, as can be easily verified and initially <a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/98075">discussed here</a>.</p>
<p>The LB probing idea stems from the observation that the above MIP formulation does not solve the problem with all the available information. Specifically, we know the scores of our own submissions, so we can decrease the feasible set with two constraints per submission:</p>
```python
# subs_sel[s, d, i] marks the sirna i predicted for sample d in past submission s;
# vals[s] is that submission's public LB score
for s in range(S):
    prob += lpSum(lpSum(p_vars[d][i] for i in range(L) if subs_sel[s, d, i])
                  for d in range(D)) / D >= vals[s], "SubLow_%i" % s
    prob += lpSum(lpSum(p_vars[d][i] for i in range(L) if subs_sel[s, d, i])
                  for d in range(D)) / D <= vals[s] + 0.001, "SubHigh_%i" % s
```
<p>Note that here we need to solve for all 16 sets, 4431 public samples, simultaneously, because the constraints use all of them together. The LB probing procedure is simple: we generate a new solution with this formulation; it is feasible in the sense that it could be the actual one given all the constraints. After submitting it we get some non-perfect score, which makes the found solution infeasible, and we start from the beginning again.</p>
<p>This is the same approach that I used in LANL earthquake prediction competition, <a href="https://towardsdatascience.com/how-to-lb-probe-on-kaggle-c0aa21458bfe">this is a post about it</a>. It contains some interesting insights about the approach, and it is the same as in this case, so take a look if you are interested in more details.</p>
<p>Besides the constraints and the feasibility question, there is of course still the objective to maximize the likelihood of the matching, same as in the initial MIP formulation. It helps to look for a solution in the right region, as this combinatoric problem is huge. Without a good objective function that already solves most of the problem by itself (gives high public LB scores), it is hopeless to get anywhere with this approach. For better or worse, the problem here is easy enough for the approach to work. Once we got to an unconstrained score of 0.979 (about 93 errors), the probing gained traction and solved the puzzle in 18 submissions from there. Of course it also built on all 200+ submissions that we did before, both probing and regular ones, but getting a high score by itself is when it really clicked.</p>
<p>Two comments on the LB probing:</p>
<ol>
<li>It would not have been possible to get anywhere with LB probing if not for the plates leak. The leak simplified the problem so much that the approach worked; otherwise it would have been searching an effectively infinite feasible set forever. Therefore I think that the original design of the competition was correct: probing not possible.</li>
<li>We corrected 7 HEPG2, 8 RPE, and 45 U2OS predictions thanks to probing and added them all to the training. Note that this is very few. U2OS-04 is one of the hardest, while U2OS-05 is easy, and it is the only U2OS test experiment, so the benefit for it is small. Overall, my estimate is that it gave us a private LB boost of 0.001 at the absolute maximum. But it was fun!</li>
</ol>
<p>Finally, congratulations to the winners and kudos to all kagglers who participated in the kernels and forums discussions!</p>
<p>And I will drop this random but nice image from my notebook here, to remember what it was all about</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2243477%2F44cc7ebcc0b45cbd684aae0ef7fdb019%2Fhuvec05.png?generation=1569543490620681&alt=media" alt=""></p> |
Generative Dog Images | Public LB 30 Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everybody !
The code for my solution is available here : <a href="https://www.kaggle.com/theoviel/conditional-progan-30-public">https://www.kaggle.com/theoviel/conditional-progan-30-public</a>
I spent some time yesterday cleaning it up and commenting it a bit.</p>
<p>Overall, I took a PyTorch ProGAN implementation from GitHub about 20 days after the start and stuck with it for the rest of the comp. I quickly reached 32 but couldn't get much further. Guess I should've moved to BigGAN, which seemed to perform better :)</p>
<p>I'm definitely going to enjoy reading others' solutions though.
Special thanks to the Kaggle team for the extra effort put into the competition, hopefully reviewing kernels won't be too painful. </p> |
APTOS 2019 Blindness Detection | 83th Place Solution w/ source code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all,</p>
<p>This was my first real attempt at a competition - I'm quite happy with how much I learned, and to have got a silver medal.</p>
<p>I've written up a competition report in the readme for my repo <a href="https://github.com/khornlund/aptos2019-blindness-detection">here</a>.</p>
<p>Thanks to everyone for all the useful discussion and kernels, in particular <a href="/ratthachat">@ratthachat</a> and <a href="/drhabib">@drhabib</a>!</p> |
APTOS 2019 Blindness Detection | Congrats to all! (67th solution with code) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all winners and everyone who finished this competition! It has been an amazing learning experience for me as I am still super new to image classification problems. Special thanks to <a href="/drhabib">@drhabib</a>, from whom I got the inspiration for my approach, and <a href="/ratthachat">@ratthachat</a> for his amazing preprocessing kernels.</p>
<p>So here is my approach, which is quite simple:</p>
<p>The model is EfficientNet B4 in Pytorch with Adam with image size of 256.</p>
<p><strong>Preprocessing:</strong>
Crop all images using the <code>crop_image_from_gray</code> function.
For 2019 validation data only, random center crop (1.1 to 1.3) is applied</p>
<p><strong>Augmentation:</strong>
<code>
RandomResizedCrop((256,256), scale=(0.5, 0.9)),
RandomHorizontalFlip(),
RandomRotation(360),
ColorJitter(contrast=(0.75,2.2))
</code></p>
<p><strong>Steps:</strong>
1. Pretrain on previous competition data (2015) and validated by current training data (2019).
a. Freeze all layers except the last, train for 5 epochs
b. Unfreeze all layers and train for 15 epochs</p>
<ol>
<li>Finetune using 2019 data. CV-5 is used and train for 7 epochs for each fold</li>
<li>Optimised thresholds (credit goes to <a href="/abhishek">@abhishek</a>); a sketch of how these are applied follows this list</li>
<li>For inference, no TTA, and use simple average of the 5 CV models</li>
</ol>
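<p>For illustration, a minimal sketch of the threshold and averaging steps above, assuming a single regression output per image (<code>fold_preds</code> and the threshold values are placeholders):</p>

```python
import numpy as np

coef = [0.5, 1.5, 2.5, 3.5]              # thresholds tuned on out-of-fold predictions
avg_preds = np.mean(fold_preds, axis=0)  # simple average of the 5 CV models' outputs
labels = np.digitize(avg_preds, coef)    # map continuous scores to grades 0..4
```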
<p><strong>Other things I tried but did not work for me:</strong>
- Bigger image size and bigger EfficientNet models
- Ben's preprocessing
- CLAHE
- Circle cropping</p>
<p>Thanks to Kaggle, APTOS and everyone who contributed via Kernels and Discussions. I learned so much from you guys.</p>
<p>Code: <a href="https://github.com/qiletan/aptos2019-blindness-detection">https://github.com/qiletan/aptos2019-blindness-detection</a></p> |
Generative Dog Images | Data Augmentation - LB 47 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Generative Dog Images Competition</h1>
<p>Thank you Kaggle for hosting an exciting competition. It was tons of fun and a great learning experience. Thank you everyone for great discussions and code sharing. </p>
<p>I wish we had more time. There are many more ideas to try. My first model is an improvement on my previously posted LB 100 CGAN. I added more training examples by employing random cropping and added an additional auxiliary classification loss to the discriminator. Furthermore, precompiling the training loop with <code>@tf.function</code> allows twice as many training epochs by doubling the speed.</p>
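<p>For reference, a rough sketch of a <code>@tf.function</code>-compiled GAN training step of this kind (the loss helpers and model call signatures are placeholders, not the exact code):</p>

```python
import tensorflow as tf

@tf.function  # traces the Python step into a static graph, roughly doubling throughput
def train_step(real_images, labels, generator, discriminator, g_opt, d_opt, noise_dim=128):
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator([noise, labels], training=True)
        real_out = discriminator([real_images, labels], training=True)
        fake_out = discriminator([fake_images, labels], training=True)
        g_loss = generator_loss(real_out, fake_out)      # e.g. RaLSGAN
        d_loss = discriminator_loss(real_out, fake_out)  # e.g. GAN + auxiliary term
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return g_loss, d_loss
```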
<h1>Solution 1: TensorFlow AC-RaLS-DCGAN, LB 52</h1>
<p>Kaggle notebook <a href="https://www.kaggle.com/cdeotte/dog-breed-acgan-lb-52">here</a>. Version 16 is my old CGAN scoring LB 100. And version 17+ is my new ACGAN scoring LB 52+.</p>
<h2>Image Preprocess</h2>
<p>The blue rectangles are the provided bounding boxes. I cropped all images to the yellow squares and resized them to 80x80 pixels. I chose a square that captures my best guess of where the dog head is and added padding beyond the bounding box to allow for random cropping during training.
<img src="http://playagricola.com/Kaggle/d12.png" alt="image"></p>
<h2>Data Augmentation, Random Cropping</h2>
<p>Each original image was cropped to a 80x80 square that included 25% extra image beyond bounding box (yellow squares below). This extra room allows us to randomly choose a 64x64 crop within (red squares below) and mostly stay outside the bounding box (blue rectangles).
<img src="http://playagricola.com/Kaggle/d13.png" alt="image"></p>
<h2>Additional Auxiliary Classification Loss</h2>
<p>The Generator learns via the RaLSGAN loss, which compares the single real/fake output unit for real images with that for fake images. The Discriminator learns via the basic GAN loss on that same single output unit, plus we add the binary cross-entropy of the dense-121 output against the class labels. This converts the CGAN into an ACGAN.
<img src="http://playagricola.com/Kaggle/disc4.jpg" alt="image"></p>
<h2>TO DO: Add Spectral Normalization and Batch Norm Modulation</h2>
<p>If I had more time I would add better regularization/normalization to my GAN. Then we could make the conv nets (gen and disc) bigger and train more without exploding. Next I would modulate batch norm with class labels and initial noise seed (like CBN-GAN, StyleGAN, and Self Mod).</p>
<h1>Solution 2: BigGAN, SN-CBN-ResNetGAN, LB 47</h1>
<p>With only 2 days remaining, I didn't have time to improve my GAN further so I searched GitHub for a version of BigGAN! I found a model like BigGAN <a href="https://github.com/pfnet-research/sngan_projection">here</a>. I copied and pasted the code into a Kaggle notebook and connected it to my training data pipeline described above. With no tuning it scored LB 47! BigGAN has the capability to score LB 12 as shown in this Kaggle notebook <a href="https://www.kaggle.com/cdeotte/big-gan-256-lb-15">here</a>. So I believe with more time, I could tune BigGAN to score under LB 30 in a 9 hour Kaggle kernel.</p>
<h2>My BigGAN Dogs</h2>
<p><img src="http://playagricola.com/Kaggle/resgan281419.png" alt="image"></p>
<h2>Update</h2>
<p>After a few experiments, my BigGAN achieves LB 42 with the following changes from the defaults: batch size 32 (from 64); Adam learning rates alpha=0.0003, beta1=0.2, beta2=0.9 (from 0.0002, 0, 0.9); discriminator channels 48 (from 64); generator channels 48 (from 64); training for 40,000 iterations with exponential decay beginning at 32,000.</p>
<h1>References</h1>
<p>The following papers helped me learn about GANs:
* 2014 Basic GAN: <a href="https://arxiv.org/abs/1406.2661">Generative Adversarial Nets</a> by Goodfellow, et al
* 2016 GAN Tricks: <a href="https://arxiv.org/abs/1606.03498">Improved Techniques for Training GANs</a> by Goodfellow, et al
* 2016 GAN Tricks: <a href="https://arxiv.org/abs/1701.00160">NIPS 2016 Tutorial: Generative Adversarial Networks</a> by Goodfellow
* 2015 DCGAN: <a href="https://arxiv.org/abs/1511.06434">Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks</a> by Radford, et al.
* 2016 LS-GAN: <a href="https://arxiv.org/abs/1611.04076">Least Squares Generative Adversarial Networks</a> by Mao, et al.
* 2018 Ra-GAN: <a href="https://arxiv.org/abs/1807.00734">The relativistic discriminator: a key element missing from standard GAN</a> by Jolicoeur-Martineau
* 2017 ProGAN: <a href="https://arxiv.org/abs/1710.10196">Progressive Growing of GANs for Improved Quality, Stability, and Variation</a> by Karras, et al
* 2017 WGAN: <a href="https://arxiv.org/abs/1701.07875">Wasserstein GAN</a> by Arjovsky, et al.
* 2017 WGAN-GP: <a href="https://arxiv.org/abs/1704.00028">Improved Training of Wasserstein GANs</a> by Gulrajani, et al
* 2018 StyleGAN: <a href="https://arxiv.org/abs/1812.04948">A Style-Based Generator Architecture for Generative Adversarial Networks</a> by Karras, et al
* 2014 CGAN: <a href="https://arxiv.org/abs/1411.1784">Conditional Generative Adversarial Nets</a> by Mirza, et al
* 2016 SGAN: <a href="https://arxiv.org/abs/1606.01583">Semi-Supervised Learning with Generative Adversarial Networks</a> by Odena
* 2015 CatGAN: <a href="https://arxiv.org/abs/1511.06390">Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks</a> by Springenberg
* 2016 ACGAN: <a href="https://arxiv.org/abs/1610.09585">Conditional Image Synthesis With Auxiliary Classifier GANs</a> by Odena, et al
* 2018 CBN-GAN: <a href="https://arxiv.org/abs/1802.05637">cGANs with Projection Discriminator</a> by Miyato, et al
* 2018 SN-GAN: <a href="https://arxiv.org/abs/1802.05957">Spectral Normalization for Generative Adversarial Networks</a> by Miyato, et al
* 2018 BigGAN: <a href="https://arxiv.org/abs/1809.11096">Large Scale GAN Training for High Fidelity Natural Image Synthesis</a> by Brock, et al
* 2018 Self Mod: <a href="https://arxiv.org/abs/1810.01365">On Self Modulation for Generative Adversarial Networks</a> by Chen, et al</p>
<h1>Summary</h1>
<p>To build a great GAN, you first build two great convolutional networks (Gen and Disc). Use either ResNet (BigGAN) or VGGNet (DCGAN) architecture. Next choose a loss function. Hinge loss is the current favorite. (Others are GAN, RaLSGAN, WGAN, ACGAN, etc) Then to prevent training from exploding, add normalization/regularization such as weight clipping, gradient penalty (WGAN-GP), batch norm, or spectral normalization (SN-GAN), etc. For best performance add modulation via batch norm using either class labels (CGAN), style labels (StyleGAN), or random noise labels (Self Mod). Next optimize your hyperparameters; alpha, beta1 and beta2 of your Adam optimizer. Choose if the discriminator will train extra epochs versus generator. Lastly choose a dataset, add more data with data augmentation, and start training on a fast GPU !</p> |
APTOS 2019 Blindness Detection | 198th place solution summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was a very illusive competition, but it was for a wonderful cause, and was organized pretty well.</p>
<p>For me, I used EfficientNet-B7 and an ensemble of EfficientNet-B6 and B7. For pre-processing I used Ben Graham's method. I ran out of time after figuring out the right batch size and learning rate for B7. Cross-fold validation didn't predict the LB very well in my case, but that's probably because I didn't find the right fold count (I tried 3 and 5). I tried different architectures (DenseNet(X), ResNet(X)); EfficientNet yielded the best results. Image size 299 worked best; I suspect bigger image sizes would yield even better results (unfortunately, I didn't have the hardware for that). I couldn't rely on quadratic kappa (I think everyone saw the shake-up), so I used the macro F1 score instead to gauge how well my model was doing. </p>
<p>I didn't use focal loss; it didn't give good results. The batch size for B6/B7 was key to moving the score up a little bit. </p>
<p>I didn't use external data for many reasons (I tried, but there were differences of opinion about the classification labels). One thing I really wanted to try was pseudo labeling, but I ran out of time.</p>
<p>I wish to congratulate the winners, really amazing insight. I hope we can have more medical related competitions. </p>
<p>And I wish to thank the whole community for publishing amazing Kernels, and great write ups that I really learn a lot from.</p> |
Generative Dog Images | 3rd place approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My main goal for this competition is to get to know GAN technique. I often encountered the terminology of GAN in my readings, but did not know what GAN is about. This competition is a great opportunity for me to dive in and practice GAN technology. I want to thank Kaggle team for hosting such an innovative competition and the great efforts to ensure fairness of the competition!</p>
<p>I followed this article "Generative Adversarial Networks - The Story So Far"(<a href="https://blog.floydhub.com/gans-story-so-far/">https://blog.floydhub.com/gans-story-so-far/</a>) and tried various GAN techniques including DCGAN, CGAN, SAGAN, ProGAN, conditional ProGAN, StyleGAN and BigGAN. In the end I settled on SAGAN because it got me to a score of 40+ on my first try and seemed more stable than other network structures. I mostly stuck with it and tuned it all the way from the score of 40+ down to the public score of 18. My model implementation is based on this github "self-attention-GAN-pytorch implementaion"(<a href="https://github.com/voletiv/self-attention-GAN-pytorch">https://github.com/voletiv/self-attention-GAN-pytorch</a>)</p>
<p>Here is the list of things I tried in optimizing the model,</p>
<ol>
<li><p>image processing
a) bounding box cropping only;
b) bounding box cropping for images with multiple objects; for images with a single object and box_size / image_size >= 0.75, use the original image instead of the bounding box crop;
c) bounding box cropping plus all original images.</p>
<p>Among the three processing methods, b) got me the best private score; c) got me the best public score of 18 but did not do as well on the private dataset.</p>
<p>I learnt the image processing techniques from the kernels <a href="https://www.kaggle.com/cdeotte/supervised-generative-dog-net">https://www.kaggle.com/cdeotte/supervised-generative-dog-net</a> and <a href="https://www.kaggle.com/jesucristo/introducing-dcgan-dogs-images">https://www.kaggle.com/jesucristo/introducing-dcgan-dogs-images</a>. Special thanks to the kernel authors Chris Deotte and Nanashi!</p></li>
<li><p>losses
I experimented with losses such as "standard-dcgan", "wgan-gp", "lsgan", "lsgan-with-sigmoid", "hinge", and "relative-hinge". In my experience, the best one for this problem across various models is the "standard dcgan" loss, and the second best is the "lsgan-with-sigmoid" loss. </p></li>
<li><p>Attention at different stages
According to the paper "Self-Attention Generative Adversarial Networks"(<a href="https://arxiv.org/pdf/1805.08318v2.pdf">https://arxiv.org/pdf/1805.08318v2.pdf</a>), self-attention at middle-to-high level feature maps achieve better performance because it receives more evidence and more freedom to choose conditions with larger feature maps. My experiences mostly concur with the paper findings. I have attention at 32x32 feature map for the generator, but at 16x16 feature map for the discriminator.</p></li>
<li><p>Training settings
The learning rate for the discriminator is 0.0004, and the learning rate for the generator is 0.0002;
batch size is 32 (I tried 64 but the result was not as stable as with 32);
label smoothing (thanks to the kernel <a href="https://www.kaggle.com/phoenix9032/gan-dogs-starter-24-jul-custom-layers">https://www.kaggle.com/phoenix9032/gan-dogs-starter-24-jul-custom-layers</a> and its author Nirjhar Roy);
learning rate scheduler: CosineAnnealingWarmRestarts;
number of epochs: 170. (See the sketch after this list.)</p></li>
</ol>
<p>5. The best latent space dimension size: 180.</p>
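<p>A minimal sketch of the optimizer and scheduler setup from point 4 above (<code>generator</code>/<code>discriminator</code> are placeholders, and the betas and <code>T_0</code> values are illustrative, not the exact configuration):</p>

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

g_opt = Adam(generator.parameters(), lr=2e-4, betas=(0.0, 0.9))      # generator lr 0.0002
d_opt = Adam(discriminator.parameters(), lr=4e-4, betas=(0.0, 0.9))  # discriminator lr 0.0004
g_sched = CosineAnnealingWarmRestarts(g_opt, T_0=10)
d_sched = CosineAnnealingWarmRestarts(d_opt, T_0=10)

for epoch in range(170):
    # ... one epoch of SAGAN training with batch size 32 and label smoothing ...
    g_sched.step()
    d_sched.step()
```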
<p>My best private score kernel(66.97): <a href="https://www.kaggle.com/lisali/sagan-submit-2?scriptVersionId=18714508">https://www.kaggle.com/lisali/sagan-submit-2?scriptVersionId=18714508</a></p> |
The 3rd YouTube-8M Video Understanding Challenge | 24th place solution - fast simple naive RNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>1. Train RNN on only validation set</h3>
<p>I converted tfrecords into numpy array per <a href="https://www.kaggle.com/tbmoon/how-to-save-tfrecord-into-npy-too-slow">https://www.kaggle.com/tbmoon/how-to-save-tfrecord-into-npy-too-slow</a> to use pytorch.
Then I trained a 3-layer RNN only on the validation set. There are 2 separate GRUs for the frame and audio data respectively; their outputs are concatenated and passed to a fully connected layer with softmax activation. This gave me 0.71 on the public LB. I also tried a context gate, but there was not much difference for this model.</p>
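<p>A rough sketch of this architecture in PyTorch (hidden sizes and other hyperparameters are illustrative):</p>

```python
import torch
import torch.nn as nn

class TwoStreamGRU(nn.Module):
    def __init__(self, n_classes=1000, hidden=512, layers=3):
        super().__init__()
        self.frame_gru = nn.GRU(1024, hidden, num_layers=layers, batch_first=True)
        self.audio_gru = nn.GRU(128, hidden, num_layers=layers, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, frame_feats, audio_feats):
        _, hf = self.frame_gru(frame_feats)     # hf: (layers, batch, hidden)
        _, ha = self.audio_gru(audio_feats)
        x = torch.cat([hf[-1], ha[-1]], dim=1)  # concatenate last hidden states
        return self.fc(x)                       # softmax is applied in the loss / at inference
```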
<h3>2. Train RNN on train set and fine tune on validation set</h3>
<p>For the training set, I selected only samples with 1 label. To accelerate training, for each video I randomly chose 15 contiguous frames. I trained the RNN model on the train set for about 4 hours, then fine-tuned on the validation set. This fine-tuned single 3-layer RNN model gave me LB 0.746. Another 4-layer RNN gave me LB 0.735. <br>
The total training time for each model was less than 6 hours on a single P100 GPU.
My final submission is the average ensemble of the 2 models, which scored 0.753 on the public LB and 0.744 on the private LB.</p>
<h3>3. Submission</h3>
<p>I predicted every 5-second segment of the test set and used the top 100,000 scores for each class for the submission. The prediction takes 30 minutes on a single GPU.</p>
<h3>Transformer / multi-task experiments</h3>
<p>I replaced the RNN model with a 3-layer or 6-layer Transformer, which did not improve the score. Then I experimented with multi-task learning on the training set and validation set like this paper (<a href="https://arxiv.org/abs/1901.11504">https://arxiv.org/abs/1901.11504</a>); it was not helpful either.</p>
<p>I was also planning to experiment with sequence-to-sequence learning on the validation set, but was not able to complete the code before the competition ended.</p>
APTOS 2019 Blindness Detection | 163rd place report and code repository | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This competition was very special for me: it's my first competition medal. I worked a lot and learned a lot; thanks to everyone who shared valuable information.</p>
<p>A brief report (I won't get into too many details because wasn't anything special)</p>
<ul>
<li><strong>Data:</strong>
<ul><li>Old data was sampled to a more balanced distribution, was used only 25% of samples from class 0.</li>
<li>Each fold had all sampled old data and 20% of new data.</li>
<li>Training set: All sampled old data and 80% of new data (15770 samples from old data and 2929 samples from new data).</li>
<li>Validation set: 20% of new data (733 samples).</li></ul></li>
<li><strong>Model:</strong> EfficientNetB5
<ul><li>Imagenet pre-trained weights.</li>
<li>Replaced the top head with GlobalAveragePooling2D and a Dense layer as linear output (see the sketch after this list).</li></ul></li>
<li><strong>Training</strong>
<ul><li>Parameters:
<ul><li>Batch size: 32</li></ul></li>
<li>Warm-up step:
<ul><li>Freeze all layers except the last 2.</li>
<li>Learning rate: 4e-3, Adam optimizer and Cosine & Warm-up learning rate scheduler (2 epochs warm-up and hold base LR for 2 epochs).</li>
<li>Train for 5 epochs.</li></ul></li>
<li>Fine-tunning step:
<ul><li>Unfreeze all layers.</li>
<li>Early stopping monitoring validation loss for 5 epochs.</li>
<li>Learning rate: 4e-4, Adam optimizer and Cosine & Warm-up learning rate scheduler (5 epochs warm-up and hold base LR for 3 epochs).</li>
<li>Train for 20 epochs.</li></ul></li></ul></li>
<li><strong>Preprocess:</strong> This step was applied before modeling to make training faster.
<ul><li>Convert image to RGB channels.</li>
<li>Circle crop image. This crop was used to remove black areas from original square images, and also crop a circular area based on the image center.</li>
<li>Resize image to 224x224. The original images size were reduced to 224, I have tried many others but this worked best, also made possible to train with larger batch size (32).</li></ul></li>
<li><strong>Augmentation:</strong> All augmentations were from Keras ImageDataGenerator.
<ul><li>Rescale (divide by 255).</li>
<li>Random rotation 360°.</li>
<li>Random flip (horizontal and vertical).</li></ul></li>
<li><strong>Inference:</strong>
<ul><li>Test time augmentation (TTA) x10, using the same augmentations of the training step.</li>
<li>Averaged all the 5 models and their TTA predictions.</li>
<li>Threshold: [0.5, 1.5, 2.5, 3.5]</li></ul></li>
</ul>
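<p>For reference, a minimal sketch of the model head described in the Model item above (the <code>efficientnet</code> import path depends on the third-party package used and may differ):</p>

```python
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from efficientnet.tfkeras import EfficientNetB5  # third-party package; import path may differ

base = EfficientNetB5(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = GlobalAveragePooling2D()(base.output)
out = Dense(1, activation='linear')(x)  # single linear output, later cut by the thresholds
model = Model(inputs=base.input, outputs=out)
```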
<p>I usually take some extra effort and create <a href="https://github.com/dimitreOliveira/APTOS2019BlindnessDetection">repositories</a> for my competitions, there you can find a more <a href="https://github.com/dimitreOliveira/APTOS2019BlindnessDetection/tree/master/Best%20solution%20%28Bronze%20medal%20-%20175th%20place%29%20">detailed report and my solution stack</a>.</p>
<p>For me, it was a lot more work to put everything together in an organized way, but I think it was worth it. It would also be nice to see something similar for some of the top submissions; for beginners, seeing the way more experienced people structure and build their workflow is really important.</p>
APTOS 2019 Blindness Detection | 8th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all I want to thank everyone who participated, shared kernels and was part of the active discussions. </p>
<h2>PROBLEM:</h2>
<p>As you have already heard and read, many top team solutions used a very simple 2-STEP approach; in fact, this was heavily discussed here
(<a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/100815">https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/100815</a>) and I also partially mentioned it in my kernel (<a href="https://www.kaggle.com/drhabib/starter-kernel-for-0-79">https://www.kaggle.com/drhabib/starter-kernel-for-0-79</a>):
<code>
STEP 1: pretrain on 2015 data
STEP 2: finetune on 2019 data
</code>
This approach is good. But when I personally started experimenting, I was extremely worried that our local CV was around 0.931 while the LB was 0.811. This was a huge gap and we were afraid of a shake-up. Also, just pretraining on old data and using 2019 as validation resulted in LB 0.75, which indicated that our model was not generalizing well. </p>
<h2>SOLUTION:</h2>
<p><code>Our idea was very simple: instead of training in two different stages, we would just combine all the data from 2019 and 2015 and work from there to improve our generalization.</code></p>
<h3>PREPROCESSING.</h3>
<p><code>ISSUE 1:</code>
Remove black background.</p>
<p>For image preprocessing I have used the script and info from this awesome kernel <a href="https://www.kaggle.com/ratthachat/aptos-updatedv14-preprocessing-ben-s-cropping">https://www.kaggle.com/ratthachat/aptos-updatedv14-preprocessing-ben-s-cropping</a>
Thank you <a href="/ratthachat">@ratthachat</a> </p>
<p>The script basically removes all the black background. </p>
```python
import cv2
import numpy as np

def resize_to(img, targ_sz: int, use_min: bool = False):
    h, w = img.shape[:2]
    min_sz = (min if use_min else max)(w, h)
    ratio = targ_sz / min_sz
    return int(w * ratio), int(h * ratio)

def crop_image_from_gray(img, tol=7):
    if img.ndim == 2:
        mask = img > tol
        return img[np.ix_(mask.any(1), mask.any(0))]
    elif img.ndim == 3:
        gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        mask = gray_img > tol

        check_shape = img[:, :, 0][np.ix_(mask.any(1), mask.any(0))].shape[0]
        if check_shape == 0:  # image is too dark so that we crop out everything,
            return img        # return original image
        else:
            img1 = img[:, :, 0][np.ix_(mask.any(1), mask.any(0))]
            img2 = img[:, :, 1][np.ix_(mask.any(1), mask.any(0))]
            img3 = img[:, :, 2][np.ix_(mask.any(1), mask.any(0))]
            img = np.stack([img1, img2, img3], axis=-1)
            return img
```
```python
import PIL.Image

def load_ben_color(path, size):
    image = cv2.imread(str(path))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = crop_image_from_gray(image)
    target_size = resize_to(image, size, use_min=True)
    image = cv2.resize(image, target_size)
    return PIL.Image.fromarray(image)
```

<p>The only thing I modified is that I added the <code>resize_to</code> function to preserve the original aspect ratio so the images look natural. (My function is called <code>load_ben_color</code>, but it has nothing to do with Ben's preprocessing; I was just too lazy to change the name.)</p>
<p><code>ISSUE 2:</code>
Removing confusing labels and duplications. </p>
<p>Since we were combining 2015 and 2019 into one dataset, it was really important to remove all the duplicates and confusing labels to prevent any leak. We used this awesome kernel to do the job (<a href="https://www.kaggle.com/h4211819/more-information-about-duplicate">https://www.kaggle.com/h4211819/more-information-about-duplicate</a>). Thank you <a href="/h4211819">@h4211819</a></p>
<p><code>ISSUE 3.</code></p>
<p>Solving the zoom problem.
After all images were preprocessed to remove the black background, we observed (as was also heavily discussed on the forums) that the 2019 data looked zoomed in toward the center (example below).
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F991320%2F6ce63736925aa4fde27487609d10edce%2FScreen%20Shot%202019-09-08%20at%2010.00.22%20AM.png?generation=1567954302210617&alt=media" alt=""></p>
<p>Since we did not know what the private data looked like, we decided this problem was best solved with augmentation (random zooms toward the center); this way we made sure we were not overfitting and were generalizing better. </p>
<h3>AUGMENTATION:</h3>
<p>This was a very important part of our solution (I posted the experiments I performed with augmentations here: <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/100815">https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/100815</a>)</p>
<p>Below is our augmentation: </p>
<p><code>Rotation</code> up to 360°, <code>flips</code>, <code>zoom</code> up to 1.35x, <code>lighting</code>. This is how it looked:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F991320%2F4405468830b942fdbd0988b087038c7a%2FScreen%20Shot%202019-09-08%20at%2011.02.17%20AM.png?generation=1567954956948361&alt=media" alt=""></p>
<h3>TRAINING:</h3>
<p>All the training was done using the one-cycle learning policy (fastai) with gradually increasing image size: we start with low-resolution images and progressively move to higher resolutions, reusing the weights. You can find the details in my GitHub notebooks <a href="https://github.com/DrHB/APTOS-2019-GOLD-MEDAL-SOLUTION">here</a> (I will keep updating them).</p>
<p><code>Model 1</code>:</p>
<p>Results:</p>
<p><code>
Model: EfficientNet-B5
Loss : MSE
CV Strategy : 5 FOLD StratifiedKFold
</code>
After combining both datasets (2015 + 2019) and applying the above-mentioned tricks, we immediately observed that our local CV matched the public scores very well.</p>
```
EXP 740:
5 FOLD CV: 0.823
LB: 0.821
PB: 0.926
```
<p><code>Model 2</code>:</p>
<p>Results:</p>
```
Model: EfficientNet-B4
Loss : MSE
CV Strategy : 5 FOLD StratifiedKFold
```
<p>Data is the same as for Model 1.</p>
```
EXP 765:
5 FOLD CV: 0.831
LB: 0.816
PB: 0.927
```
<p>This all looked promising: our local CV was stable and had a good correlation with the LB score. But we still wanted to push further, and the last thing I decided to try was pseudo labeling:</p>
<p>I took the average prediction for the 2019 test data from <code>EXP_740</code> and <code>EXP_765</code>, added the pseudo-labeled data to the train data, and retrained <code>EXP_740</code> and <code>EXP_765</code> for 3 epochs. A rough sketch of this step and the results follow below.</p>
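<p>A minimal sketch of building the pseudo-labeled training set (file, column, and variable names are hypothetical):</p>

```python
import pandas as pd

p1 = pd.read_csv('exp_740_test_preds.csv')  # per-image regression outputs on the 2019 test set
p2 = pd.read_csv('exp_765_test_preds.csv')

pseudo = p1[['id_code']].copy()
pseudo['diagnosis'] = (p1['pred'] + p2['pred']) / 2  # average of the two models
train_plus = pd.concat([train_df[['id_code', 'diagnosis']], pseudo], ignore_index=True)
```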
```
EXP_740_PSD:
5 FOLD CV: 0.831
LB: 0.93
PB: 0.930

EXP_765_PSD:
5 FOLD CV: 0.83
LB: 0.835
PB: 0.929
```
<p>The combined average of these two models gives <code>0.931</code> on PB and <code>0.830</code> on LB =)</p>
<h3>CONCLUSION:</h3>
<p>This was a very fun competition. As you can see, we did not use many tricks, just systematic, logical thinking, and we trusted our local CV =) </p>
APTOS 2019 Blindness Detection | Congrats to all !! 108th place solution | first medal !! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to the winners and all the participants !!
I really want to thank <a href="/drhabib">@drhabib</a> and <a href="/ratthachat">@ratthachat</a> and everyone else who shared their insights and work in notebooks and discussion forums, it helped me a lot.
Well, my solution is pretty simple; the two things that really worked for me were:
1. Test Time Augmentation
2. Ensembling 5 models</p>
<strong>Overview</strong>
<ul>
<li>Ensemble 5 models - all EfficientNets B3 with image size 300.</li>
<li>a lot of augmentation.</li>
<li>soft TTA.</li>
<li>Preprocessing - only cropping the black borders.</li>
<li>using Keras.</li>
</ul>
<strong>Score</strong>
<ul>
<li>The model that gave me best results scored 0.810 on the public LB and 0.921 on the private LB.</li>
<li>Best performing model on Public lb (0.814) gave 0.917 on the Private LB.</li>
</ul>
<strong>Training</strong> -
<p>I used EfficientNet B3 with image size 300, pre-trained on the old competition data and then fine-tuned on this competition's data. I used a lot of augmentations: zoom_range (0.25-0.35), horizontal and vertical flips, and rotation (360).
TTA - I used a very soft TTA: getting predictions on the original test images, then randomly rotating them by 1-6 degrees and predicting again, and then rotating the original images by 7-12 degrees and predicting once more. I did the same for all 5 models and took the mode of the predictions for each type of TTA across the models.
This gave me three lists of predictions, one for each TTA, and I then took the mode of those three lists.
The models used for ensembling with TTA scored 0.784, 0.790, 0.794, 0.797, and another 0.797 on the public LB.</p>
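<p>A minimal sketch of this mode-of-modes aggregation (the <code>preds</code> structure is an assumption):</p>

```python
import numpy as np
from scipy import stats

# preds[t][m]: integer class predictions (0-4) of model m under TTA variant t, shape (n_images,)
per_tta = []
for t in range(3):                                          # 3 soft-TTA variants
    stacked = np.stack([preds[t][m] for m in range(5)])     # 5 models
    per_tta.append(stats.mode(stacked, axis=0)[0].ravel())  # mode across models

final = stats.mode(np.stack(per_tta), axis=0)[0].ravel()    # mode across the TTA variants
```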
<p>TTA gave me a boost of about 0.02 as these above models scored around 0.77-0.78 on public lb.
Ensembling gave me a huge boost.</p>
<p>Thanks to Kaggle and APTOS for this wonderful competition, it taught me a lot.</p> |
APTOS 2019 Blindness Detection | 130th solution - 92.0 private LB | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winners and thanks for a great competition. This is my first medal, so I decided to write up our approach.</p>
<p><strong>Main problems to solve:</strong>
- imbalanced classes
- not enough data
- different distributions of the train and public test data sets</p>
<p><strong>Data augmentation:</strong>
I think that one of the best ways to improve the score on the public set was strong augmentation. To augment, we differentiated between two types of images: first, where the image contains the whole eyeball, and second, where the image contains a cropped eyeball. In the first case we cropped both rows and columns, and in the second case we cropped only rows. Cropping was done randomly from 0 to 25% of the size of the eyeball on each side. After that we rotated the image by 0 to 360 degrees; to avoid cropping during rotation we used the function from [<a href="https://stackoverflow.com/questions/11764575/python-2-7-3-opencv-2-4-after-rotation-window-doesnt-fit-image/33247373#33247373">1</a>]. I believe this also helped to train the model at different scales. We also used some pixel-wise augmentations, like addition, multiplication and so on. Then we applied the same processing as Ben's. During validation we got rid of the black area in the image. The picture below shows: original data, training data, and validation/test.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1073879%2Fed5e641a3017f356fbaf909a5a9482d2%2Fimage.png?generation=1568107876432985&alt=media" alt=""></p>
<p><strong>Model</strong>
Basically we used the same augmentation and settings to pretrain the model on the data from 2015. We trained the model with class weights computed using sklearn.utils.class_weight. As a loss function we used a slightly modified categorical_crossentropy that penalizes more distant predictions (from [<a href="https://github.com/JHart96/keras_ordinal_categorical_crossentropy/blob/master/ordinal_categorical_crossentropy.py">2</a>]), and this was also our checkpoint metric. I also tried using accuracy, F1, or kappa as the checkpoint metric, but they scored worse on the public data set. We used the Adam optimizer with a starting learning rate of 8e-5, and the learning rate was then scheduled as follows. </p>
```python
def scheduler(epoch, learning_rate):
    lr = learning_rate
    if epoch == 3:
        lr = 3e-5
    if epoch == 6:
        lr = 1e-5
    if epoch == 9:
        lr = 8e-6
    if epoch == 12:
        lr = 4e-6
    return lr
```
<p>Validation was done on 10% of the data. For a single model the validation kappa score was about 0.87 (without TTA). </p>
<p><strong>Prediction Time</strong>
During prediction we simply cropped the image to contain only the eyeball and used 7 TTA, each of which was only a rotation (without cropping) by an additional 45 degrees (0, 45, 90, and so on). Doing that improved the score for EfficientNet-B5 from 80.4 (without TTA) to 81.8 on the public test set. Some people wrote that TTA is useless, but I think it really depends on how you trained your network and how the TTA itself is done.</p>
<p><strong>Ensembling</strong>
As a final submission we used soft-voting on 7 TTA on EfficientNet-B5 (image size 456) and 7 TTA on EfficientNet-B4 (image size 380). It scored 81.6 on public set and 92.0 on private set.</p>
<p>We believed that our solution was stable (after the previous competition I tried to avoid the fallacy of relying only on the public score), because improvements on the validation fold also scored better on the public LB, slight changes in settings did not change the public LB score, and we used strong augmentation. Any comments on what we could have done differently are welcome. Thank you, and I'm starting to dive into the fascinating reading of the top solutions.</p>
<p>[1] <a href="https://stackoverflow.com/questions/11764575/python-2-7-3-opencv-2-4-after-rotation-window-doesnt-fit-image/33247373#33247373">https://stackoverflow.com/questions/11764575/python-2-7-3-opencv-2-4-after-rotation-window-doesnt-fit-image/33247373#33247373</a>
[2]
<a href="https://github.com/JHart96/keras_ordinal_categorical_crossentropy/blob/master/ordinal_categorical_crossentropy.py">https://github.com/JHart96/keras_ordinal_categorical_crossentropy/blob/master/ordinal_categorical_crossentropy.py</a></p> |
APTOS 2019 Blindness Detection | 4th place solution (2nd after LB cleaning) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats everyone on the excellent results and on the end of this exciting competition!</p>
<p><strong>Data</strong>
We used full-size images from the 2015 train dataset to pretrain our models; models trained on the old data alone gave us ~0.73-0.75 on the public LB. We also used the current competition train set and the old competition test set.</p>
<p><strong>Models</strong>
Overall we tried different architectures including se-resnext50, se-resnext101, densenet, but they were slow to train and performed poorly. The only NN architecture that performed well was efficient-net, so our final solution contains multiple b3, b4, and b5 networks.
B3 image size: 300
B4 image size: 460
B5 image size: 456</p>
<p><strong>Preprocessing</strong>
I cropped the black background from the images and then resized them to the desired image size before training my models. <a href="/aispiriants">@aispiriants</a> didn't perform any preprocessing except resizing. I also tried Ben's color preprocessing, but it didn't boost the score, so I stopped trying it in the middle of the competition.</p>
<p><strong>Augmentations</strong>
We used a lot of augmentations, at least more than I ever used before :)
All from the wonderful albumentations library: Blur, Flip, RandomBrightnessContrast, ShiftScaleRotate, ElasticTransform, Transpose, GridDistortion, HueSaturationValue, CLAHE, CoarseDropout.</p>
<p><strong>Training</strong>
Overall training/submit flow is pretty simple:
Pretrain on old data (~80 epochs on b3, ~15 epochs on b5)
Train on current competition data (~50 epochs on b3, ~15 epochs on b5)
Pick top-performing epochs from the training process and blend them by a simple mean. We also used flips as TTA.</p>
<p><strong>Secret ingredient :)</strong>
We pseudo-labeled the current test data and then fine-tuned the networks again using train + pseudo-labeled test. It gave us a huge boost and a lot of motivation, so we repeated the pseudo-labeling round once more, which boosted the score a little bit more. We also pseudo-labeled the previous competition's test data to increase our training dataset size and fine-tuned some models on it too.</p>
APTOS 2019 Blindness Detection | Congratulations to all ! 54th Solution. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all !
Thanks to my great teammates. (friends in real world)</p>
<ol>
<li>Network : EfficientNetB4& EfficientNetB5 (Both Classification and regression.)</li>
<li>Image size: B4: (256x256), B5: (328x328)</li>
<li>Image Preprocessing: Cropping images Only.
(Code from <a href="https://www.kaggle.com/benjaminwarner/starter-code-resized-15-19-blindness-images">https://www.kaggle.com/benjaminwarner/starter-code-resized-15-19-blindness-images</a>) </li>
<li>Data Augment:
<code>train_transform = transforms.Compose([
transforms.ColorJitter(brightness=0.45, contrast=0.45),
transforms.RandomAffine(degrees=360, scale=(1.0, 1.3)),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])</code></li>
<li>Optimizer : Adam</li>
<li>Loss : Classification: Cross entropy Loss
Regression:MSE</li>
<li>Learning Rate: Start from 0.001</li>
<li>Learning Rate Scheduler:StepLR(B4-> StepSize:5, B5-> StepSize:10), gamma=0.1</li>
<li><p>TTA: TTA 10 times by following code
<code>test_transform = transforms.Compose([
transforms.RandomAffine(degrees=360),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])</code></p></li>
<li><p>Score:
-> Public LB: <br>
B4 Classification: 0.821 <br>
B5 Classification: 0.825 <br>
B4 Regression: 0.823 <br>
B5 Regression: 0.825 <br>
-> Private LB: <br>
B4 Classification: 0.919 <br>
B5 Classification: 0.921 <br>
B4 Regression: 0.917 <br>
B5 Regression: 0.921</p></li>
<li><p>Final Submission:
Combining B4 & B5 regression output together. (Public LB:0.835, Private 0.925)</p></li>
</ol> |
APTOS 2019 Blindness Detection | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the prize winners and everyone who finished in the gold zone! And I really want to say thank you to my great teammates <a href="/mhiro2">@mhiro2</a> and <a href="/yasufuminakama">@yasufuminakama</a>; I'm pretty confident that I couldn't have gotten a gold medal alone...</p>
<p>This is my first time writing a solution, but I'll try to lay out what we did as much as possible. </p>
<h2>OVERVIEW</h2>
<ul>
<li>ensemble 4 models, 2 SeResNext101 and 2 EfficientNet B5</li>
<li>2 kinds of pseudo labeling</li>
<li>use old competition data</li>
</ul>
<h2>CHALLENGE</h2>
<ul>
<li>severe leakage with image size</li>
<li>difference between train / test distribution</li>
<li>unstable metric, QWK</li>
</ul>
<h2>DETAIL</h2>
<h3>preprocess</h3>
<p>The most important part of this competition was preprocessing. As everyone realized, this competition had a severe leakage related to image size and shape. At first I tried many kinds of preprocessing (e.g. zoom, circle crop, corner cut, etc.) to eliminate
this leakage, but almost all of them failed. Whenever I believed the train and test images looked very similar, the LB score dropped significantly. Only rotation worked well for me, but it was very unstable (e.g. I used 120 degrees; if I changed it to 90 or 180, the LB score dropped significantly).
The most interesting image shape was (480, 640), which made up over 70% of the test set. That led me to the following hypothesis:
after the admins made the dataset, they realized there was a strong correlation between image size/shape and the diagnosis score. They wanted to prevent kagglers from submitting LGBM models that use only image meta features, but they also wanted models that can actually detect retinopathy, so they decided to crop only some part of the public test set....
That means the image size and shape information was still useful, so I stopped focusing on preprocessing. I'm now confident this thought was somehow right after checking the private score, because it is incredibly higher than in the previous competition. And you can see evidence of this in <a href="https://www.kaggle.com/taindow/be-careful-what-you-train-on">this great kernel</a>'s private score.
What we used is below. I believe the difference between our preprocessing pipelines might have diversified our models.</p>
```python
# for SeResNext
Compose([
    BensCrop(img_size),  # from 2015 solution
    RandomHorizontalFlip(),
    RandomVerticalFlip(),
    RandomRotation((-120, 120)),
])

# for EfficientNet B5
Compose([
    Resize(img_size),
    HorizontalFlip(),
    VerticalFlip(),
    Rotate(),
    RandomContrast(0.5),
    IAAAdditiveGaussianNoise(p=0.25),
])
```
<h3>dataset</h3>
<p>The 2nd important part of our result was how to split and select the dataset. I tried many combinations, and finally found that the setting below works well for me.</p>
<ol>
<li>Split the current train dataset into 5 folds with stratified k-fold.</li>
<li>Train on 4/5 of the train set plus the previous competition's train set.</li>
<li>Validate on the remaining 1/5 of the train set.</li>
</ol>
<p>When I pre-trained on the 2015 dataset and fine-tuned on the 2019 dataset, my single model (SeResNext) scored LB 0.811, but after switching to the method above, the score was boosted to LB 0.827.</p>
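<p>A minimal sketch of that split (file names and column names are placeholders for the 2019 and 2015 label tables):</p>

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

df_2019 = pd.read_csv("train_2019.csv")   # current competition labels (id_code, diagnosis)
df_2015 = pd.read_csv("train_2015.csv")   # previous competition labels in the same format

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
folds = list(skf.split(df_2019, df_2019["diagnosis"]))

train_idx, valid_idx = folds[0]           # one of the 5 folds

# train on 4/5 of the 2019 data plus all of the 2015 data,
# validate only on the held-out 1/5 of the 2019 data
train_df = pd.concat([df_2019.iloc[train_idx], df_2015], ignore_index=True)
valid_df = df_2019.iloc[valid_idx]
```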
<h3>model and training method</h3>
<p>We used two kinds of models, SeResNext101 and EfficientNet B5. Each model used a different training method.</p>
<p>For SeResNext101
- image size:320
- loss:RMSE
- train on 2015 dataset + 4/5folds of 2019 dataset (validate with 1/5 of 2019 dataset) </p>
<p>For EfficientNet B5
- image size:256 -> 320
- loss:MSE
- 2 step training
1. pre-train on 4/5folds of 2015 dataset (validate with 2019 dataset) (img_size=256)
2. fine-tune on 1/5fold of 2015 dataset + 4/5folds of 2019 dataset (validate with 1/5fold)(img_size=320)</p>
<p>Before starting ensemble, SeResNext scored LB0.827 and EfficientNet scored LB0.830.</p>
<h3>pseudo labeling and ensemble</h3>
<p>We used 2 kinds of pseudo labeling:
1. make soft labels for the test set and re-train from scratch (ImageNet pre-trained weights);
2. make hard labels for the test set and fine-tune the already-trained model.
Both methods boosted the LB score a lot, so we decided to use both of them to diversify our models. Sometimes we also used external data (Messidor and IEEE) to give our models some fresh information, but I'm not sure whether it was necessary.
By combining pseudo labeling and ensembling, both of our single models reached LB 0.846. Below is the result of what we did. (I know it's not beautiful, but we had no time to go back and start from the 1st-stage models...)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1554318%2F741b1967136cf06b94afaae75fde562f%2F2019-09-07%2011.29.02.png?generation=1567913393476257&alt=media" alt=""></p>
<p>(I deleted the row which didn't contribute to our final model)</p>
<p>Finally we submitted the 2 models below:
- No.2+No.5+No.18+No.19 (public 0.930)
- No.16 (public 0.930)</p>
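<p>For reference, the two pseudo-labeling variants described above can be sketched as follows (a rough illustration only; the column names, the 0-4 clipping and the toy inputs are mine, and in practice the test predictions came from the ensemble):</p>

```python
import numpy as np
import pandas as pd

# toy placeholders for the test ids, a model's regression outputs, and the labeled train set
test_df = pd.DataFrame({"id_code": ["a", "b", "c"]})
test_preds = np.array([0.3, 2.6, 3.9])
train_df = pd.DataFrame({"id_code": ["x", "y"], "diagnosis": [0, 2]})

# (1) soft labels: keep the raw regression value, then re-train from ImageNet weights
soft_pseudo = test_df.assign(diagnosis=np.clip(test_preds, 0, 4))

# (2) hard labels: round to the nearest grade, then fine-tune the already-trained model
hard_pseudo = test_df.assign(diagnosis=np.clip(np.round(test_preds), 0, 4).astype(int))

# either variant is concatenated with the labeled data for the next training round
train_plus_soft = pd.concat([train_df, soft_pseudo], ignore_index=True)
train_plus_hard = pd.concat([train_df, hard_pseudo], ignore_index=True)
```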
<h3>UPDATED</h3>
<p>Here are my single model's scores, which show how effective pseudo labeling was on the private dataset.
model: SeResNext101
- before applied pseudo labeling: <strong>private LB 0.922</strong>
- pseudo labeling with only SeResNext101's labels: <strong>private LB 0.925</strong>
- pseudo labeling with both SeResNext101 and EfficientNet's labels: <strong>private LB 0.931</strong>
Interestingly, the last model is better than our final submission and would have been enough to win a gold medal :)</p>
<p>Thank you for reading, please feel free to ask me if you have a question!</p> |
APTOS 2019 Blindness Detection | 187th Place Approach + Some Insights | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Nothing to fancy for us, but I thought I'd share our approach.</p>
<p>For our phase 2 kernels, we selected 2 different ensembles of 2 EfficientNets: B5 (image size 456) + B4 (image size 328), with pretty standard augmentations and some TTA.
We also experimented with a few values for the GaussianBlur and ended up going with a value of 20 (empirically got slightly better results).</p>
<p>We actually decided to only use the current competition data (no 2015 data). <strong>I wonder if there are any top solutions who did the same?</strong>
Since we anticipated there would be a difference between the public and private test data, we also decided not to optimize the QWK coefficients, to avoid the risk of overfitting to the public data.
(we used coef = [0.5, 1.5, 2.5, 3.5])</p>
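<p>Applying those fixed cut-offs to regression outputs is a one-liner with <code>np.digitize</code> (a sketch, not the team's exact code):</p>

```python
import numpy as np

coef = [0.5, 1.5, 2.5, 3.5]                        # fixed thresholds, not tuned on the public LB
raw_preds = np.array([0.1, 0.7, 1.9, 2.4, 3.8])    # example regression outputs

grades = np.digitize(raw_preds, coef)              # -> array([0, 1, 2, 2, 4])
```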
<p>We relied on local validation QWK combined with validation loss and early stopping. We tried to select models that got low validation loss combined with a relatively low difference between the train and validation loss. This proved to be a good tactic as we jumped almost 100 places between the public and private LB.</p>
<p><strong>What Didn't Work</strong>
We tried using Green CLAHE preprocessing but got no apparent improvement. We ended up not using it for any of our ensemble models. <strong>Did anyone have any luck with it?</strong></p>
<p>This was our first Computer Vision contest so we are happy with the results. Thanks for everyone involved with the discussions and awesome kernels! we learned a lot!</p> |
SIIM-ACR Pneumothorax Segmentation | 31st place solution - single fold 512x512 UNet EfficientNetB4 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all of the winners and participants.</p>
<p>Here is my solution:</p>
<p><strong>Preprocessing</strong>
Histogram equalization (CLAHE) with kernel size 32x32 on original image size 1024x1024 - (this kernel size gives more details than bigger sizes).</p>
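<p>A minimal version of that step with OpenCV (the clip limit is an assumed value; a 32x32 tile grid on a 1024x1024 image corresponds to 32x32-pixel tiles):</p>

```python
import cv2
import numpy as np

# placeholder for a loaded 1024x1024 grayscale chest X-ray
img = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(32, 32))
img_eq = clahe.apply(img)   # equalized image, later resized to 512x512 for training
```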
<p><strong>Model training</strong>
- UNet with EfficientNetB4 encoder
- Single stratified fold. Train - 83.34%, validation 16.66%.
- Adam
- BCE + Dice loss
- 100 epochs
- non-aggressive augmentations with HFlip and small brightness changes (to keep the histogram equalization effective)
- Input image size 512x512, Batch size: 4
- Gradient accumulation - 4 batches
- LR: Cosine annealing - 4 cycles, from 1e-3 to 1e-6
- Stochastic Weight Averaging - 4 epochs on the lowest LR</p>
<p><strong>Prediction</strong>
- HFlip TTA.
- Two-step thresholding:
 1. Find the threshold that fits the classification task (reducing false positives); each time it was somewhere between 0.95-0.99.
 2. Find the threshold that fits the segmentation task - usually between 0.01-0.50</p>
<p>So the high-value threshold is first applied to the predicted sigmoid map (after HFlip TTA). All masks with less than 2% detected pixels are marked as empty. If there are more than 2% detected pixels, I apply the 2nd, lower threshold to produce the final mask.</p>
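<p>A sketch of that two-step rule on a single averaged sigmoid map (the threshold values here are just examples from the stated ranges):</p>

```python
import numpy as np

def postprocess(prob, cls_thresh=0.97, seg_thresh=0.3, min_fraction=0.02):
    """prob: HxW sigmoid map after HFlip TTA averaging."""
    # step 1: the high threshold acts as a classifier to suppress false positives
    if (prob > cls_thresh).mean() < min_fraction:
        return np.zeros_like(prob, dtype=np.uint8)      # predict "no pneumothorax"
    # step 2: the lower threshold produces the final mask
    return (prob > seg_thresh).astype(np.uint8)

mask = postprocess(np.random.rand(1024, 1024))
```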
The 3rd YouTube-8M Video Understanding Challenge | 7th Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>20191031 Update</h2>
<p>The cleaned codebase has been published at <a href="https://github.com/ceshine/yt8m-2019/">ceshine/yt8m-2019</a>. The models were reproduced with minor variation (still able to obtain 7th place).</p>
<h1>The Original Content</h1>
<p>This is going to be my first gold medal after more than 6 years of Kaggling. I'm very grateful for this great community on Kaggle, and all the friendship and support from some of the best people I've met online. Thank you to the sponsor and organizers for this very interesting competition. And thanks for the $300 GCP credit. It would have been impossible for me to reach this position without access to T4 and P100 GPUs.</p>
<p>I did not realize I had a chance at landing in the top 10 until exactly one week ago. It was very lucky that the context-aware segment classifier idea I chose to implement worked well enough. I spent the last week tuning hyper-parameters and model architectures; the public score improved from 0.79450 to 0.81247 just from that. There is still some low-hanging fruit left because of the time constraint, but I don't think it would make a big difference with regard to ranking.</p>
<p>My approach is fairly simple. First, train a video-level model on the frame-level dataset. Then fine-tune the model on the segments dataset to create segment classifiers. There are two types of the segment classifiers — context-aware and context-agnostic. The final predictions come from simply averaging the probabilities (I did not try other averaging methods) generated by these classifiers (with roughly 1:1 mix of the two types). </p>
<h2>Video-level model</h2>
<p>Only some subsets of the frame-level training and validation datasets were used to train the base video-level model. They add up to just under 1TB. I’ll provide a list of the tfrecord files used in the code repository.</p>
<p>Only videos tagged with the 1000 relevant labels were used in training. This was mainly to speed up training. I did not try training with the full set of labels. </p>
<p>A random sample of 150 frames are drawn from a video when its length is larger than that.</p>
<p>I had two video-level models that were good enough at the end of the competition:</p>
<ul>
<li>A NeXtVLAD model with <code>n_clusters = 64</code> and <code>n_groups = 16</code>. Trained 200k steps with batch size 48 and slanted triangular learning rates (<code>max_lr = 3e-4</code>). Training took almost 15 hours with a single P100 GPU.</li>
<li>A context-gated dbof model with <code>frame_dim = 4096</code> and <code>video_dim = 2048</code>. Trained 100k steps with batch size 128 and slanted triangular learning rates (<code>max_lr = 3e-4</code>). Training took 13 hours with a single T4 GPU (this is a rough estimate because the instance was preempted several times). </li>
</ul>
<h2>Segment classification model</h2>
<p>The segments dataset was split into 8 folds (by tfrecord files). </p>
<p>In addition to the 5 seconds/frames labeled by humans, the previous and following <em>n</em> frames are also included in the training example. So for a labeled segment starting at time t, the actual segment fed to the model starts at <em>t-n</em> and ends at <em>t+4+n</em>. I found <em>n=3</em> worked best in early experiments and stuck with it. This particular choice probably caused some overfitting to the leaderboard.</p>
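<p>A small sketch of that window expansion (clipping at the video boundaries is my assumption about how the edge cases were handled):</p>

```python
def expand_segment(frames, start, n_context=3, seg_len=5):
    """frames: sequence of per-frame features; start: labeled segment start time."""
    lo = max(0, start - n_context)
    hi = min(len(frames), start + seg_len + n_context)
    return frames[lo:hi]   # up to seg_len + 2 * n_context frames for the segment encoder

window = expand_segment(list(range(100)), start=40)   # frame indices 37..47 inclusive
```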
<p>The context-agnostic classifiers are just the video-level models fine-tuned on the segments datasets. </p>
<p>A context-aware classifier consists of three parts: a video encoder, a segment encoder, and a fully connected classifier. A video encoder is created by removing the final logistic (mixture-of-experts) layer of a video-level model. I tried different ways of creating segment encoders, and found that using the same method as video encoder seemed to create the fewest complications. The encoded vectors are concatenated and fed to the fully connected classifier (one linear layer plus ReLU activation on top of the mixture-of-experts classifier).</p>
<p>The first 6 segments and the last 2 segments in the video features are dropped before being fed to the video encoder to reduce noise (probably should have also done this when training video-level models). Like in the training of video-level models, 100 ~ 200 random samples are drawn when the video length is larger than that.</p>
<p>The weights of the video encoder are frozen when training (that is, they are not being updated). I tried fine-tuning the video encoder but did not find significant benefit from doing so. </p>
<p>The learning rate of the segment encoder is set to be half the learning rate of the fully connected classifier (a.k.a. discriminative learning rates).</p>
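<p>Putting the pieces above together, here is a rough PyTorch sketch of such a context-aware classifier with a frozen video encoder and discriminative learning rates (module names, dimensions and the plain linear head are placeholders, not the author's exact code):</p>

```python
import torch
import torch.nn as nn

class ContextAwareSegmentClassifier(nn.Module):
    def __init__(self, video_encoder, segment_encoder, enc_dim, n_classes=1000):
        super().__init__()
        self.video_encoder = video_encoder        # video-level model minus its MoE head
        self.segment_encoder = segment_encoder    # same architecture applied to the short segment
        self.head = nn.Sequential(                # stands in for linear + ReLU + mixture-of-experts
            nn.Linear(2 * enc_dim, enc_dim),
            nn.ReLU(inplace=True),
            nn.Linear(enc_dim, n_classes),
        )
        for p in self.video_encoder.parameters():
            p.requires_grad = False               # the video encoder is frozen

    def forward(self, video_frames, segment_frames):
        ctx = self.video_encoder(video_frames)       # (B, enc_dim) context vector
        seg = self.segment_encoder(segment_frames)   # (B, enc_dim) segment vector
        return self.head(torch.cat([ctx, seg], dim=1))

def make_optimizer(model, lr=1e-3):
    # the segment encoder gets half the learning rate of the fully connected classifier
    return torch.optim.Adam([
        {"params": model.segment_encoder.parameters(), "lr": lr * 0.5},
        {"params": model.head.parameters(), "lr": lr},
    ])
```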
<p>I also did some negative label mining to speed up the convergence.</p>
<h2>Inference</h2>
<p>Just like in the training of segment classifiers, the first 6 segments and the last 2 segments in a video are skipped, and the segment starts from <em>t-n</em> and ends at <em>t+4+n</em> for a segment starts at time <em>t</em> on paper. </p>
<p>The probabilities were discretized from 0. ~ 1.0 to 0 ~ 9999 and put into 10000 buckets. Regular pruning is performed to reduce memory usage. The most probable 100,000 segments for each class were put into the final submission.</p>
<h2>Implementation</h2>
<p>I use PyTorch in the whole pipeline. The data is directly loaded from tfrecord files. This means that the data loading might have been sub-optimal. I did find a very small amount of examples have video features of length 0. Those examples are discarded in both training and inference. I considered converting tfrecord files to other format to speed up the loading and avoid grouping segments from the same video in the same batch, but decided not to for better flexibility.</p>
<p>The NeXtVLAD, context-gated DBoF, and mixture-of-experts models are re-implemented in PyTorch with some small modifications.</p>
<p>Finally, I want to give some credits to the fast.ai library. Although I did not use the fast.ai library directly, many parts of my code were inspired or simply copied from fast.ai. It’s really phenomenal.</p> |
Recursion Cellular Image Classification | Quick and dumb 87'th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I think it's probably the most effortless and dumbest solution posted. I lazily started doing something 5 days before the deadline mostly because I got triggered by the word "Arcface" and wanted to try it.</p>
<p><strong>Data Loading</strong>: For each image in my training dataset I randomly chose between site 1 and site 2, created 6-channel image from .png's and normalized with np.mean() and np.std()
<strong>Model</strong>: Single 512x512 resnet50 from torchvision, adapted to work with 6 channels as in this <a href="https://www.kaggle.com/yhn112/resnet18-baseline-pytorch-ignite">kernel</a> (see the sketch below)
<strong>Training</strong>: 50 epochs, same starting parameters as in the kernel above. it took maybe 8-9 hours with single 1080Ti.
<strong>Augmentations</strong>: I used D4 group augmentations from <a href="https://github.com/albu/albumentations">albumentations</a> library (HorizotalFlip, VerticalFlip, Transpose, RandomRotate90)
<strong>Validation</strong>: stratified split by sirna's with 0.05 left for validation (very smart)
<strong>Prediction</strong>: d4_tta from <a href="https://github.com/BloodAxe/pytorch-toolbelt">pytorch toolbelt</a> for each site, then average the sites and apply the leak from <a href="https://www.kaggle.com/zaharch/keras-model-boosted-with-plates-leak/">this kernel</a> (~40 min test prediction time on a single 1080Ti).</p>
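<p>For reference, the 6-channel adaptation mentioned above is usually done by rebuilding the first convolution and tiling the pretrained RGB filters; a sketch (the weight-copying scheme and the 1108-way head are my own choices, not necessarily what the referenced kernel does):</p>

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)

old_conv = model.conv1                       # Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
new_conv = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    # tile the RGB filters over the 6 channels and halve them to keep activations comparable
    new_conv.weight.copy_(old_conv.weight.repeat(1, 2, 1, 1) / 2.0)
model.conv1 = new_conv

model.fc = nn.Linear(model.fc.in_features, 1108)   # 1108 sirna classes
```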
<p>And that's it. I obtained my LB solution on the first day (it was supposed to be just a baseline) and for the remaining 4 days tried to do something smart with ArcFace, but it simply didn't work (even after applying batchnorm at the beginning I could not get >10% training accuracy).</p>
<p>Without leak and TTA, my model scores 0.52. Now I wonder if I could achieve higher results with the same straightforward approach. </p> |
APTOS 2019 Blindness Detection | #16 Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all. I have been quite busy since the end of the competition so I couldn’t post my solution earlier, but I still want to share with you what I did.
I hope it is not too late!</p>
<p>My solution was an ensemble of different architectures (Resnet34, Resnet50, EfficientNet-b4 and EfficientNet-b5), different preprocessing methods (Circle+GaussianBlur and CircleToSquare+GaussianBlur) and different input sizes (224, 380 and 456). All they were blended using a weighted average (I manually tuned the weights, which worked better for me than selecting the weights using hyperopt). I used TTA in some models, which improved the score a bit (it was required setting scale=1.0 for TTA to work well using fastai).</p>
<p>A key part of my solution was the preprocessing. At first I tried subtracting both median blurred and gaussian blurred images, but I kept only the latter because I saw a much better correlation between CV and LB. The idea was to remove all information that could help the algorithm recognize the different kinds of images, in order to generalize better. However, subtracting the gaussian blurred image seemed not enough to remove all of the original picture information, since the color at the limit of the eye was still a bit different for the different image types:
<img src="https://i.postimg.cc/dVqfCn1s/Gaussian-Blur.png" alt="Circle + Gaussian Blur"></p>
<p>So the idea was to remove the black part before subtracting the gaussian blurred image. For that purpose I found the <a href="https://pypi.org/project/squircle/">squircle library</a>, which has 3 methods to do it. This is how the images look after preprocessing:
<img src="https://i.postimg.cc/HLKH3Fc4/CTS.png" alt="Circle to Square + Gaussian Blur"></p>
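<p>The Gaussian-blur subtraction itself (Ben Graham's trick from the 2015 competition) is just a weighted blend; a minimal sketch with an assumed sigma, leaving the circle-to-square step to the squircle library:</p>

```python
import cv2
import numpy as np

def subtract_gaussian_blur(img, sigma=10):
    """Subtract a heavily blurred copy to remove low-frequency color information."""
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
    return cv2.addWeighted(img, 4, blurred, -4, 128)

img = np.random.randint(0, 256, (380, 380, 3), dtype=np.uint8)   # placeholder for a loaded eye image
out = subtract_gaussian_blur(img)
```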
<p>My best single model was an EfficientNet-b5 based on this CircleToSquare+GaussianBlur preprocessing, input size 380 and the following augmentations parameters (fastai): get_transforms(max_rotate=0, flip_vert=True, max_warp=0.0, max_zoom=0, max_lighting=0.35). It scored 0.814 in the public LB and 0.927 in the private LB.</p>
<p>I also used pseudolabeling, which helped me a bit, but not so much because of the way I applied it (using a classification model with a threshold of 0.9). I guess if I had applied pseudolabeling using all the test data the way the top teams described it, my model would have performed better. At least I learned something new ;)</p>
<p>Well, congrats to all winners and thanks to all people who shared in discussions and who shared great kernels. Special thanks to <a href="/ratthachat">@ratthachat</a> for the many preprocessing ideas and to <a href="/hmendonca">@hmendonca</a> for sharing a kernel showing how to apply EfficientNet using fastai.</p>
<p>All the best!</p> |
Recursion Cellular Image Classification | 16th place solution [0.988 private LB] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p></p>
<p>First, I would like to thank Recursion Pharmaceuticals and Kaggle for organizing such an interesting competition.
Then, I deeply appreciate <a href="https://www.kaggle.com/zaharch">nosound</a> and <a href="https://www.kaggle.com/giuliasavorgnan">Giulia Savorgnan</a> for reporting the plate leak. </p>
<p>My solution is pretty simple.
The overview is shown in the figure above. (<strong>SORRY for my messy handwriting..</strong>)</p>
<h2>Setup</h2>
<ul>
<li>I used cloud instances with some V100s.</li>
<li>PyTorch</li>
</ul>
<h2>Data</h2>
<ul>
<li>6 channel, 512x512 input</li>
<li>Contrast limited adaptive histogram equalization(CLAHE) is applied to some models</li>
<li>Typical augmentation</li>
</ul>
<h2>Model</h2>
<ul>
<li>Basically cosFace with various backbones</li>
<li>RAdam optimizer and cyclic learning rate</li>
<li>Each site is treated separately</li>
<li>2 stage training from <a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/100414#latest-634062">this discussion</a></li>
<li>Cross validation in 2nd stage training</li>
</ul>
<h2>Prediction</h2>
<ul>
<li>No <code>model.eval()</code> (this is due to the differences between experiments)</li>
<li>Test time augmentation(TTA) is carried out 8 times for each image</li>
<li>Predictions from both sites are averaged</li>
<li>Soft voting</li>
</ul>
<h2>Post processing</h2>
<ul>
<li>Raw prediction is corrected in the same way <a href="https://www.kaggle.com/zaharch/keras-model-boosted-with-plates-leak">this kernel</a> does</li>
<li>The Hungarian algorithm is used to remove duplicated predictions within each plate (see the sketch below)</li>
</ul>
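<p>A sketch of that de-duplication step for a single plate using SciPy's assignment solver (shapes and the toy probabilities are illustrative; <code>probs</code> would be the leak-corrected class probabilities restricted to the 277 sirnas possible on that plate):</p>

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# probs: (n_wells, n_candidate_sirnas) corrected probabilities for one plate
probs = np.random.rand(277, 277)
probs /= probs.sum(axis=1, keepdims=True)

# maximizing total probability == minimizing total negative log-probability
row_ind, col_ind = linear_sum_assignment(-np.log(probs + 1e-9))
assigned = dict(zip(row_ind, col_ind))   # each well gets a distinct candidate sirna
```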
<h2>What didn’t work</h2>
<ul>
<li>Mixup augmentation</li>
<li>Control images (I tried a two-head model in which experiment image features are subtracted by control image features, but it performed worse)</li>
</ul>
<p>I really wanted to try pseudo labeling, which was likely to boost the score, but I didn’t manage to do that due to lack of time.</p>
<p><strong>Finally, congratulations to the winners!</strong></p>
<h1> </h1>
<p>P.S.
I work at <a href="https://aillis.jp">Aillis Inc.</a>, a Japanese medical device startup developing advanced diagnosis device using throat images. We are hiring! Please contact me if you are interested.</p> |
The 3rd YouTube-8M Video Understanding Challenge | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the top teams! Thanks Kaggle/Google team for hosting this competition! And thanks all my teammates <a href="/srinath1993">@srinath1993</a> , <a href="/ahana91">@ahana91</a> and <a href="/jesucristo">@jesucristo</a> for their effort on this competition! I will go through some details of our journey in this competition.
<strong>Standard frame-level models</strong>
We first tried out the models from the 1st and 2nd competitions' winning solutions, including different variations of netVLAD, DBoF and LSTM. With standard training on the regular training set, the best result was similar to the frame-level DBoF baseline model on the leaderboard (I think we joined the competition mid-July and it took us one month to achieve this. Before that we were stuck on a silly problem: we always used Excel to open the large predictions.csv file, but Excel would automatically modify the large cells and make the submission fail. That made it seem as if we had to keep segment_max_pred, the total number of segment outputs per entity, very low. Anyway, we figured it out after a long time).
<strong>Fine-tuning on validation set with segment labels</strong>
The next thing we tried that worked was fine-tuning on the labelled segments of the validation dataset. It generally gave us a ~0.04 improvement. Our training went in two phases: in phase 1 we trained the model on the 1.4 TB regular training set; in phase 2 we fine-tuned the model on the validation set with segment labels. There were usually around 60K training steps in phase 1, and we only did around 2000~3000 steps of fine-tuning.
We found that more fine-tuning steps led to worse results, probably because they make the model overfit the validation dataset, whose size is much smaller than the regular training dataset.
<strong>Attention/Multi-attention model</strong>
In the regular training set, we have frames across whole video. In the validation set and testing set, we have 5-frames segments. To bridge the gap between regular training set and validation set, we decided to formulate the problem as a multi-instance learning problem and use the attention weights to select important frames in the video. In our model, each frame will go through an attention network to obtain the attention weight of that frame. Then we pooled the frames based on the attention weights and used MoE/logistic model to obtain the final prediction. We also use gating mechanism in the attention network as shown in the paper(<a href="https://arxiv.org/pdf/1802.04712.pdf">https://arxiv.org/pdf/1802.04712.pdf</a>).
Since our problem is a multi-class classification problem, we also considered that different high-level topics may require different attention networks to learn the weights. We therefore trained multiple (8 or 16 in the final submission) attention networks to emphasize different important frames for different topics of videos. Finally, we pooled the outputs from each attention network in our model. Our best individual model scored 0.772 on the public LB.
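<p>A minimal PyTorch sketch of the gated attention pooling from that paper (dimensions are placeholders; the multi-attention variant simply runs several of these modules in parallel and pools their outputs):</p>

```python
import torch
import torch.nn as nn

class GatedAttentionPooling(nn.Module):
    """Gated attention MIL pooling: score each frame, softmax over time, weighted average."""
    def __init__(self, feat_dim, hidden_dim=128):
        super().__init__()
        self.V = nn.Linear(feat_dim, hidden_dim)
        self.U = nn.Linear(feat_dim, hidden_dim)
        self.w = nn.Linear(hidden_dim, 1)

    def forward(self, frames):                    # frames: (B, T, feat_dim)
        scores = self.w(torch.tanh(self.V(frames)) * torch.sigmoid(self.U(frames)))
        attn = torch.softmax(scores, dim=1)       # (B, T, 1) attention weights
        return (attn * frames).sum(dim=1)         # (B, feat_dim) pooled representation

pooled = GatedAttentionPooling(feat_dim=1152)(torch.randn(2, 120, 1152))
```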
<strong>Loss function and final ensemble</strong>
Since the segments in the final testing set only come from 1000 classes, we gave those 1000 classes more weight in the loss function. Our final ensemble consists of 3 multi-attention models, 3 attention models, 2 DBoF models, 2 CNN models, 1 netVLAD model and 1 LSTM model.
<strong>What was tried but didn’t work</strong>
We put a lot of effort into data augmentation and semi-supervised learning. In the beginning, we tried to use our best model to predict segments in the regular training dataset and chose the top segments as our new training set. We also tried to pseudo-label the testing dataset: we chose the top 200 segments from our best submission and fine-tuned the model on them. Neither method worked out. One possible reason is that we did not blend them with the standard validation dataset during fine-tuning. How to label these segments and how to use them requires more experiments.</p>
<p>The detailed technical write-up and Github repository will come soon. Thanks again all my teammates and I cannot wait to see the ideas and models from all top teams!</p> |
Generative Dog Images | BigGAN solution - public LB 15 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank organizers for such a wonderful competition. Generative modeling is a really interesting topic, and I hope to see more such competitions in the future. Also thanks to all participants for sharing their ideas and keeping the forum active :)</p>
<p>Here is a quick summary of my solution...</p>
<p><em><strong>Search for a good model</strong></em>
When I entered this competition, I had zero experience with GANs or other generative models. So I started with simple models like DCGAN or WGAN, but quickly switched to more complex solutions like Self-Attention GAN. The best score I could get out of SAGAN was 39-40 MiFID, and then I got stuck there. So I continued my search for more efficient models. I played with NVIDIA's StyleGAN for a few days, but didn't get any positive results using it. And then I found BigGAN. It took me a while to take full advantage of this new model given the number of settings it had.
Also I should mention the great pytorch repo, created by one of the authors of BigGAN, which made using this model a lot easier for me:
<a href="https://github.com/ajbrock/BigGAN-PyTorch">https://github.com/ajbrock/BigGAN-PyTorch</a></p>
<p><em><strong>Data processing</strong></em>
My goal with image transformations was at first to keep as much useful information as possible. Using simple center cropping often leaves "headless" dogs, which might be a bit less recognisable by the scoring network. So I tried to expand the useful image area as much as possible and capture the whole dog, or at least the more recognisable parts like the head with the upper body. I ended up not using center cropping at all and padding tall or wide images most of the time. Given how sensitive the FID metric is to missing modes, it is reasonable to keep most of the images, even the unhandy ones.
I focused mostly on geometric transformations. Pixel-level augmentation didn't bring any improvements for me; even the most gentle changes to brightness or contrast usually degraded the MiFID value.
Also the choice of interpolation method matters, and it was a hard one for me. The popular and recommended way of downsizing images in OpenCV is INTER_AREA, which gives the most visually appealing, smooth images. But the MiFID value of the resulting images is usually 1 point lower than for other methods (like 10.x vs 9.x), and the generated images I get with it are a bit worse. So I ended up using other resizing methods, which led to sharper but noisier images - which in turn brought me to using a denoising postprocessing algorithm on the generated images :) </p>
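<p>A sketch of the pad-then-resize idea (the reflection border, target size and default interpolation shown here are my assumptions):</p>

```python
import cv2
import numpy as np

def pad_to_square_and_resize(img, size=64, interpolation=cv2.INTER_NEAREST):
    h, w = img.shape[:2]
    diff = abs(h - w)
    top = bottom = left = right = 0
    if h < w:
        top, bottom = diff // 2, diff - diff // 2
    else:
        left, right = diff // 2, diff - diff // 2
    img = cv2.copyMakeBorder(img, top, bottom, left, right, borderType=cv2.BORDER_REFLECT)
    # INTER_AREA looks smoother, but sharper resizing methods worked better for the author here
    return cv2.resize(img, (size, size), interpolation=interpolation)

out = pad_to_square_and_resize(np.random.randint(0, 256, (300, 500, 3), dtype=np.uint8))
```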
<p><em><strong>Hyperparameters</strong></em>
I tried different numbers of channels for D and G and different placements of the attention layer. I found it a lot easier to use a generator with fewer parameters than the default settings. I also ended up using unbalanced networks, with D having twice as many channels in the conv layers as G (I used mostly 32-64, but tried 28-56, 36-64, 48-72, 48-96). I didn't find any benefit in using a more complex generator: the additional complexity always slowed training down and resulted in a worse score with no clear visual improvements. My guess is that the current task is too simple for a model designed to represent complex datasets like the ImageNet one. I guess an equal reduction in channels in D and G should also work fine, but I left D with additional channels because it didn't influence iteration speed much.
Despite the benefits of using higher batch sizes described in the BigGAN paper, I ended up using only 32 :) I believe that the total number of iterations matters too, and a limited time window presents us with the choice of whether we want to update our models more often or more precisely.
Also I use shared class-embeddings layers in G with hierarchical noise injection.</p>
<p><em><strong>Other things I tried:</strong></em>
1. BigGAN-deep. It never surpassed the perfomance of the original model for me.
2. truncation of z-vector. Didn't get much with this architecture, though for BigGAN-deep there were some improvements.
3. stochastic weight averaging (SWA). I usually got a MiFID value about 1-1.5 points higher using it. But the choice of starting point matters and kept shifting while I changed the data pipeline and other settings. So I decided to keep the EMA which came with the original code.
4. cyclic learning rates with cosine annealing and soft/hard warmups. With some settings it had a positive impact but tended to lack consistency.
5. different batch norm substitutes. Didn't spend much time on it, might be an area of potential improvements.
6. other activation functions. I remember gelu didn't perform very well, but it worked :)
7. relativistic losses. I tried them with several different architectures. For DCGAN it worked nicely, with SAGAN it didn't make much difference. I didn't manage to make RaHinge loss work with the BigGAN model. It might be a consequence of other settings I had chosen (like a stronger discriminator compared to the generator net).
8. other possible z distributions as described in the BigGAN paper (Appendix E). Bernoulli didn't work at all, censored Gaussian gave a bit worse results than the usual normal distribution.</p>
<p>You can find full code for my solution here:
<a href="https://www.kaggle.com/dvorobiev/doggies-biggan-sub-data-aug-3">https://www.kaggle.com/dvorobiev/doggies-biggan-sub-data-aug-3</a>
<a href="https://www.kaggle.com/dvorobiev/doggies-biggan-sub-final">https://www.kaggle.com/dvorobiev/doggies-biggan-sub-final</a>
<a href="https://github.com/dmitry-vorobiev/kaggle-generative-dog-images">https://github.com/dmitry-vorobiev/kaggle-generative-dog-images</a></p> |
Generative Dog Images | Project: These dogs don't exist :) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to kaggle for launching this comp <a href="/juliaelliott">@juliaelliott</a> <a href="/wendykan">@wendykan</a> and motivate me to do this...</p>
<p>Before this competition I knew about GANs because this websites:
- <a href="https://thispersondoesnotexist.com/">https://thispersondoesnotexist.com/</a> (faces generated using GANs)
- <a href="https://thiscatdoesnotexist.com/">https://thiscatdoesnotexist.com/</a> (cats generated using GANs)</p>
<p>so I decided to do the same but with <strong>dogs</strong>!</p>
<p>> <a href="https://mv-lab.github.io/thisdogdoesnotexist/">https://mv-lab.github.io/thisdogdoesnotexist/</a></p>
<p>I'm hosting it on GitHub until I get the domain. I would like to ask the winners for permission to upload the images you have generated and your code to that repository.
The idea is to show 100 images from those amazing models, and people will be able to know which model is generating them, and obviously the author.
I'll try to generate 256x256 images :)
<strong>Anyone can contribute</strong>, in fact I ask you to do so.
btw, the actual pictures are from <a href="/tikutiku">@tikutiku</a> (Tom, the winner) thanks!</p>
<p><a href="/yukia18">@yukia18</a> <a href="/lisali">@lisali</a> <a href="/bestfitting">@bestfitting</a> <a href="/dvorobiev">@dvorobiev</a> <a href="/theoviel">@theoviel</a> <a href="/markpeng">@markpeng</a> <a href="/katotetsuro">@katotetsuro</a> <a href="/johannl">@johannl</a> <a href="/seefun">@seefun</a> <a href="/jionie">@jionie</a> <a href="/leonshangguan">@leonshangguan</a> <a href="/phoenix9032">@phoenix9032</a> <a href="/francoisdubois">@francoisdubois</a> <a href="/manojprabhaakr">@manojprabhaakr</a> <a href="/super13579">@super13579</a> <a href="/alvaroma">@alvaroma</a> <a href="/cdeotte">@cdeotte</a> <a href="/shaygu">@shaygu</a> <a href="/hirune924">@hirune924</a> ....</p> |
Generative Dog Images | When BigGAN met StyleGAN (public 12.4 solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's an interesting competition, and this is the first time I write a solution without knowing private LB score. Great thanks to kaggle for the efforts to host this competition and quick response to the community by <a href="/juliaelliott">@juliaelliott</a>, <a href="/wendykan">@wendykan</a>. </p>
<p>My solution may be described as a mix of StyleGAN and BigGAN.</p>
<p>Respect to the authors of the papers and these two github repositories:
<a href="https://github.com/ajbrock/BigGAN-PyTorch">https://github.com/ajbrock/BigGAN-PyTorch</a>
<a href="https://github.com/rosinality/style-based-gan-pytorch">https://github.com/rosinality/style-based-gan-pytorch</a></p>
<p><strong>Challenges</strong>:
1. Limited resources and limited time to train a GAN from scratch.
2. A new kind of challenge with new rules and settings. It took me 2 weeks to choose between this one and the YouTube-8M video challenge.</p>
<p><strong>Solution</strong>:</p>
<p>I tried DCGAN and ACGAN at first to get familiar with this competition, and then went through related papers, selecting about 40 that might be useful.</p>
<p>I was impressed by the results of StyleGAN and tried to make it work efficiently enough to run in a kernel, but I found Progressive GAN and its followers inefficient to train. I still spent about 10 days tuning it and got an FID of about 35. I guessed that this score would not be enough for the top 10, but I thought I should spend more time on it to understand every detail before stepping forward.</p>
<p>Then I tried VQ-VAE2 and a lot of other mechanisms and I was not satisfied with the efficiency and the scores.</p>
<p>BigGAN was listed at the top of my most promising solutions, but the <strong>BIG</strong> and <strong>DEEP</strong> in the model name prevented me from trying it in a hurry. The paper stated that using more channels was important, and I worried that it might be inefficient in this competition, so I planned to try it after I had tested the others. To my surprise, the channel count was not so important: when I decreased G_ch and D_ch from 64 to 24, the FID score did not get much worse and could easily reach below 30.</p>
<p>What's more, the project structure of <a href="https://github.com/ajbrock/BigGAN-PyTorch">https://github.com/ajbrock/BigGAN-PyTorch</a> is very good: we can run a lot of experiments by just modifying some parameters, so I could start many experiments and go out for a bicycle ride or a swim :) After several experiments, my score reached about 24. </p>
<p>After I checked and compared the architectures and mechanisms of BigGAN and StyleGAN, I found that spectral normalization and class-conditional batch normalization were important to BigGAN, so we did not need progressive training, and BigGAN's G_shared and skip_z mechanisms were similar to StyleGAN's style mechanism to some extent. So I made a lot of modifications to BigGAN and tried to move almost all the important mechanisms of StyleGAN into it. After many experiments, I found: the style mechanism could improve the score significantly and converge more quickly; the Blur was also important; Noise was not so important, so to save time I did not add it to my networks. Since I now had more efficient and powerful tools in the BigGAN, I did not use skip_z. I found the mix-style mechanism of StyleGAN was not important in this competition, so I did not use it in my final submission. After these efforts and some hyper-parameter tuning, the score could reach about 17. </p>
<p>I thought I might use a larger image size to train the network, so I designed a 96x96 model instead of 64x64, and I found it could improve the score.</p>
<p>I tried to use attention, including BigGAN's attention and CBAM, in the network; it increased the training time and the improvement to the score was not so big, so I did not use them.</p>
<p>To my surprise, although the BigGAN paper stated that the larger the batch size, the better the result, I found 32 was the best batch size. So the two most important findings of BigGAN (large channel counts and bigger batch sizes) were not suitable for this dataset, so my solution should perhaps be called <strong>Style-Small-GAN</strong> ?? :smile::smile:</p>
<p>My best parameters can be described as the following command:</p>
<pre>python train.py --shuffle --batch_size 32 --num_G_accumulations 1 --num_D_accumulations 1 --num_epochs 200 --num_D_steps 1 --G_lr 1e-4 --D_lr 6e-4 --dataset DogOrigin96 --bottom_width 6 --G_ortho 0.0 --G_attn 0 --D_attn 0 --G_init ortho --D_init ortho --ema --use_ema --ema_start 2000 --test_every 25 --save_every 10 --num_best_copies 5 --num_save_copies 2 --G_ch 24 --D_ch 24 --seed 0 --augment 1 --add_blur --add_style --crop_mode 8 --on_kaggle --experiment_name i96_ch24_hinge_ema_dstep1_bs32_noatt_glr0001_glr0006_aug_init_ortho_blur_style_origin_crop_mode8
</pre>
<p>The score improved to <strong>12.4</strong> on public LB after these steps</p>
<p>In the last several days of the competition, I turned to the model's generalization and the data itself. I found that cropping a square box containing only the dogs performed better than adding resized original images. But since Kaggle will use another dataset and my models were trained on the all-dogs data, they might overfit the cropped all-dogs images to some extent; when my models are applied to other images, the dogs might not be cropped. So I added resized original images to the dataset with some probability. The public LB score of this augmentation is <strong>13.4</strong>, but I guess this model may perform better than the 12.4 one.</p>
<p>I also tried all kinds of mechanisms, such as mode seeking, all kinds of loss functions such as GDPP, normalizations, regularizations and initializations, and most of them did not work.</p>
<p>Most of the mechanisms I talked about above are kept in the kernels and can be tested by using the parameters. If you want to have a try, 100 epochs in 4.5h can reach FID 18.x, and 125 epochs can reach FID 15.x</p>
<p><strong>What can we do to improve the model if there are no limits on the resources and time?</strong>
1. I designed a mechanism to add the dog count of an image to the model, that is to say, a dog-count embedding. It works, but it took more time.
2. We could also cluster the images into 30-40 clusters and use the cluster ids as another embedding (the dog on the grass, on the floor, .....). (I did not test it.)
3. We could use larger images.</p>
<p>So an FID below 10 is not so difficult to achieve if we have two GPUs or double the time.
Ideas 1 and 2 can be described as a multi-class-conditional StyleGAN; with this mixture of StyleGAN and BigGAN, after some well-designed experiments we may make some progress over the state-of-the-art GANs?
You can check some of the images, one type of dog per line:
<a href="https://raw.githubusercontent.com/bestfitting/kaggle/master/gandogs/samples/samples_mode8.jpg">https://raw.githubusercontent.com/bestfitting/kaggle/master/gandogs/samples/samples_mode8.jpg</a></p>
<p>The link to my kernels:
<a href="https://www.kaggle.com/bestfitting/i96-1e-4-6e-4-bs32-195">https://www.kaggle.com/bestfitting/i96-1e-4-6e-4-bs32-195</a>
<a href="https://www.kaggle.com/bestfitting/i96-crop-mode3-dstep1-1e-4-6e-4-bs32">https://www.kaggle.com/bestfitting/i96-crop-mode3-dstep1-1e-4-6e-4-bs32</a></p>
<p><b>Updated</b>
The source codes in the kennels above are not easy to read, you can check the normal python project structure version here <a href="https://github.com/bestfitting/kaggle/tree/master/gandogs">https://github.com/bestfitting/kaggle/tree/master/gandogs</a></p>
<hr>
<p><em>Please note that the progresses and the scores of my models are not a formal ablation study in a paper and I don't plan to write a paper either, I modified the parameters and other setting during these progresses, they were results of a series of complex experiment setting.</em></p> |
SIIM-ACR Pneumothorax Segmentation | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'd like to thank Kaggle and the organizers for hosting this competition and congratulate all the competitors! Also, a special thanks to my partner <a href="/seesee">@seesee</a>. </p>
<h2>TL;DR</h2>
<p>Our solution is quite simple, it is an ensemble (simple average) of four different models (several folds each): <br>
- FPNetResNet50 (5 folds) <br>
- FPNetResNet101 (5 folds) <br>
- FPNetResNet101 (7 folds with different seeds) <br>
- PANetDilatedResNet34 (4 folds) <br>
- PANetResNet50 (4 folds) <br>
- EMANetResNet101 (2 folds) </p>
<p>Models trained at 768x768 (or close to that) using <a href="https://arxiv.org/abs/1711.05101">AdamW</a> optimizer. For the <a href="https://arxiv.org/abs/1612.03144">FPN</a> models Flip TTA was used whilst Scale (1024) TTA was used for the rest. We used two thresholds, one for segmentation and another (of higher value) for classification. </p>
<h2>See's Models</h2>
<p>See trained models based on the <a href="https://arxiv.org/abs/1612.03144">Feature Pyramid Network</a> decoder and ResNet as encoders. The ResNet models were based on the modified version proposed in the <a href="https://arxiv.org/abs/1812.01187">Bag-of-tricks paper</a>. See used <a href="https://arxiv.org/abs/1711.05101">AdamW</a> as the optimizer and a linear decay with warm-up for the learning rate schedule. For the loss he used a weighted combination of BCE and Dice. Horizontal flip TTA was also used. </p>
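<p>A common way to write such a weighted BCE + Dice objective in PyTorch (the weight and smoothing values are placeholders, not See's exact settings):</p>

```python
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, targets, bce_weight=0.7, smooth=1.0):
    """logits, targets: (B, 1, H, W); targets are binary masks."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    dice = (2.0 * intersection + smooth) / (union + smooth)
    return bce_weight * bce + (1.0 - bce_weight) * (1.0 - dice.mean())

loss = bce_dice_loss(torch.randn(2, 1, 64, 64),
                     torch.randint(0, 2, (2, 1, 64, 64)).float())
```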
<p>In total he trained 17 models:
- FPNetResNet50 (5 folds) <br>
- FPNetResNet101 (5 folds) <br>
- FPNetResNet101 (7 folds with different seeds) </p>
<p>See's code is available in his <a href="https://github.com/see--/pneumothorax-segmentation">GitHub repo</a>.</p>
<h2>Eduardo's Models</h2>
<p>I trained models based on two different types of decoders: <a href="http://arxiv.org/abs/1907.13426">EMANet</a> and <a href="http://arxiv.org/abs/1805.10180">PANet</a>. Both of my own re-implementation with small modifications:</p>
<ul>
<li><p>For the PANet I just reduced the number of channels before going to the FPA module. I didn't notice any loss of performance and the network was much lighter.</p></li>
<li><p>For EMANets I changed the steep bilinear upsampling of 8x for gradual 2x upsampling using the GAU module of PANet. It considerably improved the performance.</p></li>
</ul>
<p>For the encoders I also used the ResNet family with the modifications proposed by the <a href="https://arxiv.org/abs/1812.01187">Bag-of-tricks paper</a>. My models were trained using <a href="https://arxiv.org/abs/1711.05101">AdamW</a> with <a href="https://arxiv.org/abs/1803.05407">Cosine Annealing</a> learning rate schedule. For the loss a simple BCE worked out fine. In total I trained 10 models at 768x768 resolution:</p>
<ul>
<li>PANetDilatedResNet34 (4 folds) </li>
<li>PANetResNet50 (4 folds) </li>
<li>EMANetResNet101 (2 folds) </li>
</ul>
<p>For prediction Scale TTA (768, 1024) was used.</p>
<h2>Ensembling and tresholding</h2>
<p>For the ensembling, a simple average of the models' soft predictions (after sigmoid, before thresholding) was used. Finally, we used two values for thresholding: a smaller one (0.2) for segmentation and another (0.72) for the classification of non-pneumothorax images.</p>
<h2>Things that did not work</h2>
<ul>
<li>HRNet</li>
<li>FishNet as encoder (not bad but no improvements either)</li>
<li>A second-level classifier based on gradient boosting using segnets' outputs and metadata as features</li>
</ul>
<h2>Hardware</h2>
<ul>
<li>See had access to V100s</li>
<li>Eduardo used a 1080ti</li>
</ul>
<p>The code is available <a href="https://github.com/arc144/siim-pneumothorax">here</a></p> |
The 3rd YouTube-8M Video Understanding Challenge | 3rd place solution sharing: A Deep Mixture Model with Online Distillation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>At first, I would like to thank Google Research for providing another interesting video understanding challenge. This competition really provided fun for many of my weekends over the last 4 months. </p>
<p>Overall, my solution follows the widely-used system design: candidate generation and ranking. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F132211%2F38572e71d7518a50136e28e4fca386b5%2Fsystem.PNG?generation=1571208082982881&alt=media" alt=""></p>
<p>A quick offline analysis suggests that the top 20 topics (among 1000 topics) cover over 97% of the positive labels. The segment-level classifier is directly fine-tuned from the video-level classifier (with the same structure).</p>
<p>I found that larger models can generally perform better on the video dataset but will quickly overfit the smaller segment dataset. In this competition, I tried another approach to increase model capacity: training multiple models. Our final model is a 2-layer mixture model with online distillation. Each of the MixNeXtVLAD models is a mixture of 3 NeXtVLAD models. So in total, we trained 12 NeXtVLAD models in parallel using 4 Nvidia 1080 TI GPUs. The online distillation part can effectively prevent the whole model from overfitting the smaller dataset. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F132211%2F6d516e4eb35e009469a87a8176afca78%2Fmix_mix.PNG?generation=1571208477789758&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F132211%2F0c96b3aff80ffdaa1bbbb8a14a41fe5a%2Fmix.PNG?generation=1571209130114152&alt=media" alt=""></p>
<p>More details about the model will be included in the research paper and shared in this post once I finish the writing : )</p>
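<p>While the paper was not yet released at the time of writing, one loose reading of "mixture with online distillation" is that each sub-model is trained on the labels while also matching the mixture's averaged prediction, which is treated as a fixed online teacher. The sketch below is only my interpretation (equal-weight mixture, BCE-based distillation term), not the author's code:</p>

```python
import torch
import torch.nn.functional as F

def mixture_online_distillation_loss(sub_logits, labels, distill_weight=1.0):
    """sub_logits: list of (B, C) logits from the sub-models; labels: (B, C) multi-hot."""
    mixture_prob = torch.stack([torch.sigmoid(l) for l in sub_logits]).mean(dim=0)
    loss = F.binary_cross_entropy(mixture_prob.clamp(1e-6, 1 - 1e-6), labels)
    teacher = mixture_prob.detach()                     # the mixture acts as an online teacher
    for logits in sub_logits:
        loss = loss + F.binary_cross_entropy_with_logits(logits, labels)
        loss = loss + distill_weight * F.binary_cross_entropy_with_logits(logits, teacher)
    return loss

sub_logits = [torch.randn(4, 1000) for _ in range(3)]
labels = torch.randint(0, 2, (4, 1000)).float()
loss = mixture_online_distillation_loss(sub_logits, labels)
```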
<p>If you are interested in the performance of the models I tried, the results are as follows:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F132211%2Fa082666359eb8ed9f0314a71f0d87a21%2Fresults.PNG?generation=1571705893788412&alt=media" alt=""></p>
<p>I used all the available data for training, including the validation set, because the performance on the local validation dataset is highly aligned with the public LB. </p>
APTOS 2019 Blindness Detection | Kernels for 0.926 on private test set (37-50 place on LB) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all,</p>
<p>Thanks to APTOS and Kaggle for hosting the competition.</p>
<p>I learned a lot in this competition and had a lot of fun searching the solution space with my amazing team mate <a href="/nemethpeti">@nemethpeti</a>! Congratulations to Peter for making it to the top 3% in his first Kaggle competition!</p>
<p>I thought I'd make public the solution that would have landed us somewhere between 37th and 50th had we selected it 😢 Since we never felt confident in our validation strategy, we ended up selecting the submission that had the best public LB score and another with the most diverse ensemble of models. Neither of these turned out to be the best on the private LB.</p>
<h2>Final ensemble</h2>
<ul>
<li><a href="https://www.kaggle.com/lextoumbourou/bd-efficientnetb3-2015-val-sz-256">EfficientNet B3 - ord regression - 256px - radius reduction preproc - flip TTA</a></li>
<li><a href="https://www.kaggle.com/lextoumbourou/bd-efficientnetb3-2015-val-sz-300">EfficinetNet B3 - ord regression - 300px - radius reduction preproc - flip TTA</a></li>
<li><a href="https://www.kaggle.com/lextoumbourou/bd-densenet201-2015val-psu3-2019-val">DenseNet101 - ord regression - 320px with Ben’s preprocessing - no TTA</a></li>
</ul>
<p>We blended the models with a very simple weighted average using weights: 0.4, 0.4 and 0.2, respectively.</p>
<p><a href="https://www.kaggle.com/lextoumbourou/2-x-b3-densenet201-blend-0-926-on-private-lb">Blend Kernel</a></p>
<h2>Training notes</h2>
<ul>
<li><p><strong>Batch size</strong>: 64
Any less than this and I found training to be very unstable.</p></li>
<li><p><strong>Data</strong>: concat 2019 + 2015 training sets. Thanks to <a href="/tanlikesmath">@tanlikesmath</a> for providing a <a href="https://www.kaggle.com/tanlikesmath/diabetic-retinopathy-resized">resized dataset</a>.
I downsampled class 0 to be the same size as class 2. To ensure that the whole dataset was used I resampled class 0 each epoch.</p></li>
<li><p><strong>Validation</strong>: 10000 examples from the 2015 test set with class 0 downsampled to match class 2. Thanks to <a href="/benjaminwarner">@benjaminwarner</a> for providing a <a href="https://www.kaggle.com/benjaminwarner/resized-2015-2019-blindness-detection-images">resized dataset</a> of the 2015 test data.</p></li>
<li><p><strong>Preprocessing</strong>: Preprocessing copied from <a href="https://www.kaggle.com/joorarkesteijn/fast-cropping-preprocessing-and-augmentation">joorarkesteijn's kernel</a> which used ideas from <a href="https://www.kaggle.com/ratthachat/aptos-updated-preprocessing-ben-s-cropping">Neuron Engineer's kernel</a>. We used the gaussian blur subtraction method from Ben preprocessing for the DenseNet model only. The images were normalised using appropriate values for the model's pretrained weights.</p></li>
<li><p><strong>Augmentations</strong>: flip_lr, brightness, contrast, rotate(360)</p></li>
<li><p><strong>Transfer learning</strong>: all models were initialised with ImageNet weights.</p></li>
<li><p><strong>Model head</strong>: <a href="https://www.kaggle.com/lextoumbourou/blindness-detection-resnet34-ordinal-targets">multiclass (ordinal regression) outputs</a>. For DenseNet, the penultimate FC layer was altered to have an output size of 2046.</p></li>
<li><p><strong>Loss</strong>: BCEWithLogitsLoss with modified label smoothing.
We converted the ordinal regression targets from <code>[1, 1, 0, 0, 0]</code> labels into <code>[0.95, 0.95, 0.05, 0.05, 0.05]</code> (see the short sketch after this list).</p></li>
<li><p><strong>Optimiser</strong>: Adam (fast.ai default)</p></li>
<li><p><strong>Pseudo-labelling</strong>: add all test labels from our best submissions. This appeared to help a lot with our results on the public set, though it's debatable how useful it was on the private set.</p></li>
<li><p><strong>Train</strong>: train just the head for one epoch, then unfreeze all layers and train 15 epochs using cosine annealing.
Best LR found using the technique from <a href="https://arxiv.org/pdf/1803.09820">one cycle</a>. I also saved the best validation loss checkpoints.</p></li>
<li><p><strong>Hardware</strong>: all training was exclusively done on Kaggle kernels.</p></li>
<li><p><strong>Software</strong>: PyTorch and Fast.ai.</p></li>
</ul>
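<p>A short sketch of the ordinal-target encoding with that label smoothing (the exact grade-to-vector mapping and the decoding rule are my assumptions):</p>

```python
import numpy as np

def ordinal_targets(grade, n_classes=5, smooth=0.05):
    """e.g. grade 2 -> [1, 1, 0, 0, 0] -> [0.95, 0.95, 0.05, 0.05, 0.05]."""
    t = (np.arange(n_classes) < grade).astype(float)
    return t * (1 - 2 * smooth) + smooth

def decode(pred_probs, threshold=0.5):
    """Count how many ordinal outputs exceed the threshold to recover the grade."""
    return int((np.asarray(pred_probs) > threshold).sum())

print(ordinal_targets(2))                    # [0.95 0.95 0.05 0.05 0.05]
print(decode([0.9, 0.8, 0.4, 0.1, 0.0]))     # 2
```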
<h2>Secret sauce</h2>
<p>Downsampling helped a lot in the early days of the comp. I found using the technique of resampling class 0 each epoch helped to reduce overfitting.</p>
<p>Label smoothing helped a lot with overfitting and having a stable LB score.</p>
<p>Ensembling models with slightly different preprocessing had a big impact on our LB score in the early days. However, as our individual models improved, the improvement from this technique decreased.</p>
<p>Pseudo labelling gave us some big improvements to our single model scores (+1-2%) on the public set which helped with motivation. However, it doesn't appear to have had a significant effect on the private results for those models.</p>
<h2>Other notes</h2>
<p>I used ordinal regression as per <a href="https://www.kaggle.com/lextoumbourou/blindness-detection-resnet34-ordinal-targets">my kernel</a>, which made it a bit of a battle to find a good way to ensemble our models, as Peter had used regression and classification for his best models. We tried linear stacking, converting my outputs to floats with various techniques, and blending our outputs. In the end, our best kernel wasn't our combined ensemble, but our individual models were improved significantly by pooling our knowledge.</p>
The 3rd YouTube-8M Video Understanding Challenge | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners and thanks google research for providing such an interesting video understanding challenge.
I would like to share my solution here. Scores shown in the following tables are all private scores, including the private MAPs, which are about 0.01 lower than scores on the public leaderboard.</p>
<h1>Solution Overview</h1>
<p>
We first pre-train base models using videos from the 2018 data; those models are then fine-tuned on all the segments (2019 data) using a segment-level loss. Finally, a refinement inference strategy takes in both the video-level predictions and the segment-level predictions to obtain the refined segment-level predictions.</p>
<h2>Video-level Pre-train</h2>
<p>We combine the mixture structure (3rd place last year) with different video classification models.
<strong>Frame-level Model</strong>: Mix-[NeXtVLAD, nonlocal-LightNetVLAD, nonlocal-EarlyNetVLAD, GatedDBOF, SoftDBOF, NetFV, GRU].</p>
<p><strong>Video-level Model</strong>: Mix[ResNetLike]</p>
<p>We train models using the 2018 large-scale YouTube-8M video-level annotation data. The trained models were evaluated on last year's video label prediction task. As shown in the table below, most frame-level models with the mixture structure achieve high scores on the leaderboard, and a simple equal-weight ensemble of these models reaches 0.88932 private GAP, which is a high score (although we do not consider the model size limitation here). These single models are regarded as "Base Models" and were evaluated on this year's temporal localization task. Their MAP scores are shown in the second column. Mixture-SoftDBOF achieves the best MAP, followed by Mixture-GatedDBOF and the Mixture-NeXtVLAD model.</p>
<p><em>(Table: private GAP of each base model on the video-level task and its MAP on the temporal localization task.)</em></p>
<h2>Segment-level Fine-tune</h2>
<p><strong>Fine-tune Loss</strong>: For a given segment, let A be the set of 1000 segment-level categories, B the set of annotated segment categories, and C the set of annotated video-level categories of the video in which the segment is located. The cross-entropy function is denoted CE. We add some weak supervision information into our final loss function, shown below. <em>α</em> is simply set to 1.0 in our experiments.
\( Loss = \Sigma_{i\in B} CE(p(i), L(i)) + \alpha * \frac{\Sigma_{i\in (A \setminus C)} CE(p(i), 0) }{|A \setminus C| } \)</p>
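<p>A minimal PyTorch sketch of this loss is shown below. It assumes per-class sigmoid outputs and mask tensors marking the sets B and C; the tensor and function names are ours for illustration, not the exact training code:</p>
<pre><code>import torch
import torch.nn.functional as F

def segment_finetune_loss(logits, seg_targets, seg_mask, video_label_mask, alpha=1.0):
    # logits:           (batch, 1000) segment-level logits
    # seg_targets:      (batch, 1000) float 0/1 targets for the annotated segment classes (set B)
    # seg_mask:         (batch, 1000) 1 where the class is annotated for this segment
    # video_label_mask: (batch, 1000) 1 where the class is in the video-level labels (set C)
    ce = F.binary_cross_entropy_with_logits(logits, seg_targets, reduction='none')
    term_b = (ce * seg_mask).sum()

    # weak supervision: push classes outside the video-level label set (A \ C) towards 0
    zero_ce = F.binary_cross_entropy_with_logits(logits, torch.zeros_like(logits), reduction='none')
    neg_mask = 1.0 - video_label_mask
    term_ac = (zero_ce * neg_mask).sum() / neg_mask.sum().clamp(min=1.0)

    return term_b + alpha * term_ac
</code></pre>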
<p><strong>All Data</strong>: In order to utilize all the annotation data, we run two groups of experiments for each model. We first use 5/6 of the segment data as the training set and the remaining 1/6 as the validation set. From the validation results, a good range of training steps can be estimated for training on the full data. Finally, we use the Stochastic Weight Averaging (SWA) technique to combine those models into a single one. On the one hand, estimating the training-step interval improves the tolerance of model selection; on the other hand, SWA improves the robustness of the models and yields higher scores.</p>
<h2>Refinement Inference Strategy</h2>
<h3>Basic Inference Strategy</h3>
<p>The basic version of the inference method creates 1000 minimum heaps for segment-level predictions. The predictions for each segment are pushed into the heap of respective categories. Once a heap size overflows the maximum threshold, the segment with the least predicted probability in the heap will be popped. Finally, the segment classification predictions can be converted into the final temporal localization results by sorting confidences in the heap.</p>
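<p>A small sketch of this heap-based collection (the per-class limit here is an assumed value):</p>
<pre><code>import heapq
from collections import defaultdict

MAX_PER_CLASS = 100000          # heap size threshold (assumed value)
heaps = defaultdict(list)       # class_id -> min-heap of (score, segment_id)

def push_prediction(class_id, score, segment_id):
    h = heaps[class_id]
    heapq.heappush(h, (score, segment_id))
    if len(h) > MAX_PER_CLASS:
        heapq.heappop(h)        # drop the lowest-confidence segment

def final_ranking(class_id):
    # highest-confidence segments first
    return [seg for score, seg in sorted(heaps[class_id], reverse=True)]
</code></pre>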
<h3>Refinement Inference Strategy</h3>
<p>The basic inference strategy ignores the powerful guidance of global video information. In this part, we will utilize video-level predictions to improve the segment-level predictions. </p>
<p>If an entity has very low confidence of appearing in a video, then it is also unlikely to appear in any of the segments of that video. Based on this consistency observation, we build a list of candidate labels for segment classification. The list of candidate labels is obtained from the video labels predicted by the pretrained models, and is quite effective in removing false positive predictions on video segments. </p>
<p><strong>Top rank k</strong></p>
<p>Our first thought is to select the top k predicted classes for each video. These selected classes form the candidate categories and are used to constrain the probable category scope for each segment.</p>
<p><strong>Confidence threshold to constrain</strong></p>
<p>The top-k strategy is a good way to generate filters, but it ignores the diversity of categories between videos. A content-rich video contains a large number of categories, while for a single-content video the number of categories that appear will be small. </p>
<p>So the main idea of our second strategy is to consider the confidence: if the prediction score is smaller than a threshold, the related category won't be considered when predicting the segments inside that video.</p>
<p><strong>Video number constraint for each category</strong></p>
<p>The third idea is to simply limit the number of predictable videos for each category. We predict each video using the video-level model, and then limit the number of predicted videos per category by confidence. Only segments in these videos will be considered for the corresponding categories.</p>
<h2>Ensemble</h2>
<p>Finally, ensembling the models brings a large improvement. We choose Mix-NeXtVLAD, Mix-GatedDBOF, Mix-SoftDBOF, Mix-EarlyNetVLAD, and three kinds of Mix-ResNetLike models for the final ensemble. The ratios we set are 3:3:3:1:1:1:1. Using more models or other ensemble ratios may bring some further improvement to the final MAP scores.</p>
<h2>Other things we tried but did not work so well</h2>
<ol>
<li>auto-annotation to achieve data augmentation</li>
<li>reranking by combining video-level and segment-level predicted confidence</li>
<li>fix params: fix some parameters to prevent over-fitting</li>
</ol> |
Recursion Cellular Image Classification | 14th place solution, [0.989] Colab and Kaggle kernels | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I want to thank Recursion Pharmaceuticals and Kaggle for organizing this competition. I also want to thank everyone who participated and especially <a href="https://www.kaggle.com/zaharch">nosound</a> for explaining and providing sample code for leak exploitation.</p>
<p>My solution was developed using only freely available computational resources. I used Colab at the start of the competition, then briefly switched to Kaggle kernels up to the point when Kaggle enforced GPU time limitations and then Colab again. My final submission was a single densenet201-based model.</p>
<p><strong>Tools</strong>
- PyTorch with Apex and a bit of Ignite.</p>
<p><strong>Data</strong>
- 6x512x512. At first I tried smaller resolutions, but understood fairly quickly that full-size images are necessary for maximum performance.
- For augmentation I've used rotations, horizontal flip, cutout and random brightness/contrast. I've also used 4 images rotated by 90 degrees for TTA.</p>
<p><strong>Model</strong>
- densenet201 with ArcFace loss.
- separate 1108 classes for every cell type.</p>
<p><strong>Training process</strong>
- AdamW and cyclical learning rate with restarts, triangular schedule.
- validating on 10% of train dataset to find a point where model begins to overfit. Training on the whole dataset up to the cycle that showed improvement on validation.
- using the last checkpoint in a cycle for prediction.
- averaging distances for two sites.</p>
<p><strong>Post-processing</strong>
- using leak. Getting maximum prediction combination using lapjv for every plate.</p>
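<p>A rough sketch of this per-plate assignment (using scipy's <code>linear_sum_assignment</code>, which solves the same problem as lapjv; the function and argument names here are illustrative):</p>
<pre><code>import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_plate(scores, candidate_sirnas):
    # scores: (num_wells, 1108) predictions for one plate
    # candidate_sirnas: the 277 siRNA ids allowed on this plate (the leak)
    sub = scores[:, candidate_sirnas]           # restrict to the plate's siRNA group
    rows, cols = linear_sum_assignment(-sub)    # maximize total score
    return {int(r): int(candidate_sirnas[c]) for r, c in zip(rows, cols)}
</code></pre>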
<p>Pseudo-labeling and blending different models' predictions would have likely boosted my score, but unfortunately there was not enough time and computational resources.</p>
<p><strong>Congratulations to the winners!</strong></p> |
Generative Dog Images | First competition 43.6 public score | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Generative Dog Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi </p>
<p>Here is our kernel: <a href="https://www.kaggle.com/shaygu/t3h-d0g-mak3rz-public-43-fid">https://www.kaggle.com/shaygu/t3h-d0g-mak3rz-public-43-fid</a> we specifically attached the version that calculates FID every few epochs, we think it brings more insight into the network and also that's the method we used for optimization .</p>
<p>Our knowledge was very limited walking into this competition, and we joined a little more than a month ago, so we decided to start with the best public model and see where it would get us. Our steps along the way are further discussed in the kernel; however, I will briefly touch on the points here as well.</p>
<p>Using a lot more parameters for the network improved our score from 60 to 55. Calculating the FID every few epochs allowed us to observe which network performs better (more consistently), which led to the ability to introduce a cyclic learning rate efficiently; together with small hyperparameter optimizations this got us to ~50. The last big step was changing the weight initialization to a broader distribution (mean 0, std 0.15-1 instead of the commonly used 0, 0.02), which got us to ~46, and from there to 43 it was minor optimizations. </p>
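<p>For illustration, a DCGAN-style init with a broader std might look like the sketch below (not the exact kernel code; see the kernel linked above for the real settings):</p>
<pre><code>import torch.nn as nn

def weights_init(m, std=0.15):
    # broader normal init than the usual DCGAN std of 0.02
    name = m.__class__.__name__
    if 'Conv' in name:
        nn.init.normal_(m.weight.data, 0.0, std)
    elif 'BatchNorm' in name:
        nn.init.normal_(m.weight.data, 1.0, std)
        nn.init.constant_(m.bias.data, 0)

# netG.apply(lambda m: weights_init(m, std=0.15))
# netD.apply(lambda m: weights_init(m, std=0.15))
</code></pre>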
<p>It was a very interesting competition and we've learned a lot. Hopefully, there will be another GAN competition in the near future.</p>
<p><a href="/eviatarai">@eviatarai</a> <a href="/yogevh">@yogevh</a> </p> |
APTOS 2019 Blindness Detection | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for hosting this competition, thanks to my teammates and specifically to <a href="/dott1718">@dott1718</a> (GM yay! - not to jinx it). I am really, really happy about this gold medal, as it was basically the first time we seriously tackled a computer vision problem, which means we also learned quite a few things throughout this competition. It also closes the circle of our journey over this year, where we have done really well in all different kinds of competitions including NLP, tabular data, time series/temporal data, and now image data. We would never have expected to do this well when we started to play around on Kaggle less than a year ago, and I am really proud of and humbled by our progress. If nothing strange happens, this also means that @dott has reached his GM title. Next, I want to lay out our solution. I like to write a lot, but most likely our solution is very similar to other solutions with shorter explanations 😊.</p>
<p><strong>Preprocessing</strong></p>
<p>Overall, I think there are a few puzzle pieces one needs to put together to produce a good solution. One is to have robust preprocessing that allows the models to generalize well and does not make them overfit on certain image characteristics, like the size of the image or other facets such as the markers depicting the recording device. This is not that important if you just look at the 2019 training data, but it matters when you look at the test data, where for example a significant number of images have been cropped to 640x480. To that end, we have the following rough pre-processing routine, which in the end is quite similar to the awesome steps of Ben:</p>
<ol>
<li>Crop the images to the retina by removing black area</li>
<li>Scale the image by the radius of the retina keeping the aspect ratio intact. We slightly modified this process for the 640x480 images in test data as we also have each side of the retina cut there, so the actual radius is a bit larger.</li>
<li>Pad the image to 640x640 meaning we still have the original aspect ratio in place. </li>
<li>Remove blur</li>
<li>Circle crop, in training we use 90% crop, in inference 95% crop</li>
<li>Resize to 320x320</li>
</ol>
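<p>A rough OpenCV sketch of the routine above is given below. It is only an approximation of our pipeline: the black-border threshold, the target radius, and the interpretation of "remove blur" as the Ben-Graham-style blur subtraction are assumptions made for illustration.</p>
<pre><code>import cv2
import numpy as np

def preprocess(path, out_size=320, crop_frac=0.9):
    img = cv2.imread(path)

    # 1. crop away the black border around the retina
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > 7)
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # 2. scale by the retina radius, keeping the aspect ratio
    radius = max(img.shape[:2]) / 2.0
    scale = 288.0 / radius                      # target radius chosen so the image fits in 640x640
    img = cv2.resize(img, None, fx=scale, fy=scale)

    # 3. pad to a 640x640 canvas
    canvas = np.zeros((640, 640, 3), np.uint8)
    y0 = (640 - img.shape[0]) // 2
    x0 = (640 - img.shape[1]) // 2
    canvas[y0:y0 + img.shape[0], x0:x0 + img.shape[1]] = img

    # 4. "remove blur": subtract a Gaussian-blurred copy (Ben Graham style)
    blur = cv2.GaussianBlur(canvas, (0, 0), sigmaX=10)
    canvas = cv2.addWeighted(canvas, 4, blur, -4, 128)

    # 5. circle crop (90% in training, 95% at inference)
    circle = np.zeros((640, 640), np.uint8)
    cv2.circle(circle, (320, 320), int(320 * crop_frac), 1, -1)
    canvas = canvas * circle[:, :, None]

    # 6. resize to the network input size
    return cv2.resize(canvas, (out_size, out_size))
</code></pre>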
<p><strong>Models</strong></p>
<p>I think the type of model is not that important in this competition. Different models might need slightly different learning rates, epochs, etc., but you should be able to get to similar results with a wide variety of models. In the end, we used two models: Efficientnet-B7 and SE-Resnext 50. We switched to SE-Resnext 50 only later on to allow faster experimentation, with quite similar results to the Efnet B7. We did a lot of different variations of those models and will elaborate on that later. </p>
<p><strong>Pretraining</strong></p>
<p>We did pretraining on the full 2015 data, which was very important to get better results on the test data. Unfortunately, we did not really figure out what makes a good prefit, so usually we tried to pick the epochs that have the best val score on 2019. Sometimes you can get “lucky” prefits which score quite a bit better on LB compared to others. We fitted for around 15-30 or so epochs depending on the model; we either did some slight LR scheduling or just kept it constant, and used Radam or some other Adam variation there.</p>
<p><strong>Finetuning</strong></p>
<p>We did 4-fold CV on 2019 data, usually fitting around 3 bags to embrace the randomness a bit. Here, we used the Adam optimizer and did a one-cycle policy for overall 4 fixed epochs with learning rate ranging from 0.00001 to 0.0005. We did not do any epoch selection here, and kept the 4 epochs fixed. I am a big fan of fixed epochs as it allows you to better judge CV score, and early stopping can lead to some form of overfitting. But, you might need more models here to balance out “bad” fits you might get.</p>
<p><strong>Inference</strong></p>
<p>Our final best sub has TTA4 (as-is, horizontal flip, vertical flip, horizontal+vertical flip). We found that the results are quite random. This is mostly due to the fact that a lot of labels are inconsistent. This became really obvious to us when we tried to fix some test labels using the duplicate-image labels from train. Just doing this for around 30 images dropped our score by nearly 0.01 on LB, which is quite insane. So what makes a good model, apparently, is one that has a diverse set of predictions and then finds a good “average” representation of those. You can also get quite lucky if you by chance take the one or the other label for an image. We did not want to rely on luck, so we decided to try to make a blend that is as robust as possible.</p>
<p><strong>Blending</strong></p>
<p>Our final blend is a combination of 12 different types of models, where each one has been fitted for 4 folds and for each one we do TTA4. So overall, our final solution is an average blend of 192 inference procedures. We also slightly changed the final thresholds based on CV simulations to [0.5, 1.5, 2.43, 3.32] expecting private distribution to be similar to train data. This solution has public LB 0.833 and private LB 0.931.
We tried to make our final models as diverse as possible, meaning having different prefits and different types of finetunes. All models have similar augmentations, doing rotation, horizontal/vertical flip and one of CLAHE or brightness-contrast change. I don’t think image augmentations are that helpful apart from flipping. I will not go into too much detail here but basically we have a mix of models doing the following things:
- normal models not doing anything specific on top of the abovementioned
- at least half of the models have local normalization, meaning that for each image we subtract mean and divide by std of the image itself, then we do not do global normalization
- most models use MSE, and a few have a dual loss with MSE and BCE
- mostly we just use average pooling, some models have a concat of max/avg pooling and one model has LPP pooling
- one or two models have some random circle cropping (80-99% cropping)</p>
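<p>Applying the adjusted thresholds to the averaged regression outputs is then a simple binning step, e.g.:</p>
<pre><code>import numpy as np

def to_grades(preds, thresholds=(0.5, 1.5, 2.43, 3.32)):
    # averaged regression outputs -> integer grades 0-4
    return np.digitize(preds, thresholds)

print(to_grades(np.array([0.2, 1.7, 2.5, 3.9])))   # [0 2 3 4]
</code></pre>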
<p><strong>Further things</strong></p>
<p>We played a lot with finding a good way of blending stuff. In the end simple averaging is what apparently worked best. Our second sub uses extremely randomized trees with different seeds on top of all the predictions. This allowed to introduce a bit more randomness, and worked well on both CV as well as public LB, but a tiny bit worse on private LB. </p>
<p>We did not use any pseudo tagging, and apparently that is what was missing for us to get to the top of the leaderboard. Of course we also tried it. What we did on the one hand is to test it on CV by pseudo-labeling the out-of-fold data. This either did not improve much or only a tiny bit. We also tried it by pseudo-labeling the full test data and refitting the model (public+private). This did not change public LB, but we could not see the CV score, so we decided to drop it. We did not try to pseudo-label only public LB, and apparently, if I am reading the solutions correctly, this is what most did. I need to test a bit why this worked, but maybe it helped diversity as public LB is a bit different.</p>
<p>There are many, many more things we tried. A lot with group normalization. A lot with different image sizes, and many other stuff.</p> |
SIIM-ACR Pneumothorax Segmentation | My first bronze medal(128th place solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am absolutely over the moon now. Today I got my first ever bronze medal in my first ever Kaggle competition. First up, I would like to congratulate my team mates <a href="/dskswu">@dskswu</a> <a href="/muhakabartay">@muhakabartay</a> <a href="/adityaguru149">@adityaguru149</a> and <a href="/mks2192">@mks2192</a> for this little achievement, especially William Green, because he helped me a lot. Congratulations all!
As the leader of this team I am extremely pleased. From June 24, 2019 (the first day of this competition) until the 2nd/3rd of September (just before the stage 2 final submission deadline), I was a total failure in this competition - we were nowhere near the medal zone - but one thing I can assure you: "we worked really hard".</p>
<p>Just before the final submission deadline my model fortunately reached good accuracy and found itself in the medal zone.</p>
<p>Lesson: "you need to see what happens if you do not give up"</p>
<p>Congratulations everyone, thank you for your support and please keep us in your prayers!</p>
<p>thanks</p>
<p>here is my solution : <a href="https://www.kaggle.com/mobassir/unet-with-se-resnext50-32x4d-encoder-for-stage-2">https://www.kaggle.com/mobassir/unet-with-se-resnext50-32x4d-encoder-for-stage-2</a></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2034058%2F11fd204a6cd4a88fab17734baa0cf366%2Fmedal.PNG?generation=1567751081359915&alt=media" alt=""></p> |
APTOS 2019 Blindness Detection | Solution based on retinal texture | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello,</p>
<p>I would like to share with you my solution based on Local Binary Patterns. I didn't see any mention of this kind of pre-processing before.</p>
<p>My score was much lower than expected because LBPs looked very sensitive to the differences in the test data, but I think there is still room for improvement.</p>
<p><a href="https://www.kaggle.com/jmartindelasierra/dr-diagnosis-from-retinal-texture-descriptors">https://www.kaggle.com/jmartindelasierra/dr-diagnosis-from-retinal-texture-descriptors</a></p>
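<p>For readers unfamiliar with LBPs, extracting an LBP histogram with scikit-image looks roughly like this (the parameters are illustrative; the kernel linked above contains the actual settings):</p>
<pre><code>import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, P=8, R=1):
    # uniform LBP codes of a grayscale retina image, summarized as a normalized histogram
    lbp = local_binary_pattern(gray_image, P, R, method='uniform')
    n_bins = P + 2                      # 'uniform' yields P + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist
</code></pre>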
<p>Thank you
Jesús</p> |
Recursion Cellular Image Classification | 41st place pytorch solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>1. Data</strong>
- 6 channels</p>
<p><strong>2. Augmentation</strong>
- RandomScale, Rotate, HorizontalFlip, VerticalFlip, Resize, RandomBrightnessContrast, RandomGamma, Normalize</p>
<p><strong>3. Model design</strong></p>
<ul>
<li>Backbone: DenseNet201 pretrained on ImageNet</li>
<li>Head: 2 linear layers with batch normalization</li>
</ul>
<p><strong>4. Loss</strong>
- Binary Cross Entropy Loss</p>
<p><strong>5. Training</strong></p>
<ul>
<li>Optimizer: Adam</li>
<li>Different learning rates for different layers</li>
<li>Image size: 512</li>
<li>Batch size: 64</li>
<li>Epochs: 75</li>
<li>Finetuning for each cell type</li>
<li>Mixed precision</li>
</ul>
<p><strong>6. Prediction</strong></p>
<ul>
<li>TTA: 10</li>
<li>Use embeddings instead of final probability scores</li>
<li>Run k-Nearest Neighbors for each cell type separately</li>
<li><strong>Hungarian algorithm is used to match cell types with plates, wells with siRNAs</strong></li>
</ul>
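<p>A minimal sketch of the Hungarian matching step (scipy's <code>linear_sum_assignment</code> implements it; the variable names are illustrative):</p>
<pre><code>import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_assign(similarity):
    # similarity[i, j]: kNN-based score of well i for candidate siRNA j within one plate group
    well_idx, sirna_idx = linear_sum_assignment(-similarity)   # maximize total similarity
    return dict(zip(well_idx, sirna_idx))
</code></pre>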
<p><strong>7. Result</strong></p>
<ul>
<li>Public LB: 0.701</li>
<li>Private LB: 0.959</li>
</ul>
<p><strong>8. Observations</strong></p>
<ul>
<li>I didn't manage to leverage ArcFace :(</li>
<li><strong>Hungarian algorithm boosted score a lot</strong></li>
<li>TTA helps too</li>
</ul>
<p>GitHub link: <a href="https://github.com/rebryk/kaggle/tree/master/recursion-cellular">https://github.com/rebryk/kaggle/tree/master/recursion-cellular</a></p> |
Recursion Cellular Image Classification | Second Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Recursion and Kaggle for sponsoring this very interesting competition and congratulations to all that went through the journey. We learned a lot and enjoyed the entire process. Looking forward to learning from other teams’ solutions.</p>
<h3>Input and preprocessing:</h3>
<ul>
<li>6 channel input, </li>
<li>per image standardization (minus the mean and divide by the standard deviation), </li>
<li>random crop 384x384,</li>
<li>random flip, </li>
<li>random rotation multiple of 90 degrees. </li>
</ul>
<h3>Modeling:</h3>
<p>We modified ResNet to limit the receptive field size of the output, as we suspect it is the individual cells and their immediate neighbors that contain the most discriminating information. Here is the list of things we modified from the vanilla ResNet:
- Fewer blocks. ResNet typically has 4 chunks of blocks; some of our models only have 2 chunks.
- More 1x1 conv blocks
- Average pooling from lower blocks, concatenated with average pooling from higher blocks
- Remove the immediate max-pooling after the first convolution
- Replace the 1x1 convolution with 3x3 convolution in the shortcut layer. This increased the smoothness of the test accuracy during training, but only increased the final testing accuracy slightly. </p>
<p>We used all the negative and positive controls (including those in the test plates) as part of the training set. The output is 1139x4 logits.</p>
<h3>Loss:</h3>
<p>We used the [ArcFace loss posted by bestfitting] (<a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109</a>). We made the <code>gamma</code> parameter adjustable. We were struggling to get ArcFace to converge initially until we tuned the gamma parameter. Our settings are <code>gamma = 0.2, m = 0.4, s = 32</code></p>
<h3>Optimizer:</h3>
<p>Adam optimizer with the following schedules:
<code>
0-20: linear warm up to 3e-3
20-60: 3e-3
60-80: 9e-4
80-90: 3e-4
90-100: 3e-5
100-120: 3e-6
</code></p>
<h3>Post-processing:</h3>
<ul>
<li>Center the embeddings by plate.</li>
<li>Average the embeddings from both sites to obtain per-well embedding.</li>
<li>For each cell line, obtain train center embeddings by averaging together the siRNA embeddings.</li>
<li>Compute cosine-similarity of each well’s embedding to train center embeddings. </li>
<li>Use LSA to compute label assignment based on the 277 leak.</li>
</ul>
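<p>A rough sketch of these post-processing steps (variable and function names are ours for illustration, not the exact code we used):</p>
<pre><code>import numpy as np
from scipy.optimize import linear_sum_assignment

def postprocess(test_emb, plate_ids, class_centers, leak_groups):
    # test_emb:      (n_wells, d) per-well embeddings, already averaged over the two sites
    # plate_ids:     (n_wells,)   plate id of each well
    # class_centers: (1108, d)    train center embeddings for this cell line
    # leak_groups:   dict plate_id -> list of the 277 siRNA ids allowed on that plate
    preds = np.zeros(len(test_emb), dtype=int)
    for plate in np.unique(plate_ids):
        idx = np.where(plate_ids == plate)[0]
        emb = test_emb[idx] - test_emb[idx].mean(axis=0)        # center by plate
        emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

        cands = np.asarray(leak_groups[plate])
        cents = class_centers[cands]
        cents = cents / np.linalg.norm(cents, axis=1, keepdims=True)

        sim = emb @ cents.T                                     # cosine similarity
        r, c = linear_sum_assignment(-sim)                      # LSA over the 277 leak
        preds[idx[r]] = cands[c]
    return preds
</code></pre>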
<h3>Pseudo-labeling:</h3>
<p>An ensemble of 5 models achieved a public LB score of 0.993 and a private LB score of 0.9957 without pseudo-labeling (single model 0.990 and 0.9947). We then collected all our public LB 0.990+ predictions and identified 327 examples that were not consistent. All the test predictions not in this set of 327 were then used as pseudo-labels. An ensemble of models trained on this pseudo-label set achieved a public LB of 0.997, and a private LB of 0.9967. We also tried iteratively adding more pseudo-labels to the training set (500, 700, 900 per experiment), but it did not improve our public LB score.</p> |
Open Images 2019 - Instance Segmentation | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners. Here is our solution:</p>
<h2>Codebase and hardware</h2>
<p>We used <a href="https://github.com/open-mmlab/mmdetection">mmdetection</a> with 4 Tesla T4 during training, 8 Tesla T4 during TTA inference.</p>
<h2>Model setup</h2>
<p>There are 300 classes, in 3 hierarchical levels:
- 275 leaf classes
- 23 parent classes
- 2 grandparent classes (<code>Carnivore</code> and <code>Reptile</code>)</p>
<p>For the 275 leaf classes, we train models with these 275 classes as labels and use inference results directly.</p>
<p>For the 25 parent and grandparent classes, there are two methods to get the predictions:
- (i). Use the prediction results from the leaf model, and “expand” to the parent and grandparent. For example, if the leaf model predicted a <code>Tortoise</code> mask, we add a <code>Turtle</code> prediction (its parent) and a <code>Reptile</code> prediction (its grandparent) with the same mask and score. <br>
- (ii). We train models with only the 23 parent classes as labels, and expand to the 2 grandparent classes. Note that for these models, we need to create hierarchical expansion of the instance segmentation before training (as explained <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/challenge_evaluation.md#instance-segmentation-track">here</a>).</p>
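<p>Method (i) is essentially a dictionary walk up the hierarchy. A toy sketch is shown below (the parent map only contains a fragment of the real 300-class hierarchy):</p>
<pre><code># toy parent map: only a fragment of the real 300-class hierarchy
PARENTS = {'Tortoise': 'Turtle', 'Turtle': 'Reptile'}

def expand_hierarchy(predictions):
    # copy every leaf prediction to its parent/grandparent with the same mask and score
    expanded = list(predictions)
    for p in predictions:
        label = p['label']
        while label in PARENTS:
            label = PARENTS[label]
            expanded.append({**p, 'label': label})
    return expanded
</code></pre>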
<h2>Validation set</h2>
<p>We sampled a small subset from the official validation set for faster inference, 2844 images for the leaf model and 3841 images for the parent model. We used the validation scores to decide when to adjust learning rate, and to choose best checkpoints. </p>
<p>The validation scores are significantly higher than the LB. For the 275 leaf classes, we have a validation score over 0.8. Nevertheless, the delta between validation and LB is stable.</p>
<h2>Training set rebalancing</h2>
<p>The open images dataset is extremely imbalanced between the 300 classes. We rebalanced the training data for the leaf model as follows:
Sort the 275 classes by descending number of images. So class #0 is the largest class.</p>
<p>| Group | Class | original num of imgs per class | rebalancing |
|-----|------------|--------------------------------|-------------------------------|
| 1 | #241 to #274 | 150-13 | oversample x10 |
| 2 | #64 to #240 | 1500-150 | oversample to 1500 imgs/class |
| 3 | #24 to #63 | 6k-1500 | no rebalancing |
| 4 | #0 to #23 | 89k-6k | downsample to 6k* |</p>
<p>Each epoch has about 450k images after rebalancing. </p>
<p>*For the downsample, we use different random seeds for each epoch to feed the model as many images as possible. Also, this 6k number is a rough target. In actual sampling, the top few classes ended up having more images because many images have multiple labels, and top classes already have more than 6k images after Group 1-3 sampling is done.</p>
<p>Similarly for the parent model, sort the 23 classes</p>
<p>| Group | Class | rebalancing |
|-------|----------|------------------------------|
| 1 | #10 to #22 | upsample to 10k imgs/class |
| 2 | #5 to #9 | no rebalancing |
| 3 | #0 to #4 | downsample to 30k imgs/class |</p>
<p>Each epoch has about 330k images after rebalancing.</p>
<h2>Single models</h2>
<p>We trained 3 cascade mrcnn leaf models
- L1: backbone x101
- L2: backbone r101 + deformable module
- L3: backbone x101 + deformable module</p>
<p>and 1 cascade mrcnn parent model
- P1: backbone x101</p>
<p>hyper-parameters:
<code>
imgs_per_gpu=1
num_gpu=4
img_scale=[(1333, 640), (1333, 960)]
multiscale_mode='range'
</code></p>
<p>lr schedule:
model L1: 0.005 for 760k iterations; 0.005/3 for 60k iterations; 0.005/15 for 152k iterations
model L2: 0.005 for 675k iterations; 0.005/3 for 64k iterations; 0.005/15 for 48k iterations
model L3: 0.005 for 226k iterations; 0.005/5 for 132k iterations; 0.005/50 for 26k iterations
model P1: 0.005 for 148k iterations; 0.005/5 for 62k iterations; 0.005/50 for 8k iterations</p>
<p>Training took 0.5-0.65 hours per 1000 iterations with 4 T4s. So in total, training the 4 models took about 22, 18, 10 and 5 days respectively.</p>
<p>Public/private scores of each (<code>max_per_img=120, thr=0</code>, without TTA):</p>
<p>| model | public score | private score | |
|-------|--------------|---------------|----------------|
| L1 | 0.4708 | 0.4354 | on 275 classes |
| L2 | 0.4705 | 0.4275 | on 275 classes |
| L3 | 0.4712 | 0.4324 | on 275 classes |
| P1 | 0.0349 | 0.0344 | on 25 classes |</p>
<h2>TTA</h2>
<p>We used the TTA implementation from <a href="https://github.com/amirassov/kaggle-imaterialist">Miras Amir's winning solution</a> of iMaterialist (Fashion) 2019 </p>
<p>For the leaf model, it’s the ensemble of 3 single models, 2 scales (1333,800) and (1600,960), and flip. At the RPN and BB stages, it’s the NMS ensemble of the 12 resulting variants (3 models x 2 scales x 2 flips). In the end, it’s the mean of all 12 masks for each instance. </p>
<p>TTA inference of the leaf model is very slow, which took about 525 T4-hours. We split the 99999 images into 25 chunks and ran it in parallel. </p>
<p>For the parent model, it’s the ensemble of 2 scale and flip, but just one single model.</p>
<p>Lastly, for the 25 parent and grandparent classes, we implemented NMS ensemble at mask level (i.e. calculating the mask iou instead of bbox iou when determining which ones to suppress) to ensemble (i) leaf class predications expanded to 25 classes, and (ii) parent class predications expand to 25 classes.</p>
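<p>A simplified sketch of NMS at mask level is shown below (the real ensemble also merges scores across models; this only illustrates the mask-IoU suppression idea):</p>
<pre><code>import numpy as np

def mask_iou(m1, m2):
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return inter / union if union > 0 else 0.0

def mask_nms(masks, scores, iou_thr=0.5):
    # keep the highest-scoring masks, suppressing overlaps by mask IoU instead of box IoU
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if not any(mask_iou(masks[i], masks[j]) >= iou_thr for j in keep):
            keep.append(i)
    return keep
</code></pre>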
<p>Public/private scores of each (<code>max_per_img=120, thr=0</code>):</p>
<p>| model | public score | private score | |
|--------------------------|--------------|---------------|----------------|
| leaf model ensemble | 0.5007 | 0.4548 | on 275 classes |
| leaf ensemble expanded | 0.0325 | 0.0330 | on 25 classes |
| ensemble of (i) and (ii) | 0.0370 | 0.0370 | on 25 classes |</p>
<p>At this point, the total score is public 0.5007+0.0370 = 0.5378, private 0.4548+0.0370 = 0.4918. </p>
<p>There’s one last thing we did to boost scores to 0.5383/0.4922: set <code>max_per_img=200</code> for the leaf ensemble. But we only had time (also restricted by sub file size) to do this for 12/25 of the test images.</p>
<h2>Code</h2>
<p>Training, inference, pre- and post-process code are available at <a href="https://github.com/boliu61/open-images-2019-instance-segmentation">https://github.com/boliu61/open-images-2019-instance-segmentation</a>
Trained model weights are also linked in the readme there</p> |
Open Images 2019 - Instance Segmentation | Keras Mask-RCNN for Instance Segmentation 2019 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I made a release of my Mask-RCNN code for Instance Segmentation task:</p>
<p><a href="https://github.com/ZFTurbo/Keras-Mask-RCNN-for-Open-Images-2019-Instance-Segmentation">https://github.com/ZFTurbo/Keras-Mask-RCNN-for-Open-Images-2019-Instance-Segmentation</a></p>
<p>It wasn’t used in our best submission because I finished training too late. But it gives a good result on the LB, so I decided to release it. </p>
<p><img src="https://raw.githubusercontent.com/ZFTurbo/Keras-Mask-RCNN-for-Open-Images-2019-Instance-Segmentation/master/img/mask_rcnn_prediction_example.jpg" alt=""></p>
<p>Repository contains:</p>
<ul>
<li>Pre-trained Mask R-CNN models (ResNet50, ResNet101 and ResNet152 backbones)</li>
<li>Example code to get predictions with these models for any set of images</li>
<li>Code to train (continue training) model based on Keras Mask R-CNN and OID dataset</li>
</ul>
<p>I was able to get <strong>0.4670</strong> Public LB and <strong>0.4311</strong> Private LB using ensemble of these 3 models.</p>
<p>There are 3 graphs showing the training process for each model. I changed training parameters over the epochs (freezing the backbone, ReduceLR, AdamAccumulate, different image sampling strategies). The fast growth in score occurs at the places where I switched to uniform class sampling. Also, to speed up the ResNet101 training, I copied the decoder weights from the trained Mask-RCNN with the ResNet152 backbone, so ResNet101 reached a good score in a much lower number of epochs.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/661345/14368/ResNet50.png" alt=""></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/661345/14369/ResNet101.png" alt=""></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/661345/14370/ResNet152.png" alt=""></p> |
Open Images 2019 - Instance Segmentation | [ODS.ai] - ML trainings, video of presentation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Instance Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In the video, I talk about the features of my solution and the best solutions from the top of LB.
<a href="https://youtu.be/NGnOY-AzDBg">link to video</a>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F407384%2Fb99a832d645a78ce0ba00d0e7f64bcaa%2FScreenshot%20from%202019-12-14%2019-41-06.png?generation=1576341697633505&alt=media" alt=""></p> |
Recursion Cellular Image Classification | 117th place solution [0.718] using PyTorch & fastai | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><em>A big thank you goes out to the competition organizers and the great community!</em></p>
<p><em>This was the first Kaggle competition I joined from the beginning, and I learned a lot! :-D</em></p>
<p><strong>Final setup for highest score:</strong>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1686473%2F0b3b7757f33b40bd54f75953d3d49659%2Frcic_network.png?generation=1570045807295295&alt=media" alt="">
- <a href="https://github.com/lukemelas/EfficientNet-PyTorch">ImageNet-pretrained EfficientNet-B3</a> with 6D input stage (2x 3D weights copied to get 6D input stage at the beginning of training).
- <a href="https://arxiv.org/abs/1905.00292">AdaCos metric learning approach</a> was implemented, which should be more stable without hyperparameter tuning when compared to other cosine-based softmax losses.
- Standard data augmentation (flip vertical/horizontal, rotation, zoom, lighting, warp) with custom implementation of <a href="https://arxiv.org/abs/1707.06183">„Color Augmentation for histopathology images”</a>. Data visualization function displaying the 6D images as 3D+3D image:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1686473%2F83c27c07dc7a61366fe8bc1a9f3bfbbe%2Frcic_data.png?generation=1570045826942943&alt=media" alt="">
- Embedding layers for meta data to let the model learn from celltype, plategroup, and experiment data (i.e., data leak). (However, there was no time in the end to optimize the embeddings used for the full training and test predictions. Something like a similarity search for the test experiments would have been an interesting approach.)
- <a href="https://github.com/oguiza/fastai_extensions/tree/master/shared/0_image_data_augmentation">CutMix</a> was used for the image data for some parts of the training. In addition, a linear interpolation of the embeddings was used during CutMix training.
- Predictions were carried out with 3x TTA to obtain the feature embeddings of the image and meta data input.
- Finally, <a href="https://github.com/facebookresearch/faiss">FAISS</a> was used for cosine similarity search of the extracted feature embeddings.</p>
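<p>The FAISS cosine-similarity search can be sketched roughly as follows (a simplified illustration, not the notebook code; embeddings are L2-normalized so that inner product equals cosine similarity):</p>
<pre><code>import faiss
import numpy as np

def build_index(train_emb):
    emb = np.ascontiguousarray(train_emb, dtype='float32')
    faiss.normalize_L2(emb)                     # so inner product == cosine similarity
    index = faiss.IndexFlatIP(emb.shape[1])
    index.add(emb)
    return index

def nearest_sirna(index, test_emb, train_labels, k=1):
    q = np.ascontiguousarray(test_emb, dtype='float32')
    faiss.normalize_L2(q)
    sims, idx = index.search(q, k)
    return train_labels[idx[:, 0]]
</code></pre>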
<p><strong>General training setup:</strong>
1. Train new network output stage/head with frozen pretrained feature extractor network.
1. Retrain pretrained extractor network with new head (approx. 60 epochs).
1. Add meta data embeddings, freeze feature extractor network and train embeddings and output stage (approx. 50 epochs).
1. Train entire network (approx. 150 epochs).</p>
<p>For all training stages the <a href="https://sgugger.github.io/the-1cycle-policy.html">1cycle policy</a> was used for several cycles. EfficientNet training was carried out with AdamW. CutMix was used for the late training stages.</p>
<p><strong><a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/README.md">Notebooks from the GitHub repo</a>:</strong>
- EfficientNet-B3 training:
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v20_efficientnet-b3.ipynb">Pretraining</a>
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v32_efficientnet-b3_celltype_plate-group-exp_Full.ipynb">Complex network</a>
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v41_efficientnet-b3_celltype_plate-group-exp_12D.ipynb">Complex network 12D</a> (2x 6 channel images)
- EfficientNet-B4 (training not finished):
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v43_efficientnet-b4_Mish_Ranger_LabelSmoothingLoss.ipynb">Pretraining</a>
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v44_efficientnet-b4_celltype_plate-group-exp_12D_model-exchange.ipynb">Complex network 12D</a> (2x 6 channel images)</p>
<p><strong>What didn’t work out or had no impact over a more basic setup:</strong>
- Separate classifiers for celltypes and/or plategroups (i.e., siRNA groups of 277).
- Some domain adaptation techniques.
- Mean or k-means clustering of siRNA feature embeddings for cosine similarity search.</p>
<p><strong>Learnings:</strong>
- A lot about PyTorch, fastai, and adjacent DL/ML topics.
- Stay longer on the smaller model side, do more A/B or CV testing of improvements, then first really over-fit, and then try to reduce it.
- EfficientNet-B3 retraining took forever and training of EfficientNet-B4 was not finished at the competition deadline (however, this looked very promising).</p>
<p>If you have any questions or are interested in further information, please feel free to contact me! :-)</p>
<p>Thank you very much!</p> |
IEEE-CIS Fraud Detection | 2nd Solution, CPMP View | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>When Giba and I joined the team, <a href="/sggpls">@sggpls</a> <a href="/tony321">@tony321</a> and <a href="/krivoship">@krivoship</a> already had a solid position in top 10 and there were less than 10 days left. It is hard to add value in this case, and I decided to focus on adding diversity to the models they were blending. For that I started my own dataset and borrowed bits of theirs. Let me describe the steps I took.</p>
<p><strong>Data Cleaning</strong>
The data was a bit similar to Malware, with time stamped observations, and many features with lots of values present in test and not in train. I screened all features by looking at train vs test distributions, for raw feature and also frequency encoding of features. Here is an example for <code>card1</code>. <br>
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/641506/14236/card1.png" alt="card1 plot">
Raw data is used in the left column, and its frequency encoding in the right column. Red is train, blue is test. In the left column, the x axis is <code>card1</code>. The top-left y axis is frequency: it shows the frequency of each card1 value in train (red) and test (blue). The bottom-left y axis is the mean of the target with a log transform; 0 is the train average. The right column is the same, except card1 is replaced by the log of its frequency.</p>
<p>We see that lots of card1 values only appear in test. If we use it directly it will lead to a major drop on the private LB. We also see that the frequency encoding of card1 looks more balanced across train and test. For this feature I discarded the original one and kept the frequency encoding.</p>
<p>I did not merge values as I did in Malware, for lack of time. I am not sure it would have helped a lot, in hindsight.</p>
<p><strong>Cross Validation</strong></p>
<p>Cross validation for time series is always tricky. Here is what I ended up with. I tried to mimic the fact that there is a significant time gap between train and test, and also that test can be several months away from train. </p>
<p>I run these folds, using months starting with 0. The | indicates the train/val split
0 | 2 3 4 5 6
0 1 | 3 4 5 6
0 1 2 | 4 5 6
0 1 2 3 | 5 6
This makes me train 4 models for a given model type. I then average the auc score of each of these models on the corresponding validation data. This scheme or some variants were reused by my team mates for validating their models. The advantage is that we have folds that test long term forecast.</p>
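<p>For illustration, these folds can be generated with a few lines like the sketch below (the helper name is made up, not my actual code):</p>
<pre><code>import numpy as np

def month_folds(months, train_month_counts=(1, 2, 3, 4), gap=1):
    # months: array with the month index (0..6) of each training row
    for k in train_month_counts:
        train_idx = np.flatnonzero(np.isin(months, range(k)))
        valid_idx = np.flatnonzero(months >= k + gap)
        yield train_idx, valid_idx
</code></pre>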
<p>I used this CV to evaluate features, to evaluate HPO, and for feature selection. For submission I tried two ways. One was to extrapolate the number of trees from the CV runs: I just ran a linear regression on the log of trees and predicted the value for the full train set. I also tried to use an unshuffled 8-fold CV and average all fold model predictions for test. Both ways were giving similar LB scores. I probably should have used both and blended the results, but I didn't...</p>
<p><strong>Feature Engineering</strong></p>
<p>I started by computing frequency encoding of all features, then used lightgbm with the above cross validation to select features using <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/107877#latest-635386">permutation importance</a> with some modification: I kept a feature only if permuting it did not improve any of the 4 model predictions. This removed some features. I repeated the process, using permutation importance to remove features until the feature list does not shrink anymore. With this I got 0.942 on public LB.</p>
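<p>The selection criterion can be sketched as follows (a simplified illustration with made-up names; it assumes <code>model.predict</code> returns a score, e.g. a lightgbm Booster):</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score

def keep_feature(models, X_vals, y_vals, base_aucs, feature, seed=0):
    # keep the feature only if permuting it does not improve any fold's validation AUC
    rng = np.random.RandomState(seed)
    for model, X_val, y_val, base in zip(models, X_vals, y_vals, base_aucs):
        X_perm = X_val.copy()
        X_perm[feature] = rng.permutation(X_perm[feature].values)
        if roc_auc_score(y_val, model.predict(X_perm)) > base:
            return False        # permuting helped, so the feature hurts this fold
    return True
</code></pre>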
<p>The next level was to look for user id, and this is when I teamed. I was hoping my new team mates had sorted out the user id puzzle. They did. They actually had a number of different uids. I selected two of them defined as follows:</p>
<pre><code>data['uid1'] = (data.day - data.D1).astype(str) +'_' + \
data.P_emaildomain.astype(str)
data['uid2'] = (data.card1.astype(str) +'_' + \
data.addr1.astype(str) +'_' + \
(data.day - data.D1).astype(str) +'_' + \
data.P_emaildomain.astype(str))
</code></pre>
<p>where <code>data.day</code> is the number of days since the beginning, computed from <code>TransactionDT</code>.</p>
<p>I did not want to use these user ids directly for fear of overfitting (in hindsight this was a mistake, and in my last sub I just ran catboost using these as features and it led to my best sub). I rather used these uids to compute aggregate features from the sequence of time intervals for transactions for a given uid, and also for the transaction amount. Here is one way I used them:</p>
<pre><code>def add_gr(data, col):
cols = data.columns
gr = data.groupby(col)
data[col+'_count'] = gr.TransactionID.transform('count').astype('int32')
data[col+'_next_dt'] = gr.TransactionDT.shift(-1)
data[col+'_next_dt'] -= data.TransactionDT
data[col+'_mean_dt'] = gr[col+'_next_dt'].transform('mean').astype('float32')
data[col+'_std_dt'] = gr[col+'_next_dt'].transform('std').astype('float32')
data[col+'_median_dt'] = gr[col+'_next_dt'].transform('median').astype('float32')
data[col+'_next_amt'] = gr.TransactionAmt.shift(-1)
data[col+'_mean_amt'] = gr.TransactionAmt.transform('mean').astype('float32')
data[col+'_std_amt'] = gr.TransactionAmt.transform('std').astype('float32')
data[col+'_median_amt'] = gr.TransactionAmt.transform('median').astype('float32')
new_cols = list(set(data.columns) - set(cols) -set([col+'_next_dt']))
return new_cols
</code></pre>
<p>where col is one of <code>uid1</code> or <code>uid2</code>.</p>
<p>With these I got 0.952 public LB.</p>
<p>Another significant improvement was to use target encoding of these uids and a set of transaction chains that my team mates derived by combining uids with additional feature columns that represent successive transaction amounts, eg <code>V307</code>. This moved my lgb model to 0.9606 public LB unless I am mistaken.</p>
<p>I added few other features that improved a bit, but nothing very important.</p>
<p><strong>Final Blend</strong></p>
<p>I mostly used lightgbm, but I also tried xgboost and catboost. XGboost was giving models extremely close to lgb, with both very high correlation in predictions and similar LB score. My features were added to my team mates' already very good models, yielding even higher scores: Giba had one at 0.9644, and my Russian team mates had models up to 0.9626. Giba then used his blending magic to reach 0.9662 public LB on the day of the original deadline. We then wondered about what could yield us the extra 0.001 we needed to reach the prize zone. We realized we were not using the main property of the dataset: if a card is a fraud at some point, then it is a fraud in the future of that point. Well, Giba captured that partially with some lag variables, but that was it. We then added some post processing that overrides predictions if a card is a fraud in the past, and this moved us to 0.9672 at the last minute.</p>
<p><strong>What did not work</strong></p>
<p>Many things, but the one I spent the most time on was trying to leverage the fact that if a card is a fraud at some point, then it is a fraud in the future of that point. I didn't want to use post processing as it can lead to overfitting if the card ids are not perfect. Indeed, if one of our ids actually covers several cards, then one fraud isn't necessarily followed by all frauds. I therefore tried this:
Create one submission. Then for each uid create a lag variable equal to the average of predictions before the current point, and train again. This is like stacking, but with a temporal shift.
This did not work, CV skyrocketed, but LB dropped by 0.01. I still think there is something to be done to make it work, but I could not find how.</p>
<p><strong>What worked</strong></p>
<p>Teaming. See <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111159">here</a> for more details.</p>
<p>It is clear that 3 + 1 + 1 > 5 here. If you have never teamed then try, you will not regret it.</p>
<p>Edit: Additional information about our solution can be found in these two posts:</p>
<p>How to find card ids: <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111559">https://www.kaggle.com/c/ieee-fraud-detection/discussion/111559</a>
The whole team story: <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111554">https://www.kaggle.com/c/ieee-fraud-detection/discussion/111554</a></p> |
IEEE-CIS Fraud Detection | 17th place solution summary - there is nothing special | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to express my sincere gratitude to Kaggle, IEEE-CIS and Vesta for hosting the competition.
This competition was really amazing and fun and I learned a lot.</p>
<h2>Retrospection</h2>
<p>When I submitted my last submission on the last day of the competition, my public ranking was 32nd place, and I fell asleep hoping that the shake-up wouldn't be too bad because 32nd place was already good enough for me.
When I woke up, I couldn't believe that my final ranking had gone up 15 places and that I had won a gold medal in 17th place.
In fact, I had a nightmare of barely winning a bronze medal because of a shake-up. 😂 </p>
<p>I had two main goals when joining this competition.
One was to become a competition expert by winning a bronze medal or more.
The other was that I had played as a team in the previous Instant Gratification competition, so this time I wanted to experience a competition alone, end-to-end.
Achieving these goals was enough for me, and I never expected to win a solo gold medal.</p>
<p>I know how hard it is to win a gold medal through the past few competitions, and I think a lot of other teams are better than me, so I doubt if I deserve a gold medal. All I can think is that I was really lucky in this competition.</p>
<p>As you can see the summary and code I shared below, what I did was just a good use of reading the public kernels and the discussions.</p>
<h2>Solution Summary</h2>
<h3>Validation Scheme</h3>
<p>I thought setting up a reliable validation scheme was the most crucial part. Many people used group k-fold with month as the group, or k-fold without shuffle, but I felt uncomfortable predicting the past with future data.</p>
<p>Instead, reading various kernels and discussions, my strategy was to get an idea from Chris( <a href="/cdeotte">@cdeotte</a> )'s discussion( <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/108571#625045">https://www.kaggle.com/c/ieee-fraud-detection/discussion/108571#625045</a> ).
I used the last 20% of the train data as a holdout and, with 3 random seeds, measured the average AUC as the validation score and the average number of iterations to use for predicting the test data. When predicting the test data, I used stratified 5-fold without early stopping.</p>
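<p>A simplified sketch of this scheme with LightGBM (illustration only; <code>X</code> and <code>y</code> are assumed to be sorted by <code>TransactionDT</code>, and the parameters are not my actual ones):</p>
<pre><code>import numpy as np
import lightgbm as lgb

split = int(len(X) * 0.8)                      # last 20% of the time-ordered train as holdout
X_tr, y_tr = X.iloc[:split], y.iloc[:split]
X_ho, y_ho = X.iloc[split:], y.iloc[split:]

aucs, best_iters = [], []
for seed in (0, 1, 2):
    params = dict(objective='binary', metric='auc', learning_rate=0.01, seed=seed)
    model = lgb.train(params, lgb.Dataset(X_tr, y_tr),
                      num_boost_round=10000,
                      valid_sets=[lgb.Dataset(X_ho, y_ho)],
                      callbacks=[lgb.early_stopping(100)])
    aucs.append(model.best_score['valid_0']['auc'])
    best_iters.append(model.best_iteration)

val_score = np.mean(aucs)                      # tracked against the public LB
n_rounds = int(np.mean(best_iters))            # reused for the 5-fold fit without early stopping
</code></pre>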
<p>After this validation scheme was established, the validation score and the public lb trend were almost perfectly correlated. I think this is the part where I did the best in this competition.</p>
<p>Below is the validation score, public lb, private lb trend after my validation scheme is set up.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F940278%2Fa35a75a05b442d2cdc6faf1291459c1d%2Fieee_val_score.png?generation=1570513675066873&alt=media" alt=""></p>
<h3>Pipeline</h3>
<p>After establishing the validation scheme, I followed Konstantin( <a href="/kyakovlev">@kyakovlev</a> )'s discussion( <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/107697">https://www.kaggle.com/c/ieee-fraud-detection/discussion/107697</a> ) and created a pipeline like this:
1. Feature engineering based on discussions, public kernels, or features I thought myself
2. Evaluate the model performance using validation score
3. For adversarial validation, filter out features using Roman( <a href="/nroman">@nroman</a> )'s covariate shift
4. Re-evaluate the model performance and get final validation score
5. Predict test data and submit</p>
<h3>Preprocessing</h3>
<ul>
<li>Memory reduction : <a href="https://www.kaggle.com/alexeykupershtokh/safe-memory-reduction">https://www.kaggle.com/alexeykupershtokh/safe-memory-reduction</a></li>
<li>Filling C nans with 0 in test</li>
<li>Filling card nans : <a href="https://www.kaggle.com/grazder/filling-card-nans">https://www.kaggle.com/grazder/filling-card-nans</a></li>
</ul>
<h3>Modeling</h3>
<p>I used two models: LightGBM and CatBoost. Both are based on Konstantin( <a href="/kyakovlev">@kyakovlev</a> )'s public kernel.</p>
<p>I used LightGBM until ensemble based on <a href="https://www.kaggle.com/kyakovlev/ieee-fe-with-some-eda">https://www.kaggle.com/kyakovlev/ieee-fe-with-some-eda</a></p>
<p>After reaching public lb 0.9592 with LightGBM, I created a CatBoost model for ensemble based on <a href="https://www.kaggle.com/kyakovlev/ieee-catboost-baseline-with-groupkfold-cv">https://www.kaggle.com/kyakovlev/ieee-catboost-baseline-with-groupkfold-cv</a></p>
<p>The score of the final model is:
- LightGBM validation score : 0.94838, public lb : 0.959666, private lb : 0.937376
- CatBoost validation score : 0.95212, public lb : 0.936798, private lb : 0.936798</p>
<h3>Feature Engineering</h3>
<ul>
<li>Frequency encoding</li>
<li>Feature combining
<ul><li>card1_addr1, ProductCD_card1, TransactionAmt_dist2, ProductCD_TransactionAmt, ProductCD_cents</li></ul></li>
<li>Finding uid
<ul><li>Hint from <a href="https://www.kaggle.com/akasyanama13/eda-what-s-behind-d-features">https://www.kaggle.com/akasyanama13/eda-what-s-behind-d-features</a></li>
<li>uid : combination of ProductCD, card1-6, addr1, and D1-TransactionDay (see the sketch after this list)</li></ul></li>
<li>uid aggregation
<ul><li>TransactionAmt</li>
<li>dist1</li>
<li>D features</li>
<li>C features</li>
<li>Some identity features : id_01, id_02, id_05, id_06, id_09, id_14</li>
<li>Some V features : V258, V294, V306, V307, V308</li></ul></li>
<li>C, V features nan group pattern</li>
</ul>
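<p>A rough sketch of how such a uid and a few aggregations on top of it could be built (the helper name, the card columns used, and the aggregated columns are illustrative, not the author's exact choices):</p>
```
import pandas as pd

def add_uid_features(df: pd.DataFrame) -> pd.DataFrame:
    # D1 counts days since some event on the card, so (D1 - day of transaction)
    # is roughly constant per card and can serve as part of a user key
    df['TransactionDay'] = df['TransactionDT'] // 86400
    df['uid'] = (df['ProductCD'].astype(str) + '_' + df['card1'].astype(str) + '_' +
                 df['card2'].astype(str) + '_' + df['addr1'].astype(str) + '_' +
                 (df['D1'] - df['TransactionDay']).astype(str))

    # uid-level aggregations of the kind listed above
    for col in ['TransactionAmt', 'dist1', 'D15', 'C13', 'id_02', 'V258']:
        grp = df.groupby('uid')[col]
        df[f'uid_{col}_mean'] = grp.transform('mean')
        df[f'uid_{col}_std'] = grp.transform('std')
    df['uid_count'] = df.groupby('uid')['uid'].transform('count')
    return df
```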
<h3>Feature Selection</h3>
<ul>
<li>Covariate shift from <a href="https://www.kaggle.com/nroman/eda-for-cis-fraud-detection">https://www.kaggle.com/nroman/eda-for-cis-fraud-detection</a></li>
<li>Drop all V features</li>
</ul>
<h3>Ensemble</h3>
<p>The final ensemble is a weighted geometric mean of LightGBM and CatBoost.
- LightGBM + CatBoost ensemble validation score : 0.95541, public lb : 0.963251, private lb : 0.940934</p>
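<p>For reference, a weighted geometric mean of two prediction vectors can be computed like this (the weights and the toy probabilities are placeholders):</p>
```
import numpy as np

def weighted_geometric_mean(p1, p2, w1=0.5, w2=0.5):
    # equivalent to exp(w1*log(p1) + w2*log(p2)); for AUC only the ranking matters
    return p1 ** w1 * p2 ** w2

pred_lgb = np.array([0.01, 0.80, 0.30])   # toy LightGBM probabilities
pred_cat = np.array([0.02, 0.60, 0.40])   # toy CatBoost probabilities
blend = weighted_geometric_mean(pred_lgb, pred_cat, 0.6, 0.4)
```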
<h3>Code Link</h3>
<p>I wanted to share the final code as a Kaggle kernel, but because of the memory limitations of the Kaggle kernel I failed to load the code, so I shared it on GitHub.
- Preprocessing : <a href="https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-Preprocessing-uid-memory-reduction.ipynb">https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-Preprocessing-uid-memory-reduction.ipynb</a>
- LightGBM : <a href="https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-17th-Place-Solution-LightGBM.ipynb">https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-17th-Place-Solution-LightGBM.ipynb</a>
- CatBoost, Ensemble : <a href="https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-17th-Place-Solution-CatBoost-Ensemble.ipynb">https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-17th-Place-Solution-CatBoost-Ensemble.ipynb</a></p>
<p>As you can see from my solution, there is nothing really special or new. I just followed Konstantin( <a href="/kyakovlev">@kyakovlev</a> )'s kernels and discussions. It was something like standing on the shoulders of giants and reaching out a bit more. </p>
<p>Thank you and congrats to all the winners!</p> |
IEEE-CIS Fraud Detection | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for all participants, Kaggle Admin, IEEE-CIS and Vesta corporation who realized this exciting competition. I would like to briefly share my 13th solution, which gave me my first solo gold medal. </p>
<h2>1. Validation Scheme</h2>
<p>First I used 5-fold CV without shuffling. Then, in the later stage of the competition, I switched to a simple time split and used roughly the last 100k records as the validation set. This is because 5-fold CV took too much time. In addition, some of my important features were generated based on the time split, and 5-fold CV might have caused leakage for them. </p>
<h2>2. Reconstructing user_id</h2>
<p>When I closely examined df[df.isFraud==1], I found that there exist many clusters of very similar records. Because I had experienced a similar case in the Home Credit Default Risk competition, I could quickly understand what it meant and what I should do. I generated a wide variety of user_ids from the D features and card information with a brute-force approach. They covered a wide spectrum, from a strict and deterministic one to slightly ambiguous ones that are robust to changes in physical address/e-mail address. This combination boosted my score greatly. The strict version of user_ids alone gave me ~0.9620 on the public LB, but both of them together resulted in >0.9640. </p>
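<p>A minimal sketch of what generating several candidate user_ids of different strictness might look like (the exact column subsets are illustrative assumptions, not the author's actual combinations):</p>
```
import pandas as pd

def make_user_ids(df: pd.DataFrame) -> pd.DataFrame:
    df['day'] = df['TransactionDT'] // 86400
    df['card_open_day'] = df['day'] - df['D1']   # roughly constant per card

    # strict, deterministic id: many columns must match
    strict_cols = ['card1', 'card2', 'card3', 'card5', 'addr1', 'addr2',
                   'P_emaildomain', 'card_open_day']
    # looser id: robust to a change of address / e-mail address
    loose_cols = ['card1', 'card3', 'card5', 'card_open_day']

    df['uid_strict'] = df[strict_cols].astype(str).agg('_'.join, axis=1)
    df['uid_loose'] = df[loose_cols].astype(str).agg('_'.join, axis=1)
    return df
```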
<h2>3. Feature Engineerings</h2>
<p>I conducted various kinds of feature engineering using the user_ids. Out of the many generated features, Seq2Dec and TargetEncoding on past records showed significant improvement in both CV and LB. I think Seq2Dec is a good method to encode individual transactions because it prevents loss of order information and it can handle series of variable length. In addition, I added typical features such as count, share, length of the series of the user_id, sinse_first_obs_user_id, to_last_obs_user_id, etc.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F570506%2F5b565b73c57cc0195047f06e30aac7de%2Fseq2dec.png?generation=1570335898830916&alt=media" alt="">
Fig. Schematic illustration of Seq2Dec</p>
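<p>The write-up does not spell out the exact implementation of Seq2Dec. One plausible reading is that a user's ordered history of a binary/categorical value is turned into the digits of a single decimal number, which keeps the order and handles variable-length series; the toy sketch below follows that assumption and is not the author's code:</p>
```
def seq2dec(values):
    """Encode an ordered sequence of single-digit values as one decimal number.

    Example: past isFraud flags [1, 0, 1] -> 0.101, so order and length survive.
    """
    if len(values) == 0:
        return -1.0                                   # marker for an empty history
    return float('0.' + ''.join(str(int(v)) for v in values))

print(seq2dec([1, 0, 1]))   # 0.101
print(seq2dec([0, 0, 1]))   # 0.001
```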
<h2>4. Modeling and Blending</h2>
<p>I mainly used LightGBM and added Regularized Greedy Forest as a small extra flavor. I added model diversity by seed averaging and by bagging models with different n_folds for Seq2Dec/TargetEncoding.</p>
<h2>5. Result</h2>
<p>13th out of 6381 teams, Public: 0.965086, Private: 0.941338</p> |
IEEE-CIS Fraud Detection | 5th Place Solution: Lions | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank Kaggle and IEEE-CIS and Vesta corporation for hosting this competition. Congratulations to the winners. Thanks to the best team members <a href="/hmdhmd">@hmdhmd</a>, <a href="/shuheigoda">@shuheigoda</a>, <a href="/yryrgogo">@yryrgogo</a> who were able to share great results.</p>
<p>In this solution, we describe what we have devised, while omitting what has already been explained in other solutions as much as possible.</p>
<h2>Final submission summary</h2>
<ul>
<li>Simple average of LightGBM individually modeled by four team members. The effect of averaging was great even though everyone modeled using LightGBM & similar parameters.
<ul><li>Although our team was formed from the beginning, we almost didn’t share modeling ideas except the core part of the competition (e.g. userID). This is one of the factors that greatly improved the ensemble.</li></ul></li>
<li>After averaging, we override isFraud to 1 as following two user patterns. This gave us about <code>+0.0011</code> boost in private LB.
<ol><li>Users who are certain to exist in train dataset and isFraud=1. (335rows)</li>
<li>Users who are confirmed as isFraud=1 in test-public dataset by LB probing. (207rows)</li></ol></li>
<li>stats
<ul><li>Final Result: <code>private 0.942453 / public 0.964383</code></li>
<li>Base Models (LightGBM)
<ul><li>private 0.933088 / public 0.956396</li>
<li>private 0.938111 / public 0.960937</li>
<li>private 0.936381 / public 0.959820</li>
<li>private 0.933819 / public 0.958436</li></ul></li></ul></li>
</ul>
<h2>Key Ideas</h2>
<h3>User identification</h3>
<ul>
<li>User identification was a very important factor, but we briefly describe as it has already been explained in many other solutions.</li>
<li>Column used for identification: <code>D1/D3/addr1/P_emaildomain/ProductID/C13</code></li>
<li>The identification work was performed with attention to the following two points.
<ol><li>emaildomain: Replace <code>anonymous.com</code> and <code>mail.com</code> with <code>NaN</code></li>
<li>There is a gap period between train and test-public
<ul><li>Recognized as the same user even if <code>D3</code> etc. don't fit due to the gap period</li></ul></li></ol></li>
<li>In the above user identification is not perfect, so we aggregate based on various keys to generate user group features as describe details below.</li>
</ul>
<h3>User Group Feature</h3>
<ul>
<li>This is the same type of feature that is called userID , but in one model we didn't try to identify the User. The concept of the model is to let the tree model specify a userID . We added a lot of necessary features to the model.</li>
<li>We made about 80 features like this: <code>df.groupby('uid_and_register_date')[C_or_V_columns].agg(['mean'])</code>. This is similar to the 1st place solution as Chris described. The single-LGB improvement from these 80 features was the following: <code>LB 0.9519-&gt;0.9565, CV 0.9257-&gt;0.9339</code></li>
<li>Since this user grouping is not perfect either, we created features to identify different users within a user group.
<ul><li>Whether the value matches the most frequent value in the group</li>
<li>Whether the value appeared in the user group last month</li>
<li>How many days ago that value appeared in the user group
etc...</li></ul></li>
</ul>
<h3>Utilizing test data which we grasp isFraud correctly</h3>
<ul>
<li>Since we were able to determine isFraud almost exactly for more than 5000 rows(1) of test data by user identification and probing, we utilized it in the following three ways. (1: total 5448 rows / positive: 542 / negative: 4906)
<ol><li>Override at final submission (as written above)
<ul><li>Since the loss is very large when a transaction whose true value is 1 is overridden to 0, we only overrode transactions whose true value is isFraud=1.</li>
<li>This gave us about <code>+0.0011</code> boost. (Even if we didn't override, we got the gold medal.)</li></ul></li>
<li>Feature Selection
<ul><li>We calculated AUC with the data in the test set for which we knew the correct answer. If we add new features to model and the AUC dropped significantly, we threw away them.</li></ul></li>
<li>Determination of model ensemble ratio
<ul><li>The ensemble ratio was decided to maximize the value of above AUC calculated only in the test-private. (I'm not sure how useful this was.)</li>
<li>Weight optimize was executed using gpyopt.</li></ul></li></ol></li>
<li>The isFraud labeling rule could also be used to identify negatives. To be specific, for a user who is isFraud=0 in train and whose transactions are all completed within 120 days of the last transaction date of train, we can judge all (train & test) transactions as isFraud=0.</li>
</ul>
<h3>Feature Selection Policy</h3>
<ul>
<li>At the beginning of the competition, features that contributed to Adversarial Validation were not added because of fear of shakedown due to the use of features with large discrepancy in train / test.</li>
<li>However, from the experiment described below, it was found that even a feature with a large discrepancy in train-test contributes to improving the score. Therefore, we changed the policy to add features that contribute to improving prediction accuracy, even if they contribute to Adversarial Validation.</li>
<li>Experiment
<ul><li>Divide train data into 3 parts by the following ratio
<ul><li>(Pseudo) train: 50%</li>
<li>(Pseudo) public: 10%</li>
<li>(Pseudo) private: 40%</li></ul></li>
<li>We investigated the relationship between adv val AUC and oof, pub, priv AUCs under different conditions.</li>
<li>Regardless of the value of AUC of Adversarial Validation, if oof is high, pubic and private AUC tend to be high</li></ul></li>
</ul>
<h2>Model details</h2>
<ul>
<li>Four team members used LightGBM with almost the same settings
<ul><li>validation: <code>DT_M</code> GroupKFold</li>
<li>userID GroupKFold didn't work well.</li>
<li>parameter: almost same as konstantin's kernel.</li></ul></li>
<li>I (ML_Bear) made an original model using only the data of users who have a single transaction.
<ul><li>Purpose: if a user with a large number of transactions is caught in a scam, their normal transactions are also labeled as fraud, making it really difficult to see what the scam transaction is. Therefore, I tried to extract the characteristics of isFraud using the transactions of single-transaction users.</li>
<li>validation: StratifiedKFold</li>
<li>The oof / pred of this model was input to above LightGBM as feature.</li>
<li>parameter: almost same as konstantin's kernel.</li>
<li>Thanks to this special model(?), the public LB achieved by my model was not very high, but the weight of the ensemble was fairly high. (best my model: <code>private 0.935089 / public 0.958713</code>)</li></ul></li>
</ul>
<h2>Feature Engineering</h2>
<h3>Based on userID</h3>
<p>We used the userID excessively.
- <code>user_transaction_per_day</code>
- <code>user_transaction_amt_min</code>
- Aggregate the average and deviation of <code>TransactionDT.diff(1)</code> for the last N (= 2,3,4,5,10,20) rows.
- Counting the number of nulls for each userID for columns that contain 1% or more of nan.
- Aggregate the deviation, max, min, diff of max and min, and unique count (or rate) for <code>Cxx</code> or <code>Dxx</code> or 'V_group' features. (<code>userID_Vxx_Vxx_mean_mean</code>, <code>userID_Cxx_unique_rate</code>, etc.)</p>
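<p>A hedged sketch of a few of these userID aggregations (it assumes a <code>uid</code> column has already been attached and the frame is sorted by TransactionDT; names and the exact feature list are illustrative):</p>
```
import pandas as pd

def add_userid_aggregates(df: pd.DataFrame, n_last=5) -> pd.DataFrame:
    # df is assumed to be sorted by TransactionDT and to already carry a 'uid' column
    g = df.groupby('uid')

    df['user_transaction_amt_min'] = g['TransactionAmt'].transform('min')
    n_trans = g['TransactionDT'].transform('count')
    n_days = g['TransactionDT'].transform(lambda s: s.max() - s.min()) // 86400 + 1
    df['user_transaction_per_day'] = n_trans / n_days

    # mean / std of TransactionDT.diff(1) over the last n transactions of each user
    df['dt_diff'] = g['TransactionDT'].transform(lambda s: s.diff())
    df['user_last_dt_diff_mean'] = df.groupby('uid')['dt_diff'].transform(lambda s: s.tail(n_last).mean())
    df['user_last_dt_diff_std'] = df.groupby('uid')['dt_diff'].transform(lambda s: s.tail(n_last).std())

    # number of nulls per user, counted over columns with at least 1% NaN
    nan_cols = [c for c in df.columns if df[c].isna().mean() >= 0.01]
    df['user_nan_count'] = df[nan_cols].isna().sum(axis=1).groupby(df['uid']).transform('sum')
    return df
```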
<h3>Other features</h3>
<ul>
<li>Aggregation such as <code>TransactionAmt</code> with various keys including <code>regist_date</code> (Transaction date - D1)</li>
<li><code>Cxx</code> feature dimension reduction by PCA</li>
<li>TargetEncoding of <code>addr1</code> (Numerical features but TargetEncoding worked)</li>
<li>category groupby feature
<ul><li>Calculate frequency distribution of specified category data for each groupby key</li>
<li>Expresses the probability of fraud when it behaves differently from the usual by the probability of appearance of the category</li></ul></li>
<li>LDA / SVD of category co-occurrence matrix</li>
<li><code>Vxx</code> encodings
<ul><li>Since <code>Vxx</code> has many features, we tried a few encodings instead of using original values.</li>
<li>For example, since <code>V242~258</code> group has almost the same value scale, count encoding was performed for each value in that group. ("1" count in V242~258, "2" count in V242~258...)</li></ul></li>
</ul>
In the single-transaction users model
<ul>
<li>TargetEncoding of <code>TransactionAmt</code> (Numerical features but TargetEncoding worked)</li>
<li>TargetEncoding of <code>emaildomain</code> (both P and R)</li>
<li><code>TransactionAmt_mod_XXX_is_zero</code> (XXX=50,100,200)</li>
</ul>
Other trivial content
<ul>
<li>START_DATE
<ul><li>We set <code>transactionDT=0</code> as <code>11/30 00:00:00</code></li>
<li>Estimate approximate date from browser history</li>
<li>Looking at the number of transactions, we saw a pattern of "many transactions for 5 consecutive days → fewer transactions for 2 consecutive days". So we assumed the days with few transactions were Saturdays and Sundays, and then 11/30 fits perfectly.</li>
</ul>
<h2>What did not work</h2>
<ul>
<li>Seed Averaging
<ul><li>There was no high effect.</li>
<li>Averaging a model with high AUC in test data and a model with high AUC in train oof worked a little</li></ul></li>
<li>Split model for each <code>ProductCDs</code>.
<ul><li>We thought that the Fraud trend in <code>Cxx</code> and <code>Vxx</code> was different depending on <code>ProductCD</code>.</li>
<li>There were some features created for each <code>ProductCD</code> that helped improve CV & LB, but not enough.</li></ul></li>
<li>Change sample_weight according to the number of transactions</li>
</ul> |
IEEE-CIS Fraud Detection | 10th solution and code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks my teammates for their efforts. </p>
<h3><strong>Feature Engineering</strong></h3>
<p>Find the openCardDay (day - D1) by EDA (card = card_cols + addr_cols + email_cols, day = TransactionDT // 86400):
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1098915%2F666a266df4f34904073bf5f25b9432a0%2Feda.png?generation=1571312507671223&alt=media" alt="">
The distribution of day - D1 differs between isFraud=0 and isFraud=1.</p>
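<p>A small sketch of the kind of EDA behind this figure, building the card key and day - D1 and comparing the two classes (the column grouping and a loaded <code>train</code> DataFrame are assumptions):</p>
```
import pandas as pd

card_cols = ['card1', 'card2', 'card3', 'card4', 'card5', 'card6']
addr_cols = ['addr1', 'addr2']
email_cols = ['P_emaildomain', 'R_emaildomain']

train['day'] = train['TransactionDT'] // 86400
train['open_card_day'] = train['day'] - train['D1']
train['card_key'] = train[card_cols + addr_cols + email_cols].astype(str).agg('_'.join, axis=1)

# fraudulent and legitimate transactions show different open-card-day profiles
print(train.groupby('isFraud')['open_card_day'].describe())
# transactions sharing card_key and open_card_day can then be treated as one uid
```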
<ul>
<li><p>Encoding different uids by other columns --- useful for old uids</p></li>
<li><p>Counting the number of transactions and the types of product by uid in a period of time --- useful for new uids</p></li>
</ul>
<h3><strong>How to infer there is no shake up in this competition</strong></h3>
<p>Although the number of old uids in the test dataset decreased over time, the number of old uids with isFraud=1 remained largely unchanged.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1098915%2F3c00d5b0df96223745627793161f13d1%2FScreenshot%20from%202019-10-17%2019-47-06.png?generation=1571312848792088&alt=media" alt=""></p>
<h3><strong>How to prevent overfitting</strong></h3>
<p>Because the major improvement comes from old-uid encodings, more types of uid encodings mean a higher local CV. But not every uid in the train dataset is also in the test dataset. The trick to prevent overfitting is to set uids which appear only in train and have count < k to NaN. Then you can add more types of uids and encodings.
My three models:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1098915%2F8f40cc584044d9e56a8105d24589154f%2FScreenshot%20from%202019-10-17%2019-52-18.png?generation=1571313166608110&alt=media" alt=""></p>
<p>submission:<a href="https://www.kaggle.com/daishu/10th-submission-part-daishu">https://www.kaggle.com/daishu/10th-submission-part-daishu</a>
code:<a href="https://github.com/jxzly/Kaggle-IEEE-CIS-Fraud-Detection-2019">https://github.com/jxzly/Kaggle-IEEE-CIS-Fraud-Detection-2019</a></p> |
IEEE-CIS Fraud Detection | 15th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As a first thing, I want to thank both Kaggle and IEEE for letting us to compete in this challenge. It was a quite exhausting but also educative process for me.</p>
<p>I've already read all of the top solutions and I know that I don't have super different tricks from theirs. I just wanted to tell how my process shaped during the competition for those who are wondering.</p>
<p>Let's start.</p>
<p>My final submission has following main points:
- Generated my predictions by using unshuffled KFold. Since the test set was time split and I was using unshuffled KFold, I mimicked all these for my local validation scheme. I separated last 20% as hold-out and got predictions for it by setting another unshuffled KFold with the first 80% of the train data. Tested all operations like feature generation, parameter tuning, post processing etc. with this strategy.
- Only used LightGBM. I was neither able to generate sufficiently good CatBoost predictions nor NN predictions. Final sub is 4-5 different lightgbms' weighted blend.
- I couldn't perform any feature selection because all the methods I tried worsened my CV.
- Found different userid's based on different feature combinations. And tested their quality by calculating target means after shifting isFraud by 1 within train data.
- Example:
```
train['ReferenceDate'] = train.DayNumber - train.D1

key_cols = ['TransactionAmt','ProductCD','card1','card2','card3', 'card4','card5','card6','addr1','addr2','ReferenceDate','P_emaildomain']

train['key'] = train[key_cols].apply(tuple, 1)

train['previous_isfraud'] = train.groupby(['key'])['isFraud'].shift(1)

print(train.groupby('previous_isfraud')['isFraud'].mean())
# previous_isfraud
# 0.0    0.000528
# 1.0    0.991961
```
<ul>
<li>Shifting isFraud info from train to test rows for the same userids and applying post-processing was the main boost, of course. Attention: the post-processing was not directly copying isFraud from train to test; it was multiplying predictions by a constant that I found using the validation dataset (see the sketch after this list). Since I had multiple different userids, I also had multiple post-processing steps.</li>
<li>Creating many aggregated features based on userids.</li>
<li>Assigning mean of predictions to a userid's all transactions. This gives significant boost indeed. (Applied this only for userids with high accuracy)</li>
<li>For blending, I found that lgbm boosted with 'dart' method was quite useful.</li>
<li>Applied final blending on <strong>only new users</strong> in the test. </li>
</ul>
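<p>A hedged sketch of the two post-processing steps mentioned in the list above: scaling predictions of users matched to known frauds by a constant found on validation, and replacing each high-confidence userid's predictions with their group mean. The column names <code>previous_isfraud</code> and <code>userid</code> and the constant are placeholders:</p>
```
import numpy as np

def postprocess(test_df, preds, boost=3.0):
    preds = np.asarray(preds, dtype=float)

    # 1) users matched to a train user with isFraud=1: multiply by a constant
    #    tuned on the hold-out set (not a hard overwrite to 1)
    matched_fraud = (test_df['previous_isfraud'] == 1).values
    preds[matched_fraud] = np.clip(preds[matched_fraud] * boost, 0, 1)

    # 2) for userids identified with high accuracy, give every transaction of
    #    the user the mean prediction of that user
    tmp = test_df.assign(pred=preds)
    group_mean = tmp.groupby('userid')['pred'].transform('mean')
    preds = np.where(tmp['userid'].notna().values, group_mean.values, preds)
    return preds
```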
<p><strong>What were my milestones?</strong>
- Analyzed dataset on Excel by sorting TransactionAmt and card infos. Seeing the high correlation of the shifted isFraud with real isFraud for the sorted dataset. (I find it quite useful to look at data in Excel most of the time.) </p>
<ul>
<li><p>Realizing the real meaning of D1 and D3. Then creating 'ReferenceDate' feature by simply calculating data['DayNumber'] - data['D1']</p></li>
<li><p>Starting creating userid's by using different feature combinations and using sihfted isFraud info for postprocessing of final predictions.</p></li>
<li><p>Adding test rows with shifted isFraud info coming from the userids with the highest accuracy as pseudo-labeled data during training.</p></li>
<li><p>Creating aggregation features based on userids.</p></li>
<li><p>Realizing that our models were actually learning by overfitting. Isn't it weird that num_leaves >= 200 was performing quite well? It was because the models were actually finding the same users in the test set. So we had a real problem with new users that only exist in the test set. You can also check this for your models: take the last 20% of the train data as hold-out and make predictions. If it has around 0.940 AUC, also check the AUC for users that also exist in train and for users that only exist in the hold-out. You'll see that old users have an AUC around 0.977 and new users an AUC around 0.900.</p></li>
<li><p>Last week I also focused on improving new users in the test set, because private LB has much more new users than public LB. Found out that some of the features that I generated was improving old users' predictions but lowering new users' predictions. Also for the same model, increasing regularization and lowering num_leaves led to better new user predictions. So I rerun my main model for new users with dropped features and new model parameters. Also moved from unshuffled KFold to GroupKfold grouped by userid.</p></li>
<li><p>Finally used new model predictions to blend my best submission but only for new users. Did not touch old users' predictions. I think this process gave me around 0.001 boost. I wish I had worked on modeling by considering two types of users from the very beginning.</p></li>
</ul>
<p>P.S.
As a solo competitor, being silent for a long time to keep a competitive advantage and to avoid private sharing was hard indeed. I like to talk immediately when I find something :d.
Also, I know that the forum and kernels are actually quite beneficial; however, when you have found something before others, you just pray every day not to see another post saying 'there is something in the V columns... How to find true users...' etc. etc. 😆 😆 </p>
<p>Have a nice weekend and
See you in the next competition!</p> |
IEEE-CIS Fraud Detection | 1st Place Solution - Part 2 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In "1st Place Solution - Part 1" posted <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111284">here</a>, we discussed the benefits of classifying <strong>clients</strong> (credit cards) instead of <strong>transactions</strong> in Kaggle's Fraud competition. Here we will discuss the technical details. </p>
<h1>Final Model</h1>
<p>Our final model was a combination of 3 high scoring single models. CatBoost (Public/Private LB of 0.9639/0.9408), LGBM (0.9617/0.9384), and XGB (0.9602/0.9324). These models were diversified because Konstantin built the CAT and LGB while I built the XGB and NN. And we engineered features independently. (In the end we didn't use the NN which had LB 0.9432). XGB notebook posted <a href="https://www.kaggle.com/cdeotte/xgb-fraud-with-magic-0-9600">here</a>.</p>
<p>One final submission was a stack where LGBM was trained on top of the predictions of CAT and XGB and the other final submission was an ensemble with equal weights. Both submissions were post processed by taking all predictions from a single client (credit card) and replacing them with that client's average prediction. This PP increased LB by 0.001.</p>
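<p>That post-process is essentially one groupby-transform. A sketch, assuming the script-derived uid has been attached to the test set (<code>sub</code> and <code>test</code> are placeholders for the submission and test frames):</p>
```
import pandas as pd

# sub: submission frame (TransactionID, isFraud); test: test features carrying the script's 'uid'
sub = sub.merge(test[['TransactionID', 'uid']], on='TransactionID', how='left')
sub['isFraud'] = sub.groupby('uid')['isFraud'].transform('mean')
sub = sub.drop(columns='uid')
```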
<h1>How to Find UIDs</h1>
<p>We found UIDs in two different ways. (Specific details <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111510">here</a>).
* Wrote a script that finds UIDs <a href="https://www.kaggle.com/kyakovlev/ieee-uid-detection-v6">here</a>
* Train our models to find UIDs <a href="https://www.kaggle.com/kyakovlev/ieee-basic-fe-part-1">here</a> and <a href="https://www.kaggle.com/cdeotte/xgb-fraud-with-magic-0-9600">here</a> </p>
<p>If you remember, Konstantin's original public FE kernel <a href="https://www.kaggle.com/kyakovlev/ieee-fe-for-local-test">here</a> without UIDs achieves local validation AUC = 0.9245 and public LB 0.9485. His new FE kernel <a href="https://www.kaggle.com/kyakovlev/ieee-basic-fe-part-1">here</a> achieves local validation AUC = 0.9377 and public LB 0.9617 by finding and using UIDs. Soon I will post my XGB kernel which finds UIDs with even less human assistance and proves to beat all other methods of finding UIDs. (XGB posted <a href="https://www.kaggle.com/cdeotte/xgb-fraud-with-magic-0-9600">here</a>). The purpose of producing UIDs by a script was for EDA, special validation tests, and post process. We did not add the script's UIDs to our models. Machine learning did better finding them on its own.</p>
<h1>EDA</h1>
<p>EDA was daunting in this competition. There were so many columns to analyze and their meanings were obscured. For the first 150 columns, we used Alijs's great EDA <a href="https://www.kaggle.com/alijs1/ieee-transaction-columns-reference">here</a>. For the remaining 300 columns, we used my V and ID EDA <a href="https://www.kaggle.com/cdeotte/eda-for-columns-v-and-id">here</a>. We reduced the number of V columns with 3 tricks. First groups of V columns were found that shared similar NAN structure, next we used 1 of 3 methods:
* We applied PCA on each group individually
* We selected a maximum sized subset of uncorrelated columns from each group
* We replaced the entire group with all columns averaged.</p>
<p>Afterward, these reduced groups were further evaluated using feature selection techniques below. For example, the block <code>V322-V339</code> failed "time consistency" and was removed from our models.</p>
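<p>A simplified sketch of the NaN-structure grouping plus per-group PCA idea (grouping by the number of missing values is used here as a proxy for the shared NAN structure; thresholds, imputation, and naming are illustrative, not the exact pipeline):</p>
```
import pandas as pd
from sklearn.decomposition import PCA

v_cols = [c for c in train.columns if c.startswith('V')]

# group V columns by their number of missing values (a proxy for shared NAN structure)
nan_count = train[v_cols].isna().sum()
groups = {}
for col in v_cols:
    groups.setdefault(int(nan_count[col]), []).append(col)

for i, cols in enumerate(groups.values()):
    block = train[cols].fillna(-1)
    block = (block - block.mean()) / (block.std() + 1e-9)        # simple standardization
    comps = PCA(n_components=min(2, len(cols))).fit_transform(block)
    for j in range(comps.shape[1]):
        train[f'V_nan_group{i}_pca{j}'] = comps[:, j]
    # alternatives used for some groups: keep an uncorrelated subset, or
    # train[f'V_nan_group{i}_mean'] = train[cols].mean(axis=1)
```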
<h1>Feature Selection</h1>
<p>Feature selection was important because we had many columns and preferred to keep our models efficient. My XGB had 250 features and would train 6 folds in 10 minutes. Konstantin will need to say what his models had. We used every trick we knew to select our features:
* forward feature selection (using single or groups of features)
* recursive feature elimination (using single or groups of features)
* permutation importance
* adversarial validation
* correlation analysis
* time consistency
* client consistency <br>
* train/test distribution analysis</p>
<p>One interesting trick called "time consistency" is to train a single model using a single feature (or small group of features) on the first month of the train dataset and predict <code>isFraud</code> for the last month of the train dataset. This evaluates whether a feature by itself is consistent over time. 95% were, but we found 5% of columns hurt our models. They had training AUC around 0.60 and validation AUC 0.40. In other words, some features found patterns in the present that did not exist in the future. Of course the possibility of interactions complicates things, but we double-checked every test with other tests.</p>
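<p>A minimal sketch of this "time consistency" check, one feature at a time (a month column such as <code>DT_M</code>, the list <code>candidate_features</code>, and the model parameters are assumptions):</p>
```
import lightgbm as lgb
from sklearn.metrics import roc_auc_score

first_month = train[train['DT_M'] == train['DT_M'].min()]
last_month = train[train['DT_M'] == train['DT_M'].max()]

keep, drop = [], []
for feat in candidate_features:
    clf = lgb.LGBMClassifier(n_estimators=200, num_leaves=31)
    clf.fit(first_month[[feat]], first_month['isFraud'])
    auc = roc_auc_score(last_month['isFraud'], clf.predict_proba(last_month[[feat]])[:, 1])
    # features scoring well below 0.5 here found "present" patterns that vanish in the future
    (keep if auc > 0.5 else drop).append(feat)
```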
<h1>Validation Strategy</h1>
<p>We never trusted a single validation strategy so we used lots of validation strategies. Train on first 4 months of train, skip a month, predict last month. We also did train 2, skip 2, predict 2. We did train 1 skip 4 predict 1. We reviewed LB scores (which is just train 6, skip 1, predict 1 and no less valid than other holdouts). We did a CV GroupKFold using month as the group. We also analyzed models by how well they classified known versus unknown clients using our script's UIDs. </p>
<p>For example when training on the first 5 months and predicting the last month, we found that our
* XGB model did best predicting known UIDs with AUC = 0.99723
* LGBM model did best predicting unknown UIDs with AUC = 0.92117
* CAT model did best predicting questionable UIDs with AUC = 0.98834</p>
<p>Questionable UIDs are transactions that our script could not confidently link to other transactions. When we ensembled and/or stacked our models we found that the resultant model excelled in all three categories. It could predict known, unknown, and questionable UIDs forward in time with great accuracy !!</p> |
IEEE-CIS Fraud Detection | IEEE-CIS 12nd solution 中英注释 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone, I am sorry to be late, but I still insist on open-sourcing our ugly code. Like the other participating teams, we also discovered the secret of D2. A lot has already been said about this magic in other posts, so I won't go into details here.
For feature engineering, we split ProductCD by category, scaled the D features, and used other conventional encoding methods.
My friend <a href="https://www.kaggle.com/newbielch">@ADAM</a> did a lot of the basic work and some wonderful feature engineering. It is worth mentioning that this was his first Kaggle competition; please applaud him. I joined in the second half of the race and discovered the secret of D2.
I divide our work into 5 parts.</p>
<p>Part 1 introduces the whole feature engineering, parts 2-4 build 3 different models, and part 5 does the blending. We found afterwards that the XGB model added very little to the blend; our best score came from 0.8*lgb + 0.2*cat. The links are as follows:</p>
<p><a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-1">IEEE-CIS 12nd solution 中英注释 part 1</a>
<a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-2-lgb">IEEE-CIS 12nd solution 中英注释 lgb part 2</a>
<a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-3-cat">IEEE-CIS 12nd solution 中英注释 cat part 3</a>
<a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-4-xgb">IEEE-CIS 12nd solution 中英注释 xgb part 4</a>
<a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-5-blending">IEEE-CIS 12nd solution 中英注释 blending part 5</a></p> |
IEEE-CIS Fraud Detection | 9th place solution notes | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I know, 9th place out of 6000+ is a very good place. But well... it doesn't exactly feel so after being 1st on public LB for quite a long time :)
Anyway, this was an interesting competition and - it's better to fly high and fall down, rather than never try flying.</p>
<p>First I would like to congratulate top teams and especially <strong>FraudSquad</strong> for well deserved 1st place with big gap from others!
Also special congratulations to some people I know better than others - my ex teammates from other competitions @johnpateha and @yryrgogo for getting in gold!</p>
<p>And of course, the biggest thanks to my teammate @kostoglot for great and professional teamwork!</p>
<h2>Some key points from our solution</h2>
<ul>
<li>We heavily used identification of transactions belonging to the same user (as I think all of the top teams did).</li>
<li>To identify users very helpful feature was ("2017-11-30" + TransactionDT - D1) - this corresponds to some date (like first transaction date, etc) of card - same for all transactions of one card/user. Similar applies to several other D features.</li>
<li>To check identified users very helpful features were V95, V97, V96 which are previous transaction counts in day, week, month and features V126, V128, V127 - cumulative amount of transactions made by user in previous day, week, month.</li>
<li>We have different approaches on how to integrate user identification in solution - Konstantin used them as features, I used them in postprocessing and pseudo labelling.</li>
<li>We used cv setup taking first training data set 3 months for training, 1 month gap removed and 2 last months for validation. This cv setup correlated quite good with public LB, but as it turns out, in some cases it lied to us regarding private set and at some points we moved in a bit wrong direction resulting in falling down to 9th place.</li>
<li>Models used were LightGBM, Catboost and XGBoost.</li>
</ul>
<p>Good luck!</p> |
IEEE-CIS Fraud Detection | 29th place - Mini write up about automation and groupings | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Well, that was a blast!</p>
<p>First I want to thank Kaggle, of course, for setting up this great competition, and my awesome teammate <a href="/psilogram">@psilogram</a>. It's been a real honor working with you, I learned a lot!
Thank you for deciding to work with me.</p>
<p>This has been an incredible competition.
I'll put the emphasis on the non-model processes I ran in order to support and enhance Silogram's models. There were so many iterations of my solution that I will only mention the last one. Also, none of the things written here are facts, just my personal findings.
The main thing that guided me was Lynn's note:
<code>The logic of our labeling is define reported chargeback on the card as fraud transaction (isFraud=1) and transactions posterior to it with either user account, email address or billing address directly linked to these attributes as fraud too.</code></p>
<p>After some EDA I found that once a customer is marked as fraud once, most of the time all future transactions will also be fraud. My goal was to create groups of cards such that if we find one of them to be fraud, we can mark them all as fraud, and also to retain the frauds we have in the train set and mark these customers as frauds in the future. </p>
<p>We could use grouping by card, date - D1, addr1+2, dist1+2 to identify customers, but there are lots of NAs: if only addr1 is different it could be a different customer, and when it's NA we can't know who it is. So I took a different approach. </p>
<p>I'll start with the groupings that I used, and continue with how I came about finding them; all of these were done using an iterative script. Within each of these groupings there are multiple sanity checks to account for information lost due to missing data. For example, V307 is not a simple cumsum of TransactionAmt, it is over the last 30 days; if a card has more than 1 consecutive month of data, sometimes V307 won't be equal to the previous V307 + the previous transaction amount; also, if a customer had more than 30 days between transactions, V307 loses its signal and becomes 0. I'm not able to list all the checks and rounds, but I'll give the general guidelines. </p>
<p><strong>1) Customer\Card:</strong></p>
<p>Use unique_card1-6 and date-D1, identify next transaction by V307 = CumSum(TransactionAmt) in last 30 days (TransactionDT - 2592000), and either Date-D3 =date of previous transaction or V322= previous V322 +1 (when D3 is NA)</p>
<p><strong>2 ) 1 type of User grouping. </strong></p>
<p>Group by id_19+id_20+id_31+DeviceInfo, next transaction identified by V264= CumSum(TransactionAmt) in last 30 days (TransactionDT - 2592000), V218= previous transaction V218+ 1, and shares same D1 or D4</p>
<p><strong>3 ) another type of User grouping. </strong>
Group by id_19+id_20+id_31+DeviceInfo, next transaction identified by V203= CumSum(TransactionAmt) in last 30 days (TransactionDT - 2592000), V168 = previous transaction V168 + 1, and shares same D8</p>
<p>The "User Grouping" ones span across different cards
Some of them are relevant for specific ProductCDs. This allowed me to fill many NAs in the data, in the addr, dist, and card features. </p>
<p>My output for the ensemble was:
1) a 50-feature, 0.9492 LGBM that eventually wasn't used. It was created before the groupings.</p>
<p>2) per each transaction, its card group ID, user1 group ID and user2 group ID. I also grouped together all groups that are related: if we have group X with 8 transactions, and one of them is also part of group Y, create one grouping for all of them (as a new type).
3) in order to retain customer frauds into the future, I joined all test transactions with all fraud transactions in train, and created features such as "is same addr", "is dist na", "does the past transaction's group have a matching addr within it", "do their groups share any transaction amounts", etc. I then trained a model to estimate the probability that the transaction at hand is from the same customer as any of the frauds we know from the past.</p>
<p>Using these, Silogram was able to use additional features in the ensemble to enable the model to take transaction that it found as fraud, and to mark the transaction's entire group as higher fraud probability. </p>
<p>I'm sure there was a lot more to do, both Silogram and i had breakthroughs on the last day that allowed us to boost our score significantly, but not enough time to explore further. </p>
<p>I think now, after reading the other write ups - i might have took too conservative way to do the groupings, or picked logics that would generate the groups with not as good quality as others did.I did lots and lots of EDA, but probably i got myself too tired with lack of sleep, and sometimes i would just stare at the data while my brain melts.Might have been worth to revisit all of my assumptions from scratch with a new perspective from time to time.</p>
<p>...........
As a side note. I used a power Bi model in the start of competition to allow myself drag and drop and object interactions functionality in a dashboard. It was of great value because on the fly i could drag card, addr, D1, and have a measurement that that shows me all those that have both rows in train and test, have frauds, and that after 1st fraud, there are only frauds onwards. Clicking on 1 of the groups would bring up their transactions. This allows quick and intuitive hands on EDA that you code once, and then analyze on the fly from then on. </p>
<p>I do it in almost every competition and it's an awesome tool for immediate, interactive ,hands on interaction with the data
......... </p>
<p>Thank you all for reading,</p>
<p>Thank you for being such an awesome community.</p>
<p>You're all truly a unique group of people.</p> |
The 3rd YouTube-8M Video Understanding Challenge | Solution for 21st place and a PyTorch kernel | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution is pretty straightforward. We didn't use the train dataset at all. We trained a simple model (pooling + MLP) with a softmax classifier for 1000 classes. A single fold of this model resulted in 0.750 on public LB, ten folds resulted in 0.760 and a blend of 4 similar models resulted in 0.761, which is our final result.</p>
<p>The best model looks like this (exact values were found with hyperopt):</p>
```
nn.AdaptiveAvgPool1d(1)  # average over the time dimension

nn.Linear(1152, 2765)
nn.BatchNorm1d(2765)
SwishActivation()

nn.Linear(2765, 1662)
nn.BatchNorm1d(1662)
SwishActivation()

nn.Linear(1662, 1000)
```
<p>I created a kernel with this model, and it achieves a public LB score of 0.753 in less than one hour, including submission generation: https://www.kaggle.com/artyomp/stronger-baseline/. This became possible since my teammate <a href="/tenich">@tenich</a>:
1. created a TensorFlow data loader for PyTorch;
2. optimized submission generation from 5 hours to 5 minutes (check out <code>generate_submission</code>).</p>
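<p>For clarity, this is how those layers might be wired together as a single module: the pooling runs over a (batch, channels, time) tensor and needs a squeeze before the linear layers. This is a sketch, not the author's exact training code:</p>
```
import torch
import torch.nn as nn

class SwishActivation(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)

class SegmentClassifier(nn.Module):
    def __init__(self, in_dim=1152, h1=2765, h2=1662, n_classes=1000):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)           # average over the time dimension
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, h1), nn.BatchNorm1d(h1), SwishActivation(),
            nn.Linear(h1, h2), nn.BatchNorm1d(h2), SwishActivation(),
            nn.Linear(h2, n_classes),
        )

    def forward(self, x):                             # x: (batch, 1152, time)
        x = self.pool(x).squeeze(-1)                  # -> (batch, 1152)
        return self.mlp(x)

logits = SegmentClassifier()(torch.randn(4, 1152, 5))  # e.g. 5 frames per segment
print(logits.shape)                                     # torch.Size([4, 1000])
```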
<p>I used a slightly different approach regarding data loaders: I converted all data to np.arrays. This makes training a lot faster since 90% of validation data is not labeled. So, actually the size of the segment-labeled dataset is 2 Gb. 1 epoch takes less than a minute.</p>
<p>For the sake of speed, this kernel uses combo-approach. It only converts the validation data to np.array and uses TFDataset data loader for the final inference.</p>
<p>I hope this sheds some light on how challenges of this kind could be solved!</p> |
Recursion Cellular Image Classification | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi kagglers! Nice one, congratulations to all the winners and all who've learned something new. Congratulations and many thanks to my team: Andriy ( <a href="/ayaroshevskiy">@ayaroshevskiy</a> ), Sasha ( <a href="/pajari">@pajari</a> ), Vitaliy ( <a href="/ladler0320">@ladler0320</a> ). Here is a short summary of our very simple and straightforward solution.</p>
<p>About the "<strong>leak</strong>". I won't argue about the definition. But actually, that was simply target class co-occurrence, and I recommend checking this in every EDA of every multiclassification challenge, the same way you check class imbalance. EDA is a must.
For example:
<code>df.groupby(["experiment", "plate"]).sirna.apply(lambda x: x.sort_values().unique())</code>
So why do they co-occur that way? Why are they shuffled only within the plates? I guess it's hard by design to re-shuffle them for every single experiment with 1100+ classes, but anyway, that knowledge reduced the error significantly.</p>
<p>Also, as you already know, it was crucial to solve the <strong>assignment problem</strong> given the raw/softmax predictions from the neural network. So we used the Hungarian algorithm as well. </p>
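<p>In practice this can be done with scipy's <code>linear_sum_assignment</code> applied per plate to the wells-by-sirnas score matrix; a sketch (the function name and the way the allowed sirna subset is passed in are illustrative):</p>
```
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_plate(probs, allowed_sirnas):
    """probs: (n_wells, n_classes) scores for one plate; allowed_sirnas: sirnas known to occur on it."""
    cost = -probs[:, allowed_sirnas]          # maximize total score == minimize negative score
    rows, cols = linear_sum_assignment(cost)
    preds = np.empty(len(probs), dtype=int)
    preds[rows] = np.asarray(allowed_sirnas)[cols]
    return preds
```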
<ul>
<li><p>Validation. I used 30/33 experiments for validation, split into <strong>3 folds</strong> (the 3 other experiments went to all the folds). Iteratively, I manually swapped experiments to align their scores. HUVEC-18 went to the train set too. We found that the public LB contains U2OS-04 and it's hard to achieve a good score for that particular one. I removed it from most of our submissions to see the score for the other three cell types. But that also gave us a chance that someone would overfit to the public LB. The private U2OS-05 was alright and looked much more like U2OS-03/U2OS-01.</p></li>
<li><p>Image <strong>normalization</strong>. Organizers gave us pixel stats so we've used it in exact way - scaled all experiments by their mean/std. By the end of competition I've found that scaling U2OS-04 with mean/std of U2OS-02 increased its score by 10%. But who cares about public LB experiments ¯\_(ツ)_/¯</p></li>
<li><p>Networks. We've trained our first classification model with softmax - efficientnet-b0 in kaggle kernels before the gpus quotas. Then in my 2x1080ti devbox. I didn't have much time for modeling because of lack of gpus but I've found EfficientNets work comparably better than densenets and resnets/se-resnets. One fold of b0 takes about 9 hours but the CPU was a bottleneck.</p></li>
<li><p>Hard <strong>augmentations</strong>. All flips/rotates. Hard Brightness and Contrast with prob=0.8. Gamma, ChannelDropout. One thing I've found - uint8 augmentations work worse than float because of distribution discretization. </p></li>
<li><p>Training process. I train everything with <strong>Adam</strong>, ReduceOnPlateau and few warmup iterations. I've spent so much time trying to understand why people declare SGD works better but no chance - Adam performed better as usual.</p></li>
<li><p><strong>Controls</strong>. We've simply added train and test (!) controls as additional classes 1108:1139. That boosted score significantly. </p></li>
</ul>
<p><strong>First iteration</strong>. Pretrained <strong>EfficientNets</strong> b0, b1, b2 achieve almost the same score but blending their raw predictions and than solving assignment problem boosted score nicely. Averaging this 3 models scores 0.744/0.750 (3 cells, U2OS-04 reduced) public and 0.990 private LB (13th).</p>
<p>I saw a few strategies for how to split the test experiments for <strong>pseudo-labeling</strong>, and I actually tried around 6 of them with one or two iterations of PL. In the end I came up with a very careful strategy: soft labels and a balanced split of experiments between folds:
<code>
pl_all_folds = ["HEPG2-10", "HUVEC-19", "RPE-10", "U2OS-05"]
pl_folds = [["HEPG2-08", "HUVEC-17", "HUVEC-23", "RPE-08"],
["HEPG2-09", "HUVEC-20", "HUVEC-21", "RPE-09"],
["HEPG2-11", "HUVEC-22", "HUVEC-24", "RPE-11"]]
</code>
As you see U2OS-04 was reduced.
<strong>Soft</strong> pseudo-labelling seemed to work better than hard. Now I understand that the best strategy might be to add them iteratively based on some threshold after solving assignment problem.</p>
<p><strong>Second iteration</strong>. Same b0, b1, b2 now with pseudo-labels. People don't usual train same architectures in PL stage so I've added b4 and b6 as well. EfficientNet-b4 scored best validation and LB score and unfortunately EfficientNet-b6 didn't converge well in time. For big models we used <strong>gradient accumulations</strong> up to x12. TTA: 2 sites with 8 combinations of flip/rotate each. </p>
<p>What didn't work:
- SGD with CLRs (triangles, cosine etc)
- Lookahead and Ranger
- ArcFace (achieved almost the same score as Softmax)
- My favourite se-resnext50</p> |
IEEE-CIS Fraud Detection | 11th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the top teams, especially the people in the gold zone. This has been a tough competition for us (at least for me 😎).
Our solution was based on stacking. I personally started this competition by stacking about 20 models (using Roman's KFold(k=5, shuffle=False) validation scheme, with LB ~0.9419),
and then I continued by adding oofs from:</p>
<ul>
<li>most of the public kernel models,</li>
<li>LGBM based on data created using categorical features + PCA on numerical features,</li>
<li>XGB trained trained on different encodings (target, catboost, weight of evidence, ... encodings) of categorical features + numerical features,</li>
<li>Lasso trained on different subsets of numerical features (mean and median imputations),</li>
<li>Ridge trained on different subsets of numerical features (mean and median imputations),</li>
<li>LGBM/Ridge trained on different subsets of count features (each feature count encoded),</li>
<li>LGBM/Ridge trained on count features (each feature count encoded) + onehot encoding of counts,</li>
<li>LGBM/Ridge trained on count features (each feature count encoded) + onehot encoding of counts (dimension reduced using truncated SVD),</li>
<li>Gaussian Naive Bayes trained on different subsets of count features (each feature count encoded),</li>
<li>Multinomial Naive Bayes trained on a subset of count features,</li>
<li>Univariate Logistic Regression trained on a subset of Vs picked using Spearman correlation between them and the target,</li>
<li>LGBM/XGB trained on categorical features + some numerical features features (picked using Kolmogorov-Smirnov test),</li>
<li>a couple of NFFM, FFM models trained on raw categorical features + numerical features using <a href="https://github.com/guoday/ctrNet-tool">this</a>,</li>
<li>LibFM trained on a subset of categorical features using <a href="https://github.com/jfpuget/LibFM_in_Keras">this code</a>,</li>
<li>CatBoost trained on different subsets of categorical + numerical features (all features treated as categorical and fed to CatBoost encoding API),</li>
<li>NN trained on raw data (embedding of categorical data + different imputation methods for the numerical data) with different architectures, </li>
<li>LGBM/XGB trained on a subset of 2d, 3d, 4d interactions of categorical features,</li>
<li>Extra Tree classifier trained on different subsets of PCAed data set (numerical data),</li>
<li>and KNN trained on a small subset of numerical features. </li>
</ul>
<p>After this, I ended up with about 900+ oofs. By selecting about 120 of these oofs using RFE (Recursive Feature Elimination) with LGBM as the classifier, I was able to reach 0.9552 on the LB. After that I teamed up with my current teammates and we pooled our features and oofs. ynktk had an XGB trained on about 2000 features that scored 0.9528, and he had created lots of oofs with the same validation method as mine while he worked on his single model. ynktk had been using KFold + brute-force feature engineering (various aggregations of the data) from the start, and my other teammates were mostly doing FE validated with GroupKFold on months, so in terms of diversity it was very helpful. Luckily, after a day or two we reached 0.9579. </p>
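<p>A compact sketch of that selection step, recursive feature elimination over the oof matrix with LightGBM as the estimator (<code>oof_train</code>, <code>y</code>, and the step size are placeholders):</p>
```
import lightgbm as lgb
from sklearn.feature_selection import RFE

# oof_train: DataFrame with one column per model's out-of-fold prediction (~900 columns)
selector = RFE(estimator=lgb.LGBMClassifier(n_estimators=300),
               n_features_to_select=120, step=25)
selector.fit(oof_train, y)
selected_oofs = oof_train.columns[selector.support_]
stacker_input = oof_train[selected_oofs]      # fed to the final stacker (e.g. bagged XGB)
```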
<p>I started to search for the m*gic (😎) after reading Konstantin's post about <code>making them do friendship</code> and his use of CatBoost (to me, his magic was most probably some form of interaction). First I focused on interactions of the categorical features, especially card1-card6 and the addresses, then I added Cs and Ds to my interactions. After a lot of experiments with CatBoost I found the feature <code>card1 + date.day - D1</code> (treated as categorical) and shared it with my teammates (later Youngsoo Lee found out that <code>card1 + card6 + addr1 + addr2 + date.day - D1</code> was a better option). After focusing on this feature and adding new oofs based on it, we reached AUC=0.9649.</p>
<p>Our best stacked model was a 5-times-bagged XGB (it was a bit better than LGBM) trained on about 190 oofs (selected from a pool of about 1100+ oofs) that scored 0.9650 on the LB with CV=0.9644. </p>
IEEE-CIS Fraud Detection | 41st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to my teammates Rob Mulla <a href="/robikscube">@robikscube</a> , <a href="/raddar">@raddar</a> , and Yang <a href="/ljjsfe">@ljjsfe</a> .</p>
<h3>Initial findings</h3>
<p>It starts with Rob Mulla and Yang independently submitting stuff when the competition first opened. Yang had independently achieved 0.927 private LB. Then when CHAMPS ended, Rob and I decided to pursue the IEEE competition more fiercely.</p>
<p>I had privately noted to Rob that the D-features leaked a <code>user_id</code>. Using the D-features alone, however, wasn't enough, and so I saw many users flip from <code>isFraud=1</code> back to <code>isFraud=0</code>. I reached out to Raddar because he is the king of reverse-engineering (I knew of his strength from the ELO and Home Credit post-processing). After I explained the D-features to him, he knew exactly how to exploit this and was on board to team up. The simple thing we were missing was to also enforce that the user should have the same <code>card1-6, P_emaildomain, addr1-2</code>, etc.</p>
<h3>Raddar post processing</h3>
<p>After raddar did that, he also began tinkering to enforce other rules, like C features should be monotonous. By simply post-processing people with <code>isFraud=1</code> and people with <code>isFraud=0</code> to 0.93 and 0.000001 respectively in the test set, he brought our private LB 0.927 -> 0.9297 (And Rob and I were very happy to have raddar on our team when we saw our public LB jump so high)</p>
<p>Yang reached out to join us on September 21, and we accepted given his strong performance on Home Credit.</p>
<h3>LB Probing</h3>
<p>I then began to LB probe based on the <code>user_id</code> raddar had created. Basically, if the <code>user_id</code> was only in test and not in train, and if it had a high prediction like >= 0.15, I changed all of that user's predictions to 1's. If the LB improved, I kept it; if it worsened, I changed them all to 0's. I continued this every day until the competition ended; this ended up netting us a decent amount of AUC.</p>
<p>We coded up some features related to <code>user_id</code> (like number transactions, max TransactionAmt, mean isFraud, etc.) and chose them if they improved the CV.</p>
<h3>Adversarial Validation, Pseudolabels, and train on all data blindly</h3>
<p>We began to get scared of the train/test difference in distribution, since many were saying there would be a shakeup (and it was evident that many features had different distributions in the test set). Therefore we fit a model on one feature at a time to predict train vs. private test (this technique is called adversarial validation). From then on, we had 2 branches: one model that kept all of the features, and one model that only kept features that had <= 60% ROC AUC in adversarial validation.</p>
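<p>A minimal sketch of this single-feature adversarial validation check is below. It assumes the raw <code>train</code>/<code>test</code> DataFrames are in memory and that the candidate features are numeric (or already encoded); the 0.60 cutoff is the one mentioned above, the rest of the parameters are illustrative.</p>
```python
import numpy as np
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

def adversarial_auc(feature, train, test):
    """ROC AUC of a model that tries to tell train rows from test rows using one feature."""
    X = pd.concat([train[[feature]], test[[feature]]], ignore_index=True).fillna(-999)
    y = np.r_[np.zeros(len(train)), np.ones(len(test))]
    clf = LGBMClassifier(n_estimators=100, learning_rate=0.1)
    return cross_val_score(clf, X, y, cv=3, scoring='roc_auc').mean()

candidate_features = [c for c in train.columns if c not in ('isFraud', 'TransactionID')]
stable_features = [f for f in candidate_features
                   if adversarial_auc(f, train, test) <= 0.60]
```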
<p>Additionally, after having so many LB probes, we began to train on full train set + use the LB probes as pseudolabels from the test set. The number of iterations we used was roughly (number of iterations on 5th fold) * 1.1</p>
<h3>Corey post processing</h3>
<p>In the last few days, I modified <code>corey_user_id</code> (different from <code>raddar_user_id</code>) to be just based on the D intercept and then group by <code>card1-6, ProductCD, and P_emaildomain</code>. Whenever I tried any LB probes based on my ID, it worsened LB. So I made a modification: IF in test set there was a <code>raddar_user_id</code> with <code>isFraud=1</code>, then for the <code>corey_user_id</code> mapped to that row, change all predictions to <code>isFraud=0.99</code>. Do the same if <code>raddar_user_id</code> had <code>isFraud=0</code>, except change predictions to 0.0002. Why? Because I had more confidence in <code>raddar_user_id</code> (he was grouping by more features, and was enforcing C monotonicity) and my <code>corey_user_id</code> was more shoddy. However, doing this post processing gave us a jump of like 20 places.</p>
<h3>Final submissions</h3>
<p>In the end, we had 2 submissions:
1. The first was a giant LGBM model trained on Rob's, my, raddar's, and Yang's features such that ROC AUC in adversarial validation was <= 60%. We bagged it thrice, trained blindly on the full train data + pseudolabels from <code>raddar_user_id</code> and <code>corey_user_id</code>, and then did post-processing similar to what I described above. This model gave 0.955 public LB and 0.9339 private LB.
2. The second is a 60% * (50/50 blend between Rob's old model that used all features and Yang's model that used solely his features, with post processing) + 40% * (first submission). This had 0.9586 public LB and 0.9363 private LB.</p>
<h3>Post-competition thoughts</h3>
<p>We thought there would be more of a shakeup, so we are sad not to achieve gold (though we did rise 18 LB spots). There are some problems we could have pursued: for example, using D3, D4, D5, or V307 to make our <code>user_id</code>'s better. Additionally, basically Rob was the only one training LGBM, so we had no big diversity from other models/feature sets apart from Yang's early model. Next time, we will work harder ;)</p> |
Recursion Cellular Image Classification | Low Budget LB 0.9759 kaggle-kernels-only solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I will try to summarise here my part of the 2 months battle with Kaggle kernels in this comp.
Our team's solution was basically entirely run on kaggle GPU's (mostly prior to the new quotas).
The final kernel is here: <a href="https://www.kaggle.com/hmendonca/fold1h4r3-arcenetb4-2-256px-rcic-lb-0-9759">https://www.kaggle.com/hmendonca/fold1h4r3-arcenetb4-2-256px-rcic-lb-0-9759</a></p>
<p><strong>Quick outline in chronological order:</strong>
1. CV split by experiment, with about ⅓ of the data per fold, 3 folds.
2. Trained a quick and dirty classifier (left it training while analysing the data) with all 1108 classes, 6 channels and no augmentation. OK: <strong>LB 0.10-0.25</strong> and established a reasonable learning rate schedule which was used in the rest of the comp.
3. Normalised the data by experiment mean and std, added random crops: <strong>LB 0.32</strong>
Added more aug: no improvement.
4. Added ArcNet head with 512 features and default params, using pre-trained model from above, eventually <strong>LB 0.35</strong>
5. Started training with each cell type as a different class 1108 * 4 = 4432 classes: <strong>LB 0.44</strong>
6. Cell type balancing in training data: <strong>LB 0.45</strong>
7. Added the 31 controls to the training data (but not to validation) 1139 * 4 = 4556 classes: LB 0.46
8. Added a custom cosine distance loss to the ArcNet features (same class should be close to each other, different classes far...) <strong>LB 0.47</strong>
9. PCA showed that 100 dimensions explained all the variance in features, so I reduced the ArcNet features to 200 (double, just in case ;) <strong>LB 0.52</strong>
10. Applied the plates leak, loads of normalisation, and assigned a single treatment per plate: <strong>LB 0.82</strong> !!
More normalisation, and lots of multi-fold ensembles later: <strong>LB 0.88</strong> (Thanks to <a href="/aharless">@aharless</a> and <a href="/giuliasavorgnan">@giuliasavorgnan</a>)
11. Used the former predictions as pseudo labels and trained further with train + test data, eventually <strong>LB 0.93</strong> (this process was repeated a few times, always training the latest models with pseudo labels from the best possible submission/ensemble in an amazing team effort! Thanks <a href="/aharless">@aharless</a> <a href="/giuliasavorgnan">@giuliasavorgnan</a> <a href="/stillsut">@stillsut</a> <a href="/lastlegion">@lastlegion</a>)
12. Failed badly trying to blend the new best models (single models had better LB than the blends) probably due to models overfitting to the pseudo labels?
Reduced the amount of pseudo labels in the training data (from 50/50% to 33/66%) and improved the feature distance metric: <strong>LB 0.943</strong>
13. Out of gas, $$ and quota, but I have never learned so much in a comp before! Super happy with the result</p>
<p>The code for all that is in the <a href="https://www.kaggle.com/hmendonca/fold1h4r3-arcenetb4-2-256px-rcic-lb-0-9759">kernel linked above</a> (including some visualisations).</p>
<p><strong>In the beginning</strong>
I basically started looking at the data and decided to just try a normal classifier without any fanciness. I ported an EfficientNetB4 kernel from another comp and watched the batch loss for a few different learning rates.
The loss during cosine annealing with warm restarts is very informative, almost as much as the LR finder scheduler used in fast.ai, so after a couple of runs you can work out what is too high and what is too low, and also how much you need to decrease it from epoch to epoch.
I was training 3 folds and later 3 architectures (EfficientNet B4, B5 and seResNet-xt101). However, the journey described above is actually mainly from the initial B4 I used, and that also ended up being the best model and submission. Its weights were carved slowly by training several different heads one after another, with several branches used to experiment with different ideas.
Unfortunately, we didn't have the budget to do clean experiments and try each idea with their optimal hyper-params and initial conditions. Therefore, some changes used here might not have been optimal but still ended up improving the score, so I had to keep them in the pipeline for the lack of a better reference.</p>
<p><strong>Normalisation and Post-processing:</strong>
The experiment setup let out a lot of information, and thanks to the public kernels that was made clear for everyone. That played a huge role for us to improve the submission score and create the pseudo labels. Cheers <a href="/zaharch">@zaharch</a> and <a href="/christopherberner">@christopherberner</a> !
Looking at the raw model predictions we could see that the initial models were obviously preferring some treatments to others. The predictions weren't balanced across the different classes, but we knew that they should, so a basic idea was to normalise across all treatments. Each treatment should appear exactly the same number of times in each experiment. Normalising the prediction probabilities across the siRNA already helped increasing the validation and LB score.
Andy and Giulia went further and created a normalisation loop, iteratively normalising the probabilities vertically and horizontally until convergence.
This process showed great improvements in both validation and LB scores and was crucial to our solution. </p>
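<p>To make the balancing idea concrete, here is a minimal sketch of that kind of iterative row/column normalisation (a Sinkhorn-style loop). The shapes and the final argmax are illustrative; it is not the exact code Andy and Giulia used, and the per-plate uniqueness was enforced separately on top of this.</p>
```python
import numpy as np

def balance_predictions(probs: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """probs: (n_wells, n_sirna) prediction matrix for one experiment.
    Alternately normalise columns and rows so every siRNA is used about equally often."""
    p = probs.copy()
    for _ in range(n_iter):
        p = p / p.sum(axis=0, keepdims=True)  # each siRNA (column) sums to 1
        p = p / p.sum(axis=1, keepdims=True)  # each well (row) sums to 1
    return p

probs = np.random.rand(1108, 1108)            # dummy experiment-level predictions
balanced = balance_predictions(probs)
hard_labels = balanced.argmax(axis=1)         # one siRNA guess per well
```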
<p><strong>Platform</strong>
As I mentioned before, the solution was run on GPU only.
TPU's would have been amazing but I had a look initially and wasn't super excited about its pytorch support. I have also trained a ResNet101 model using Colab and the provided notebook in the comp's intro, but I found it a bit too annoying to work with all the deprecated TF code (hopefully it will get better soon as they're fixing the new TF2.0), so I gave up after a few days and went back to PyTorch. Maybe that was a mistake?
We had a few GCP credits but $400 goes really quick, and we only made a few experiments in the beginning. I also used some more (including a bit of my own pocket) to train another few models improving the pseudo labels a bit. However, our best model was trained exclusively in kaggle in about 13 runs in total (100hrs approximately, excluding all abandoned branches)</p>
<p>I tried Colab again after the kaggle quotas were introduced, but I found their GPU notebook quite flaky, as it keeps disconnecting my runs during training :(
Maybe I should buy a GPU lol</p>
<p>Many thanks to all the organisers and the amazing community of kagglers always sharing so much!</p>
APTOS 2019 Blindness Detection | Congrats to the winners (88th place solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: APTOS 2019 Blindness Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to the winners 💯.
My result comes from a single EfficientNetB3 model, pretrained on the 2015 data (a combination of the training and private test sets), validated on the 2015 public test set, and then fine-tuned on the 2019 dataset.
I approached this as a regression problem; the output node is not a regular linear output but a rectified linear output with max_value of 4, so the output stays in the proper 0-4 range.</p>
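<p>A minimal Keras sketch of that output head is below. The capped-ReLU output and the 300x300 input follow the description above; the dropout rate, optimizer and loss are assumptions for illustration (note that <code>tf.keras.applications.EfficientNetB3</code> only exists from TF 2.3 onwards; earlier code usually relied on the <code>efficientnet</code> pip package).</p>
```python
import tensorflow as tf
from tensorflow.keras import layers, Model

backbone = tf.keras.applications.EfficientNetB3(
    include_top=False, weights='imagenet', input_shape=(300, 300, 3), pooling='avg')

x = layers.Dropout(0.3)(backbone.output)     # dropout rate assumed
x = layers.Dense(1)(x)
out = layers.ReLU(max_value=4.0)(x)          # rectified linear output clipped at 4

model = Model(backbone.input, out)
model.compile(optimizer='adam', loss='mse')  # regression on the 0-4 grade
```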
<p>Image size 300x300.</p>
<p>Image preprocessing: Crop black bars, subtract local means (similar to Ben Graham's preprocessing technique, but his technique also remaps the images to 50% gray and removes the outer shadow effect) with sigmaX = 10.</p>
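<p>For reference, a sketch of this kind of preprocessing with OpenCV. The black-bar crop and sigmaX=10 follow the description; the 4/-4/128 weights are the commonly shared Ben Graham variant (which does the 50%-gray remap), so the exact constants used here may differ slightly.</p>
```python
import cv2
import numpy as np

def crop_black_bars(img, tol=7):
    """Drop rows/columns that are (almost) completely black."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    mask = gray > tol
    return img[np.ix_(mask.any(1), mask.any(0))]

def subtract_local_mean(img, sigma=10):
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
    # Ben Graham's variant: subtract the local mean and re-centre around mid gray
    return cv2.addWeighted(img, 4, blur, -4, 128)

img = cv2.imread('example_fundus.png')            # placeholder path
img = cv2.resize(crop_black_bars(img), (300, 300))
img = subtract_local_mean(img)
```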
<p>Image augmentation: Random zoom (0.75-1.25), random rotation (360 degrees), random horizontal flip.</p>
<p>I pretrained B3 on the previous dataset until val loss hadn't improved for 5 epochs. </p>
<p>First phase of finetuning: Load the pretrained weights, train on the 2019 dataset with a 20% validation split for 40 epochs, and monitor val loss. The purpose of this phase is to find the optimal number of epochs to train on the entire 2019 dataset.
Second phase of finetuning: Load the pretrained weights again, and this time train on the entire 2019 dataset for the number of epochs at which val loss reached its minimum in the first phase.</p>
<p>I tried TTA, an optimized rounder and model ensembles, but none of them worked well for me on either the public or the private score. Maybe I made mistakes somewhere.</p>
<p>Thanks kaggle, the organizers and all participants for this amazing and challenging competition. I was just a novice when I first took on this challenge and I've learned a lot along the way. Gaining knowledge and experience is a victory for me. Thank you all!</p>
Recursion Cellular Image Classification | [private 0.977]: solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our first kaggle competition, and we will keep going:
model: densenet121
classification: 1108 classes
input: resolution 512*512, rotate, flip, aug (lightness, contrast), normalize...
ensemble: avg</p>
<p>train method: Adam optimizer, decreasing learning rate, iteratively trained with the test set,
plate leak, 31 controls, rearrange</p>
<p>loss: ce, center-loss, arc</p>
IEEE-CIS Fraud Detection | Get silver medal using public kernel mainly. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is my second competition, and the first one I committed to from the beginning.</p>
<p>I tried to make good features, but unfortunately it failed.
So I tried to use public kernels whose prediction distributions were different from each other.
The kernels I mainly used are linked below.
These two links have especially high weight in my blending (20% or more).</p>
<p>Litemort
<a href="/litemort">@litemort</a>
<a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/108364#latest-634308">https://www.kaggle.com/c/ieee-fraud-detection/discussion/108364#latest-634308</a>
Catboost
<a href="/kyakovlev">@kyakovlev</a>
<a href="https://www.kaggle.com/kyakovlev/ieee-catboost-baseline-with-groupkfold-cv">https://www.kaggle.com/kyakovlev/ieee-catboost-baseline-with-groupkfold-cv</a></p>
<p>I also combined over 15 public-kernel submissions for my final submission, using this kernel.
<a href="https://www.kaggle.com/paulorzp/gmean-of-low-correlation-lb-0-952x">https://www.kaggle.com/paulorzp/gmean-of-low-correlation-lb-0-952x</a></p>
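<p>That kernel blends submissions with a geometric mean after checking that they are not too correlated. A minimal sketch of the idea (the file paths and the correlation check are illustrative, not the kernel's exact code):</p>
```python
import numpy as np
import pandas as pd
from scipy.stats import gmean

sub_paths = ['sub_litemort.csv', 'sub_catboost.csv', 'sub_lgbm.csv']   # placeholder files
subs = [pd.read_csv(p).sort_values('TransactionID') for p in sub_paths]
preds = np.column_stack([s['isFraud'].values for s in subs])

# only blend kernels whose predictions are not highly correlated
print(pd.DataFrame(preds).corr(method='spearman'))

blend = subs[0][['TransactionID']].copy()
blend['isFraud'] = gmean(preds, axis=1)   # geometric mean of the selected submissions
blend.to_csv('gmean_blend.csv', index=False)
```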
<p>I was also careful not to use kernels whose prediction distributions differed between the public and private parts of the test set. In addition, I made bins for features whose distributions differed between train and test and rebuilt the model.
I can't really say this result reflects my own ability, so I will keep making efforts to improve.
Thank you to everyone who shared kernels, and to my teammates.</p>
IEEE-CIS Fraud Detection | My first competition/ silver medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for organizing this exciting competition, and thanks to my teammates for their incredible performance.
I am a first-year PhD student with a background in machine learning more specifically recommender systems. I just started my PhD so while busy with classes and looking for a good subject of research, I decided to give Kaggle a try to see what is the state of the art of Fraud detection/tabular data in general. I tried to play around with different competitions before but this one was the first serious one.</p>
<p>My main goal was to learn not to compete and I think everyone should set their goal at the start of the competition so that they are really focused on achieving it. </p>
<p>As I am coming from an academic environment, this competition taught me a very important skill: validation strategy design. After competing here, I am questioning many of my previous experiences related to my research and other papers that I have seen that claim to be better than others. In fact, it is so easy to tune your "new" model until it overfits your local validation and then you can claim to be advancing the state of the art. </p>
<p>Another important thing that I learned as a researcher is how to present your work to the public. During the competition, I was the public. I looked at many research projects and GitHub repos and read many papers trying to find state-of-the-art models for different tasks. I identified a few key features that should be present in decent research:
- Always attach your code to your paper
- Document your GitHub repo
- Provide easy installation / importing guidelines
- Provide a notebook explaining an example with your work</p>
<p>Now coming to the competition. We tried a gazillion things, as we worked on it full time for at least 2 months. What really worked was feature engineering using groupby and lag methods, plus some feature interactions (C features); we also built some features using NMF on several groups of features. We tried different unsupervised methods like DAE and a regular AE, but they did not give good results. We also tried Gaussian Mixture Models on each "user" and tried to single out anomalies. But in the end, most of our features were from public kernels, with some modifications and cleaning.</p>
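<p>For readers new to this style of tabular feature engineering, the sketch below shows the kind of groupby and lag features meant here. The column names come from the competition data, but the specific features are only illustrative, not the team's exact list; <code>df</code> is assumed to be the concatenated train/test frame.</p>
```python
import pandas as pd

# df: concatenation of train and test, sorted by time
df = df.sort_values('TransactionDT')
grp = df.groupby('card1')

# groupby aggregations
df['amt_mean_card1'] = grp['TransactionAmt'].transform('mean')
df['amt_std_card1'] = grp['TransactionAmt'].transform('std')
df['amt_to_mean_card1'] = df['TransactionAmt'] / df['amt_mean_card1']

# lag features: previous transaction of the same card1
df['prev_dt_card1'] = grp['TransactionDT'].shift(1)
df['time_since_prev'] = df['TransactionDT'] - df['prev_dt_card1']
df['prev_amt_card1'] = grp['TransactionAmt'].shift(1)

# simple frequency encoding
df['card1_count'] = df['card1'].map(df['card1'].value_counts())
```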
<p>We tried to find the "magic" in the data. It did not work out well and we did not spend too much time on it as we were trying to learn other methodologies. </p>
<p>One thing that I wanted to try but did not have time for was recommender-system models for fraud detection. Basically, in RS the goal is to rank items based on some user-item similarities. What I wanted to try was to rank transactions based on user dissimilarities/anomalies and use Bayesian Personalized Ranking as the learning algorithm, since it aims at maximizing the AUC. It was far-fetched, but I think it would have been cool.</p>
<p>Our final solution was a blend of 6 of our best models with different hyperparameters (scoring 0.949-0.95), 2 NNs scoring 0.91 on the LB and one catboost scoring 0.94 (nothing fancy)</p>
<p>Finally, I want to say thanks to everyone for their insightful discussions, also thanks to the trolls for the spice and the fun.
Next challenge: Image data competition.</p> |
IEEE-CIS Fraud Detection | 25th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to Kaggle, Vesta and IEEE-CIS for hosting this competition. Before writing the summary of our team's approach, I really want to thank my team-mates ( @yeonmin, @chocozzz, @soonhwankwon, @sunghun); without your hard work this result wouldn't have been possible. All of my team-members needed a gold medal to become a competition master. That's why we chose "borabori" (Tinky Winky in Teletubbies) as our team-name, but we missed it by 2 positions. Let's promise it next time :D</p>
<h1>Overview</h1>
<p>Our team built 2 independent pipelines. The two approaches were quite different, so they gave us a huge boost when ensembling (stacking wouldn't work for us). I'm going to write down the main concepts of each pipeline below.</p>
<h1>First model</h1>
<ul>
<li>Specify the Client_id : At the very first stage, we found out the real meaning of the D1 column, so we could specify client_id by (card1, addr1, addr2, make_account_date_D1), though it did not match perfectly. </li>
<li>Again Client_id, Card_id : After roughly finding the combination for discerning clients, we thought it would be great if we could figure out how to specify the following two groups: (1) who is the owner of each transaction (=client_id), and (2) who is the owner of the card that was used for each transaction (=card_id). Users can have more than one card. If specifying these two were possible, we thought grouping the transactions by card_groups or client_groups and then making aggregation features from them would be the best way to solve this problem (by implementing GroupKFold based on card_groups or client_groups).</li>
<li>Failure : By the last day of the competition, we couldn't find the perfect condition for specifying the groups. So, we generated diverse client_id and card_id combinations to approximate the real client_id and card_id.</li>
<li>FE : Created aggregated, lag and lead features based on those "imitated combinations", such as how many unique items (=ProductCD +'_'+TransactionAmt) are purchased by each client and the time gap between the previous transaction and the current one. Plus frequency encoding for some categorical variables.</li>
<li>Feature Selection : First of all, we only kept the values of categorical variables that are common to the train and test sets, and replaced the uncommon values with NaN. Then, we removed some columns based on an adversarial validation check.</li>
<li>Validation Strategy : As I mentioned above, we would have gone with GroupKFold based on client_groups or card_groups, but we couldn't. Instead, we chose GroupKFold (K=6) based on the month_block as our validation strategy. </li>
<li>Model : LGBM (goss, dart, gbdt), CATBOOST, XGBOOST. For model performance, (CAT > LGBM > XGB). A single model's score varied a lot depending on the seed, so we trained each model with at least 5 different seeds.</li>
</ul>
<h1>Second Model</h1>
<ul>
<li>Specify the Client_id ,Card_id </li>
<li>LDA, NMF : First changed the data type to string and concatenated, then applied LDA to two combinations (['card1','addr1'], ['card1','id_20']) and NMF to one combination (['card4','DeviceInfo']); see the sketch after this list.</li>
<li>FE : Similar job with the first approach + Target_mean_encoding for some categorical variables</li>
<li>Validation Strategy : First, split the dataset into two sub-groups: one where "ProductCD" is "W", and the other where "ProductCD" is not "W". Then, apply StratifiedKFold (K=5) to each sub-group. </li>
<li>Model : LGBM(goss, dart, gbdt) , CATBOOST, XGBOOST. For the score, (CATBOOST > LGBM > XGB). </li>
</ul>
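<p>One common way to implement the LDA/NMF idea above is to treat each value of the first column as a "document" whose "words" are the values of the second column it co-occurs with, and to use the resulting topic vector as features. The sketch below is my reading of the description, not the team's exact code; the component count is illustrative and <code>df</code> is assumed to be the concatenated train/test frame.</p>
```python
import pandas as pd
from sklearn.decomposition import LatentDirichletAllocation, NMF

# document-term count matrix: how often each card1 value co-occurs with each addr1 value
cooc = pd.crosstab(df['card1'], df['addr1'])

lda = LatentDirichletAllocation(n_components=5, random_state=42)
topics = pd.DataFrame(lda.fit_transform(cooc), index=cooc.index)
topics = topics.add_prefix('card1_addr1_lda_')
df = df.merge(topics, left_on='card1', right_index=True, how='left')

# the NMF version for ['card4', 'DeviceInfo'] works the same way:
# NMF(n_components=5).fit_transform(pd.crosstab(df['card4'], df['DeviceInfo']))
```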
<h1>The things that didn't work for us</h1>
<ul>
<li>NN : @soonhwankwon really tried hard for this part. But it was hard to make a robust, good-scoring model with NN. And also, ensembling/stacking the result with tree-based boosting model didn't work.</li>
<li>DAE </li>
<li>Stacking</li>
</ul> |
Severstal: Steel Defect Detection | 5th place summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Apologies for the late sharing. I failed miserably in the efficiency competition. haha
My solution is briefly presented below.</p>
<p><strong>Input Preprocessing and Input Size:</strong>
Since I used ImageNet-pretrained models, which require three-channel color input, each input image in this competition had its single channel replicated twice to form a three-channel color image. There was no other preprocessing performed.
The input image size for model training was fixed to the original size of the images: 256x1600. During the early stage of model development, I briefly tried a smaller crop size, e.g., 224x768 for training and the original size for testing, but the public leaderboard result with cropped input turned out to be slightly worse. Another reason not to use a cropped version for training was that it would bring no benefit for inference speed.</p>
<p><strong>Image Augmentation:</strong>
Image augmentation is one of the key factors for the successful training of deep neural nets. This was particularly true for this competition because of the label imbalance: there are a lot more negative samples than positive ones. When using random input sampling to form training batches, the model would tend to predict the positive samples with less confidence after the first dozens of epochs of training, which means we would need smaller thresholds (rather than 0.5) to segment the positive regions, and these thresholds would be extra hyperparameters we would want to avoid. This problem would gradually disappear after the model was trained long enough, but the risk of overfitting (due to overtraining) would increase, which could be solved with the help of heavy augmentation.
The augmentations I used includes crop and resize back, contrast and brightness, gamma correction, blurring and sharpening, scale and shift (no rotation and shearing), horizontal and vertical mirroring. The crop and resize back augmentation (the random_crop function in any of the train.py) first crops horizontally and vertically a random portion, for example, crop 90% horizontally and 80% vertically, and then resize back to the original size to introduce some distortion effect. The random shift augmentation is a little unusual: instead of using a small portion of shift (typically at most 20%), I used a 100% random shift, and made the border filling circular (cv2.BORDER_WRAP). I expected this circular shift augmentation to be simulating the manufacturing pipeline.
During test time, I used either horizontal or vertical flipping or both for augmentation. Totally there were four variants including the original version. </p>
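<p>For illustration, a sketch of the two less common augmentations described above (crop-and-resize-back and the 100% circular shift). The parameter ranges and the use of <code>np.roll</code> (which has the same wrap-around effect as cv2.BORDER_WRAP) are my choices, not the exact training code.</p>
```python
import cv2
import numpy as np

def random_crop_resize_back(img, rng, min_frac=0.8):
    """Crop a random horizontal/vertical fraction, then resize back to the original size."""
    h, w = img.shape[:2]
    fh, fw = rng.uniform(min_frac, 1.0, size=2)
    ch, cw = int(h * fh), int(w * fw)
    y0, x0 = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
    return cv2.resize(img[y0:y0 + ch, x0:x0 + cw], (w, h))

def circular_shift(img, rng):
    """100% random shift with wrap-around borders, simulating the continuous steel strip."""
    h, w = img.shape[:2]
    return np.roll(img, shift=(rng.integers(h), rng.integers(w)), axis=(0, 1))

rng = np.random.default_rng(0)
img = np.zeros((256, 1600, 3), dtype=np.uint8)        # dummy image at the competition size
aug = circular_shift(random_crop_resize_back(img, rng), rng)
```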
<p><strong>Model Types:</strong>
I trained totally eight Unet models and averaged their results. All of them were based on qubvel’s github: <a href="https://github.com/qubvel/segmentation_models.pytorch">https://github.com/qubvel/segmentation_models.pytorch</a>. The encoders used were inception_resnet_v2, efficientnet_v4, se_resnext50, se_resnext101, all pretrained on Imagenet. I trained two models for each type of the encoders. Due to the nature of the evaluation metric, the benefit of correctly detecting a negative sample is larger than detecting and segmenting a positive sample. Therefore, removing false positives are the key to high score. To achieve this, I used the max of all the predicted pixel probabilities as something like a classification score to perform thresholding on the averaged (across eight models) max probabilities and rule out false positives. I also noticed that a lot of participants trained some classifiers to specifically perform this false positive removal, I didn’t do that because of the extra cost of training the classifiers and the extra time to run them during testing. Note that to further improve the performance, we can use the top K (K>1) pixel probabilities instead of a single max probability for a more reliable estimate, but I did not go that far during the competition.</p>
<p><strong>Training:</strong>
Every model was first trained for 40-50 epochs using the traditional BCE loss. After the model weights were trained to a good state, I reduced the learning rate and switched to the lovasz-hinge loss (<a href="https://github.com/bermanmaxim/LovaszSoftmax">https://github.com/bermanmaxim/LovaszSoftmax</a>) due to its robustness for imbalanced datasets. The models were trained until convergence, and further tuned using pseudo-labeling on the public test data for 10 epochs.</p>
<p><strong>Testing and Postprocessing:</strong>
In the early stage when I started this competition, I noticed that my cross-validation results were much better than the public leaderboard. Later, some participants posted in the Kaggle forum that there are some duplicate or near duplicate images in the training set, possibly generated by the same manufacturing pipelines, and the labels are very likely correlated between the duplicates. On the one hand, this breaks the i.i.d. assumption of data when applying machine learning models, resulting in over-optimistic validation estimates due to the label leakage; on the other hand, we could take advantage of the duplicates by jointly predicting clusters of duplicated images instead of independently predicting the single images. However, the fact that the cross-validation performance on the training set was much better than the public test set indicated that the duplicates in the training and public test set may not be perfectly overlapped, which implied new manufacturing pipelines in the public test set. Apparently, the difference between the train and public test set could not be a coincidence, it was more like intentionally done to mimic some real world scenario that required the models to generalize to new data rather than taking advantage of the label leakage. So I asked myself if the private test set would be another new set. My guess was, yes, because if the public and private test set are very much alike the participants could still take advantage of the label leakage by probing the public leaderboard. So some method to take advantage of the duplicates in the private set was necessary. For this, I used the efficientnet_b4’s bottleneck features (because of the lower dimensions) generated from the last output of the encoder as features for each image. I then grouped the duplicated images into clusters by kmeans clustering the features of the test data. In each cluster, I averaged the max pixel probabilities mentioned earlier. So, for each of the test image, the averaged max probabilities of its cluster were compared with pre-defined thresholds ([0.6,0.6,0.5,0.5] for the four classes), and this image would be determined to be negative if smaller than the pre-defined thresholds, if otherwise, the total number of positive pixels (binarized with the threshold 0.5) would be compared with pre-defined thresholds ([547,715,1145,2959] corresponding to the 2% quantile thresholds found on the train data) to further remove false positives. Only the test images that survived these two rounds of false positive removal went to the run-length encoding process to generate final positive masks.</p> |
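<p>To make the two-round false-positive removal concrete, here is a rough per-image sketch using the thresholds quoted above ([0.6, 0.6, 0.5, 0.5] on the max pixel probability and the 2%-quantile minimum sizes). The duplicate-cluster averaging of the max probabilities is omitted, and the variable names are mine.</p>
```python
import numpy as np

cls_thresholds = [0.6, 0.6, 0.5, 0.5]        # on the (cluster-averaged) max pixel probability
min_sizes = [547, 715, 1145, 2959]           # 2% quantile of positive mask sizes on train

def postprocess(prob):
    """prob: (4, 256, 1600) sigmoid output for one test image, averaged over models/TTA."""
    masks = []
    for c in range(4):
        p = prob[c]
        if p.max() < cls_thresholds[c]:                 # round 1: 'classification' by max prob
            masks.append(np.zeros_like(p, dtype=np.uint8))
            continue
        m = (p > 0.5).astype(np.uint8)                  # binarise with threshold 0.5
        if m.sum() < min_sizes[c]:                      # round 2: minimum-size filter
            m = np.zeros_like(m)
        masks.append(m)                                 # non-empty masks go to RLE encoding
    return masks
```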
Severstal: Steel Defect Detection | 31st place solution with github code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Team - [ods.ai] stainless</strong>
Insaf Ashrapov, Igor Krashenyi, Pavel Pleskov, Anton Zakharenkov, Nikolai Popov</p>
<p><strong>Models</strong>
We tried almost every type of model from qubvel's segmentation models library - unet, fpn, pspnet with different encoders from resnet to senet152. FPN with se-resnext50 outperformed the other models. Lighter models like resnet34 didn't perform well enough but were useful in the final blend. Se-resnext101 could possibly have performed much better with more training time, but we didn't test that.</p>
<p><strong>Augmentations and Preprocessing</strong>
From <strong>Albumentations</strong> library:
Hflip, VFlip, RandomBrightnessContrast – training speed was not too fast, so these basic augmentations performed well enough. In addition, we used big crops for training and/or finetuning on the full image size, because attention blocks in image tasks rely on the same input size for the training and inference phases.</p>
<p><strong>Training</strong>
- We used both pure pytorch and Catalyst framework for training.
- Losses: BCE and BCE with dice performed quite well, but lovasz loss dramatically outperformed them in terms of validation and public score. However, when combined with the classification model, BCE with dice gave a better result; that could be because Lovasz helped the model to filter out false-positive masks. Focal loss performed quite poorly due to the imperfect labeling.
- Optimizer: Adam with RAdam. LookAhead and Over9000 didn't work well for us.
- Crops with a mask, BalanceClassSampler with upsampler mode from catalyst significantly increased training speed. Gradient accumulation as well.</p>
<ul>
<li><p>We tried our own classification model (resnet34 with CBAM), setting the goal to improve F1 for each class. The optimal threshold was disappointingly unstable, but we reached an averaged F1 of 95.1+. As a result, Cheng's classification was used.</p></li>
<li><p>Validation: kfold with 10 folds. Despite the shake-up, local, public and private scores correlated surprisingly well.</p></li>
<li><p>Pseudolabeling: We did two rounds of pseudo labeling by training on the best public submission and validating out of fold. It didn't work a third time, but it gave us a huge improvement.</p></li>
<li><p>Postprocessing: filling holes, removing the small mask by the threshold. We tried to remove small objects by connected components with no improvements.</p></li>
<li><p>Hardware: bunch of nvidia cards</p></li>
</ul>
<p><strong>Ensembling</strong>
Simple averaging of segmentation models with different encoders, both FPN and Unet, applied to images classified as having a mask. One of our unchosen submissions would have given us 16th place.</p>
<p>Partial solution: <a href="https://github.com/Diyago/Severstal-Steel-Defect-Detection">https://github.com/Diyago/Severstal-Steel-Defect-Detection</a> </p> |
Severstal: Steel Defect Detection | Small Writeup - From 20th to 40th | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everybody! </p>
<p>First of all thanks to my team-mates @mnpinto @iafoss for those intense weeks of competition. When we merged 2 weeks ago we were about spot 300 but we managed to climb pretty well, to eventually go down by 20 spots. </p>
<h3>Why we went down (probably)</h3>
<ul>
<li>Too low thresholds</li>
<li>Pseudo labeling : Models without it scored a bit better on private </li>
<li>Not robust enough validation, we did not have the time to do 5 folds</li>
<li>Not enough luck, still we're quite lucky to be 40th given that we could've fallen much more</li>
</ul>
<h2>Models</h2>
<h3>Classification</h3>
<p>Simple Resnet34, we used two folds out of 5.
TTA : None , hflip, vflip, both</p>
<h3>Segmentation</h3>
<p>We went for a voting ensemble of 7 models trained on the first of our 5 folds:
- UnetSeResNext50 x 4
- UnetDensenet169
- FPNEfficientnet x 2</p>
<p>SeResNexts performed better but we still wanted to add some diversity. The FPN is basically the one shared by Heng (huge thanks for sharing so much) with some personal modifications.</p>
<p>TTA : None, hflip, vflip</p>
<h3>Tricks for segmentation</h3>
<ul>
<li>Auxiliary classifier head (4 classes)</li>
<li>Lovasz Loss (5 classes + softmax) + BCE for the classification.</li>
<li>Training on progressive crop size : 256x256 -> 512x256 -> 1024x256 -> Full resolution</li>
</ul>
<h3>Augments</h3>
<ul>
<li>Crops</li>
<li>Flips</li>
<li>Brightness</li>
<li>Putting black bars on one of the sides to simulate the edge of the plate</li>
</ul>
<p><em>Thanks for reading!</em></p> |
Severstal: Steel Defect Detection | 27th place solution (8 classification and 8 segmentation models) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Better late than never, I would like to share a short write up of my solution.</p>
<p>My solution is just a combination of 16 models:</p>
<p><strong>Segmentation</strong>
4 x PSNets with resnet152 backbones + 1 x FPN with resnet50 backbone, trained on full-sized images
3 models from the notebook of @lightforever</p>
<p><strong>Classification</strong>
2 x efficientnet-b4 + 2 x efficientnet-b5, trained on 64 x 400 images
1 x resnext101_32x4d + 1 x densenet201, trained on 224 x 224 resized images
1 x resnet152 + 1 x resnet50, trained on full-sized images</p>
<p><strong>Training</strong>
All my models were trained on a random 80% subsample of the training data each, using Catalyst and strong augmentations from Albumentations (it was my first serious experience with both libraries, so I experimented with a lot of different augmentations that seemed to produce reasonable results):</p>
```
A.VerticalFlip(p=0.5),
A.HorizontalFlip(p=0.5),
A.ShiftScaleRotate(rotate_limit=15, shift_limit=0.01, p=0.3),
A.GridDistortion(p=0.2),
A.OpticalDistortion(p=0.2),
A.Blur(blur_limit=1, p=0.2),
A.CLAHE(clip_limit=2, tile_grid_size=(16, 16), p=0.2),
A.HueSaturationValue(p=0.1),
A.JpegCompression(quality_lower=50, p=0.2),
A.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.3, p=0.2),
A.RandomGamma(p=0.2),
# plus A.RandomSizedCrop or A.Resize
```
<p>For segmentation models I have used:
1) dice + pixelwise BCE loss if the class mask is not empty
2) pixelwise BCE + overall BCE (based on max predicted pixel value) if the class mask is empty</p>
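<p>A minimal PyTorch sketch of this conditional segmentation loss (my own illustration of the two cases above, per class and per sample; the equal weighting and the exact dice formulation are assumptions):</p>
```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, targets, eps=1.0):
    probs = torch.sigmoid(logits)
    num = 2 * (probs * targets).sum(dim=(1, 2)) + eps
    den = probs.sum(dim=(1, 2)) + targets.sum(dim=(1, 2)) + eps
    return 1 - num / den                                    # per-sample dice loss

def seg_loss(logits, targets):
    """logits, targets: float tensors of shape (batch, H, W) for one defect class."""
    pixel_bce = F.binary_cross_entropy_with_logits(
        logits, targets, reduction='none').mean(dim=(1, 2))
    max_logit = logits.flatten(1).max(dim=1).values         # "overall" prediction per image
    empty_bce = F.binary_cross_entropy_with_logits(
        max_logit, torch.zeros_like(max_logit), reduction='none')
    has_mask = (targets.sum(dim=(1, 2)) > 0).float()
    per_sample = has_mask * (soft_dice_loss(logits, targets) + pixel_bce) \
               + (1 - has_mask) * (pixel_bce + empty_bce)
    return per_sample.mean()
```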
<p>For classification models - BCE loss</p>
<p><strong>Inference</strong>
I have made simultaneous inference for all models: I have read small batches of test images, applied different sets of transformations, and computed predictions for all models. That means, I have read every image only once and it made inference quite fast.</p>
<p>The predictions of the different classification and segmentation models were averaged. More complex blending schemes worsened my scores.</p>
<p><strong>TTA</strong>
Classification, TTA2: original image + horizontal or vertical or both flips (different for different models)
Segmentation, TTA2: original image + both horizontal and vertical flip simultaneously</p>
<p><strong>Postprocessing</strong>
Classification thresholds [0.5, 0.5, 0.5, 0.5]
Segmentation-classification thresholds [0.5, 0.5, 0.5, 0.5]
The sum of both classification approaches thresholds [1.05, 1.05, 1.05, 1.05]
Segmentation thresholds [0.3, 0.3, 0.3, 0.3]
Min sizes [400, 400, 1400, 4000]</p>
<p>My second chosen solution had different thresholds for different classes, and better public LB score – but it turned out to be overfitted to public LB. </p> |
IEEE-CIS Fraud Detection | LB9564 NN solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I mentioned in another discussion that I trained NN with 9548 CV (unshuffled Kfold) and 9564 LB and promised to post the solution. Here it is:</p>
<p>Version 1:
- Features: all raw categorical features + all raw numerical features (except V features)
- Preprocess:
* Numerical: Log transformation first, then apply standardization; Fill Nan by mean and add boolean indicators for all numerical features to indicate whether a certain feature is missing or not
* Categorical features: Decrease the cardinality of all features first and then apply one-hot-encoding
- Optimizer: Nadam with Keras default setting
- Batch size: 2048
- Validation strategy: 5 fold unshuffled
- Early stopping based on AUC on Val dataset
- Score: LB 931+
- Architecture: DNN with 3 layers (256-256-256, dropout = 0.3, use batch normalization **after **the activation; relu as the activation function)</p>
<p>Version 2:
- Extra features added: all engineered numerical features (groupby features, count features, gap features, etc.) that were used by my lgbm model
- Preprocess method:
* Same as version 1
- score LB 937+</p>
<p>Version 3: here is the real magic
- Extra features added:
* uid: a user identifier engineered by my teammate. Just like the magic feature found by top solutions
* uid-based numerical features: Groupby(uid)[fea].transform('mean'); Groupby(uid)[fea].transform('std') for fea in all numerical_features
- Preprocess method:
* uid-based numerical features: same as the previous version
* uid: This is a high cardinality categorical feature; I tried multiple approaches to handle this and the best approach I found is to use embedding layer(size = 4) and concatenate the embedding layer with the last dense layer in NN
- Score LB 9556</p>
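<p>To make Versions 1 and 3 concrete, here is a minimal Keras sketch of a 256-256-256 network with BN after the activation and a size-4 uid embedding concatenated with the last dense layer. The cardinalities, input sizes and compile settings are placeholders, not the author's exact code.</p>
```python
from tensorflow.keras import layers, Model

n_uid = 100_000      # assumed cardinality of the label-encoded uid
n_num = 400          # assumed number of numerical / one-hot inputs

num_in = layers.Input(shape=(n_num,), name='numeric')
uid_in = layers.Input(shape=(1,), name='uid')

x = num_in
for _ in range(3):                              # 256-256-256, BN after the activation
    x = layers.Dense(256)(x)
    x = layers.Activation('relu')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.3)(x)

uid_emb = layers.Flatten()(layers.Embedding(n_uid, 4)(uid_in))   # embedding size 4
x = layers.Concatenate()([x, uid_emb])                           # concat with last dense layer
out = layers.Dense(1, activation='sigmoid')(x)

model = Model([num_in, uid_in], out)
model.compile(optimizer='nadam', loss='binary_crossentropy')
```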
<p>Version 4:
- Extra features added: some engineered categorical features used by lgbm
- Preprocess method:
- Feed these categorical features into a FM (factorization machine) with a high regularization on the embedding. Add the output of FM to the output of DNN
- Why did I use this approach? I tried other approaches like one-hot encoding with decreased cardinality, an embedding layer inside the DNN, a DeepFM model, etc. All of these methods significantly decreased local CV
- Score: 0.002 boosted in local CV; slightly decrease in LB</p>
<p>Something to notice:
- In each version, I actually trained 3 DNN with the same architecture with different loss functions (logloss, focal loss, tflearn.objectives.roc_auc_score) and blend them. This practice gave me a boost in local cv (about 0.0015)
- Applied the DAE+NN learning strategy to improve the score from 9556 to 9564. One thing to notice is that I slightly modified the approach proposed by the original author: I trained the DAE with a subset of all features, and the second-stage NN is trained with both the original features and the latent feature representation.
- The original author of this approach: <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629</a></p>
<p>What can be further improved:
- Replace Nadam with SGD and train more epochs and smaller batch sizes. This will surely improve the performance but I cannot afford the computational burden needed
- When I'm writing this post, I suddenly realize that I should use rank blend instead of a weighted average to blend 3 DNN. Using rank blending should improve performance since the metric is auc.
- I found enough evidence that restricting width and increasing depth (use residual connection when depth > 7) of the NN can improve NN performance on this dataset. But I didn't have enough time to further explore this strategy</p>
<p>Special thanks to this kernel. I learned a lot from it: <a href="https://www.kaggle.com/abazdyrev/keras-nn-focal-loss-experiments">https://www.kaggle.com/abazdyrev/keras-nn-focal-loss-experiments</a></p>
<p>PS: I'm in the midterm season and don't have enough time to prepare a clean kernel. Also, I may be too busy to answer the questions at the time being. Will post a kernel and answer all questions after the end of all my midterms.</p> |
SIIM-ACR Pneumothorax Segmentation | Yet another solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is a short overview of our solution:</p>
<h3>Ideas:</h3>
<p>The competition metric and the dataset balance make this competition a mix of a hidden classification competition and a segmentation competition. This led us to decompose the problem into the following parts:</p>
<p>a) Solve a classification task. We have done this with a mix of binary classifiers and segmentation networks. (more about it later)</p>
<p>b) Solve a segmentation task; networks that lead to good classification do not necessarily lead to good dice scores.</p>
<p>c) Given an estimated classification probability and an estimated segmentation dice, make a decision whether the mask should be included in the submission or not. </p>
<p>Just to explain the idea: let's assume that you have a 0.9 probability that a particular image has pneumothorax, but your estimate of the dice score that you can get from this image is 0.05. This clearly means that it is statistically more profitable to exclude this image and put a "-1" in the submission.</p>
<p>d) Observation - segmentation networks that predict a large max probability for at least some pixels have a high probability of processing this image with a good dice score. Another observation:
if the different networks largely agree about the image in question and produce a clear mask, then this image most likely contains pneumothorax, and this pneumothorax is most likely easily
processable by our networks.</p>
<p>e) The optimal threshold for segmentation depends on the max predicted pixel probability. </p>
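<p>A toy sketch of the kind of decision rule that points c)-e) lead to is shown below. All thresholds and the exact combination are illustrative; our real rule was the hyperopt-tuned decision tree described in the Ensembling section.</p>
```python
def include_mask(cls_prob, max_pixel_prob, expected_dice,
                 cls_thresh=0.5, pixel_thresh=0.5, dice_thresh=0.3):
    """Return True if the predicted mask is worth submitting; otherwise predict '-1'."""
    if cls_prob < cls_thresh:
        return False                        # classification ensemble says the image is empty
    if max_pixel_prob < pixel_thresh:
        return False                        # segmentation nets not confident about any pixel
    return expected_dice > dice_thresh      # only keep masks we expect to score well with

def pixel_threshold(max_pixel_prob):
    """Point e): one possible per-image rule; the real rules were found with a GA-like search."""
    return 0.5 if max_pixel_prob > 0.9 else 0.3
```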
<p>Based on these ideas, we created several ensembles of different networks: Unet and FPN as architectures; ResNet34, InceptionV2, EffNet and DenseNet121 as backbones.
For classification we used: Xception, InceptionResNetV2, EffNet.</p>
<h3>Ensembling</h3>
<p>Then we used hyperopt to find the ensemble composition that leads to the highest score. This composition is basically a hard-coded decision tree with a set
of comparisons of the predicted probability of the classification ensemble, the predicted max pixel probability of the segmentation ensemble, and the level of agreement
between the individual networks in the segmentation ensemble.</p>
<p>We also used a GA-like algorithm to find a set of rules for choosing the optimal threshold for image segmentation on a per-image basis.</p>
<h3>Learning process details:</h3>
<ol>
<li>Loss: BCE + DICE. Some networks just plain BCE</li>
<li>Batch: 1-2 per GPU card</li>
<li>No image normalization at all.</li>
<li>ReduceLROnPlateau, no warmup, no restarts - just plain Adam</li>
<li>15-20 epochs for almost all networks, except those that were trained by Lyan.</li>
</ol>
<h3>What did not work:</h3>
<p>a) End-to-end segmentation + classification training (I tried it, but it worked worse than ensembling separate networks)
b) Using external data
c) Having a grandmaster in the team.
d) Moving networks from 1080ti/2080ti to more potent cards. (Finding a good learning profile for them was a task on its own, and if you just scale the batch size and LR, your network performance will be different and usually worse.)</p>
<h3>Our biggest faults:</h3>
<p>I was too exhausted to perform a proper retraining on stage 2, so only Lyan's ensemble was retrained. This probably cost us gold.</p>
<p>Despite the big number of people in the team, we were not able to stick to a single pipeline and perform a proper scientific search.</p>
<p>Pipelines that i used for training:</p>
<p>Segmentation Pipeline: <a href="https://github.com/musket-ml/segmentation_training_pipeline">https://github.com/musket-ml/segmentation_training_pipeline</a>
Classification Pipeline: <a href="https://github.com/musket-ml/classification_training_pipeline">https://github.com/musket-ml/classification_training_pipeline</a></p>
<h3>Hardware:</h3>
<p>I performed a couple of experiments on P100/V100 cards and then stuck to using my 2x2080ti and 2x1080ti machines for training (due to the differences in learning profile); Lyan used a mix of his own 2080ti's and P100 cards.</p>
<p>Regards,
Pavel</p> |
Severstal: Steel Defect Detection | First silver(3x kaggle expert) 36th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>With my first silver today I became a "Kaggle Competition Expert".
Congratulations to all the participants of this competition.
First of all, thanks to <a href="/hengck23">@hengck23</a> for sharing his insightful ideas, and thanks to <a href="/pavel92">@pavel92</a> for his convenient segmentation library.
I am thankful to all my talented team mates <a href="/pukkinming">@pukkinming</a> <a href="/aleksandradeis">@aleksandradeis</a> <a href="/frtgnn">@frtgnn</a> <a href="/freestander">@freestander</a> for their cooperation.
But I am very sad to see that heng couldn't get his solo gold medal. I learnt plenty
from his work and we implemented a lot of his ideas in this competition; his discussion posts helped us a lot.
Can't wait to see him become a kaggle competition grandmaster, he richly deserves that.
He is my inspiration and motivation for the open source community.
He taught me one thing: "if you share your work, your wisdom won't be reduced; in fact your wisdom will be increased exponentially". Kaggle is alive because of people like him.</p>
<h1>Our solution :</h1>
<p>Ensemble of these algorithms :
unet_resnet34,
unet_se_resnext50_32x4d,
unet_mobilenet2,
fpnb0,
fpnb1,
fpnb2,
fpn_se_resnext50_32x4d,
fpn_resnet34_v2</p>
<p>with popular kaggler heng's resnet34 classifier </p>
<p>github : <a href="https://github.com/mobassir94/Severstal-Steel-Defect-Detection-Kaggle">https://github.com/mobassir94/Severstal-Steel-Defect-Detection-Kaggle</a></p>
<p>final inference kernel : (private now)</p>
<p>Thanks to rishab, heng, bibek, abhishek and a lot of other kagglers who provided code and ideas publicly; we used their resources in this competition.</p>
Recursion Cellular Image Classification | 1st Place Solution Write-up & Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Recursion and Kaggle for hosting such an interesting competition. It was really fun to participate.</p>
<p><strong>UPDATE: source code is here: <a href="https://github.com/maciej-sypetkowski/kaggle-rcic-1st">https://github.com/maciej-sypetkowski/kaggle-rcic-1st</a></strong></p>
<h2>Data pre-processing & augmentation</h2>
<ul>
<li>Loading original images (512x512)</li>
<li>HUVEC-18 is moved to the training set (known leak)</li>
<li>For training, all control images (also these from the test set) are used in the same way as non-control images</li>
<li>Training augmentations
<ul><li>Random resized crop preserving aspect with scale ~ uniform(0.5, 1) using nearest-neighbor interpolation</li>
<li>Random horizontal and vertical flip, and 90 degrees rotation</li>
<li>Normalizing each image channel to N(0, 1)</li>
<li>For each channel: channel = channel * a + b, where a ~ N(1, 0.1), b ~ N(0, 0.1)</li></ul></li>
<li>Test-time augmentations
<ul><li>Horizontal and vertical flip, and 90 degrees rotation</li></ul></li>
</ul>
<h2>Model</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2631834%2F21c9fd3c9f61e1ea9970a395ed046111%2Fmodel.png?generation=1569708962482420&alt=media" alt="">
* Backbone is pre-trained on ImageNet and first convolution is replaced with 6 input channel convolution
* Neck: BN + FC + ReLU + BN + FC + BN
* Head: FC</p>
<p>I found it important to not normalize input for the head, and that's the reason why the head and arc margin product are separate layers with different weights for fully connected layer (contrary to <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109">what bestfitting did in Human Protein Atlas Image Classification</a>).</p>
<h2>Training</h2>
<ul>
<li>Batch size: 24 (48 with gradient accumulation)</li>
<li>Optimizer: Adam</li>
<li>Weight decay: 1e-5</li>
<li><a href="https://arxiv.org/abs/1905.04899">Cutmix</a></li>
<li>Loss = ArcFaceLoss / 2 * 0.2 + SoftmaxCrossEntropyLoss * 0.8
(ArcFaceLoss is divided by 2 to more or less preserve magnitude between losses)</li>
<li>90 epochs</li>
<li>Learning rate: 1.5e-4 with cosine scheduling</li>
</ul>
<h2>Post-processing</h2>
<ol>
<li>Predictions from different site images and different test-time augmentations are combined by taking mean of logits</li>
<li>Predictions for control classes are ignored</li>
<li>831 classes that can't be on the given plate are marked as impossible</li>
<li>Linear Sum Assignment (LSA) is applied (see the sketch after this list)</li>
</ol>
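<p>For readers unfamiliar with step 4, a minimal sketch of plate-level assignment with <code>scipy.optimize.linear_sum_assignment</code> is shown below. It assumes the masking of controls and impossible classes has already pushed those columns to very low values; the variable names are mine.</p>
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_plate(logits):
    """logits: (n_images_on_plate, n_admissible_classes) after TTA/site averaging and masking.
    Returns one class per image, with every class used at most once on the plate."""
    rows, cols = linear_sum_assignment(-logits)   # maximise the total logit
    pred = np.empty(len(logits), dtype=int)
    pred[rows] = cols
    return pred

# toy usage: 277 images on a plate, 277 admissible classes
pred = assign_plate(np.random.randn(277, 277))
```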
<p>With such configuration (training on all labeled part of the dataset -- no validation), I got 0.98997 private score and 0.95802 public score (single model).
Ensembling it with models trained in the same or very similar way (most of them with train/val split 5:1) (3x DenseNet161, 2x DenseNet161 with mixup (instead of cutmix), 5x DenseNet201 also with mixup, 3x ResNeXt50 also with mixup) gave me 0.99540 private and 0.98262 public (between 3rd-4th place on private LB).</p>
<p>To reach score of 0.997 private with single model I needed to add one more trick, which I would call:</p>
<h2>Progressive pseudo-labeling</h2>
<p>In all write-ups I've read so far, pseudo-labeling methods consist of iteratively training new model(s) and enlarging training set using them. In my method, small amount of most confident predictions is pseudo-labeled and added to the training set <strong>each epoch</strong>. Precisely, for each epoch:
1. Predict all test and validation examples that weren't added to the training set yet (without TTA, only with combining over sites -- predicting with TTA could probably lead to a small improvement, but it would take more time to compute)
2. For each prediction, mark as impossible:
* control classes,
* classes that can't be on the plate,
* classes that are already assigned to any image for the plate in the training set.
3. Select the K most confident predictions (confidence = difference between the greatest and second greatest class prediction); see the sketch after this list
4. Add new examples to the training set. If at least two examples are on the same plate and have the same class pseudo-labeled, add to the training set only the most confident one (to preserve uniqueness of classes on the plate)</p>
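<p>A small sketch of steps 2-3 (selecting the most confident unlabeled examples by the top1-top2 margin). This is only an illustration of the selection criterion; the masking, the per-plate uniqueness check and the training loop around it are omitted.</p>
```python
import numpy as np

def most_confident(probs, k):
    """probs: (n_unlabeled, n_classes) with impossible classes already zeroed out.
    Returns the indices and pseudo-labels of the k most confident examples."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]          # confidence = top1 - top2
    idx = np.argsort(-margin)[:k]
    return idx, probs[idx].argmax(axis=1)
```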
<p>During class assignment, I use greedy-like approach instead of LSA. However, to generate final predictions, the same post-processing as earlier (without pseudo-labeling) is applied (and LSA there). Using LSA increases score because examples that were added later are more difficult and have smaller confidence (because network saw them fewer times, and learning rate was smaller (decreasing learning rate policy)).</p>
<h2>Training previous single model further</h2>
<ul>
<li>for additional 40 epochs,</li>
<li>pseudo-labeling 40% of the test set at the start, and then adding 1.5% each epoch,</li>
<li>cosine learning rate schedule with initial learning rate = 6e-5 scheduled for 60 epochs (i.e. at the end of training it is 1.5e-5),</li>
</ul>
<p>gave me 0.99700 private and 0.99029 public, which already puts me in 1st place.</p>
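<p>For reference, the truncated cosine schedule from the list above can be reproduced with PyTorch's built-in scheduler (the model and optimizer here are placeholders); stepping a 60-epoch schedule for only 40 epochs indeed ends at 1.5e-5:</p>
<pre><code>import torch

model = torch.nn.Linear(10, 10)   # placeholder just to illustrate the schedule
optimizer = torch.optim.Adam(model.parameters(), lr=6e-5, weight_decay=1e-5)

# cosine annealing planned over 60 epochs ...
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=60, eta_min=0.0)

for epoch in range(40):           # ... but only 40 epochs are actually run, so the final LR is
    # train_one_epoch(...)        # 6e-5 * 0.5 * (1 + cos(40/60 * pi)) = 1.5e-5
    scheduler.step()
</code></pre>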
<p>Ensembling it with another model trained in the same way, but with a 5:1 train/val split, I got 0.99749 private and 0.99187 public. Adding one more model to the ensemble (also trained in the same way, but on a different split) gave me 0.99763 private and 0.99187 public, which already matches my final private score.</p>
<p>I noticed that around the 120th epoch (30th epoch of pseudo-labeling -- 85% of the test set already added) the first pseudo-label misclassifications on the validation set started to occur, so I tried to fine-tune the model even further: taking the checkpoint from the 120th epoch, starting with 80% of the test set, and again incrementally adding new images for 30 epochs; then taking the checkpoint from 5 epochs before the end and starting from 95% for 20 epochs. However, this only let me classify one more private example correctly (0.99707) and increased the public score (i.e. probably only the U2OS-04 experiment) to 0.99232.</p>
<p>My final submission is an ensemble of 11 models (6x DenseNet161, 5x DenseNet201), each of them trained with pseudo-labeling, with more TTA (also predicting on crop-resized images with scales 0.75 and 0.85). It didn't give any boost on the private test set (0.99763), but it helped on the public test set (probably only on U2OS-04) (0.99480).</p>
<h2>Other insights</h2>
<ul>
<li>Mixup performs a little better than cutmix on the part without pseudo-labeling, but it converges more slowly. Conversely, with pseudo-labeling, cutmix was a little better (probably because of its faster convergence)</li>
<li>Larger architectures are better: DenseNet121 < DenseNet169 < DenseNet201 < DenseNet161 -- for people not familiar with DenseNets: DenseNet161 has fewer layers but more parameters than DenseNet201</li>
<li>EfficientNets and ResNeXts didn't work for me</li>
</ul>
<h2>Attempt to use control images in a smarter way</h2>
<p>I want to share an approach I tried; it didn't give any boost in score and I didn't use it in the final submission, but I think it's valuable information, especially for further research.
My idea was to feed the head not only the image embedding, but also some information about a reference image from the same plate/experiment (e.g. its embedding and one-hot label).
Using only control images as references would lead to overfitting. To tackle that, I also used non-control images as references -- during training, control and non-control images are treated in the same way; during inference, only control images are used as references (obviously, non-control images in the test set are not labeled).
We have 1139 classes per experiment (or 277 + 31 = 308 per plate), so with random pairing the head would see each pair of classes only once per 1139 * 1139 = 1297321 images (once per 15 epochs) or 308 * 308 = 94864 images (once per epoch). To solve this, I ensure that every batch contains a constant number of images from each of a few randomly chosen experiments/plates. For example, with batch size = 48 I can have 8 (experiments/plates) x 6 (images per experiment/plate) and run the head on every ordered pair within each experiment/plate in the batch. That gives 8 * 6 * 5 = 240 pairs per batch (the image and the reference must not be the same image), and it doesn't increase training time (the embedding of each image is computed only once, and the head consists of a few fully connected layers).</p>
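<p>A simplified sketch of the in-batch pairing (the batch is laid out as 8 consecutive groups of 6 images from the same experiment/plate; <code>pair_head</code> and the exact features fed to it are illustrative):</p>
<pre><code>import itertools
import torch

def make_in_batch_pairs(group_size=6, n_groups=8):
    """Indices of (image, reference) pairs inside a batch laid out group by group;
    self-pairs are excluded, giving n_groups * group_size * (group_size - 1) pairs."""
    pairs = []
    for g in range(n_groups):
        offset = g * group_size
        for i, j in itertools.permutations(range(group_size), 2):
            pairs.append((offset + i, offset + j))
    return torch.tensor(pairs)                      # 8 * 6 * 5 = 240 pairs for the default sizes

# Usage idea: embeddings are computed once per image, the pair head only re-uses them.
# emb = model.backbone(images)                      # (48, emb_dim)
# idx = make_in_batch_pairs()                       # (240, 2)
# head_input = torch.cat([emb[idx[:, 0]],           # embedding of the image to classify
#                         emb[idx[:, 1]],           # embedding of the reference image
#                         reference_labels[idx[:, 1]]],  # one-hot (pseudo-)label of the reference
#                        dim=1)
# logits = pair_head(head_input)
</code></pre>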
<p>However, it didn't work any better than normal classification, and sometimes even worse. I tried to add or modify features for the head, for example:</p>
<ul>
<li>concatenate the difference between / multiplication of the image and reference embeddings,</li>
<li>concatenate the corresponding vector from the arc margin product layer to the reference label,</li>
<li>normalize / not normalize the embeddings,</li>
<li>detach some of the features (not computing gradients through them).</li>
</ul>
<p>I also tried to:</p>
<ul>
<li>add more layers to the head,</li>
<li>heavily augment all images in a batch belonging to the same experiment/plate in the same way -- they would all get the same brightness, contrast, gamma correction, scale, and so on -- the idea was to artificially simulate other cell types and push the model toward using the references more effectively,</li>
<li>use mixup only within images from the same experiment/plate.</li>
</ul>
<p>However, no luck.</p>
<p>What's more, after training such a model and feeding random noise as the reference, the network still produced very similar predictions with almost the same validation score (and not always a worse one). So the network didn't learn how to use the references properly.
This would imply that creating a model that performs well on cell types not seen during training by using control images may be a very hard and challenging problem.</p>