Title of Competition | Title of Writeup | User | Writeup |
---|---|---|---|
Freesound Audio Tagging 2019 | 6th place solution fastai | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here are the write-up and code for my solution!</p>
<p><strong>Blog post:</strong> <a href="https://link.medium.com/Kv5kyHjcIX">https://link.medium.com/Kv5kyHjcIX</a>
<strong>Code:</strong> <a href="https://github.com/mnpinto/audiotagging2019">https://github.com/mnpinto/audiotagging2019</a></p>
<p><strong>Summary:</strong>
* Models: xresnets
* Image size: 256x256
* Mixup sampling from a uniform distribution
* Horizontal and Vertical Flip as new labels (total 320 labels)
* Compute the loss only for samples whose F2 score (computed with a threshold of 0.2) is less than 1.
* Noisy data: ~3500 "good noisy samples" used the same way as curated data
* TTA: Slice clips every 128 px along the time axis (no overlap), generate predictions for each slice, and compute the <code>max</code> for each class (see the sketch after this list).
* Final submissions: 1) Average of 2 models: public LB 0.742, private LB 0.74620; 2) Average of 6 models: public LB 0.742, private LB 0.75421.</p>
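<p>A minimal sketch of this slicing TTA, assuming a PyTorch model that maps a single-channel spectrogram crop to per-class scores (the function and argument names here are illustrative, not from the original code):</p>
<pre><code>import torch
import torch.nn.functional as F

def tta_max_over_slices(model, spec, width=128):
    """Slice a clip's spectrogram into non-overlapping windows of `width`
    frames along the time axis, predict each slice, and take the
    per-class max over all slices."""
    n_mels, n_frames = spec.shape
    starts = range(0, max(n_frames - width + 1, 1), width)
    preds = []
    model.eval()
    with torch.no_grad():
        for s in starts:
            crop = spec[:, s:s + width]
            if crop.shape[1] != width:          # pad the last short slice
                crop = F.pad(crop, (0, width - crop.shape[1]))
            preds.append(torch.sigmoid(model(crop[None, None])))  # [1, n_classes]
    return torch.stack(preds).max(dim=0).values  # per-class max over slices
</code></pre>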
<p><strong>Additional observations:</strong>
* I found better results when using random crops of 128x128 rescaled to 256x256, compared to random crops of 128x256 rescaled to 256x256; I was expecting the opposite.
* I wonder why max_zoom=1.5 works; I would not expect so.</p>
<p><strong>Acknowledgements:</strong>
<a href="/daisukelab">@daisukelab</a> thanks for the code to generate the mel spectrograms! Thanks to everyone that contributed in the discussions or with kernels. And finally, thanks to the organizers for this great competition!</p> |
Freesound Audio Tagging 2019 | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone! </p>
<p>At first, congratulations to all participants and to all who got any medals or any new ranks!</p>
<p>Below I want to outline my solution. My approach is quite similar to many approaches that were previously shared on the forum, so I will focus only on those tricks and methods that I find important or that weren't mentioned in others' solutions.</p>
<p><strong>Models</strong></p>
<p>I used two types of models, both are based on convolutions. The first type uses 2d convolutions and works on top of mel-scale spectrograms, while the second uses 1d-convolutions on top of raw STFT representations with relatively small window size like 256, so it's only 5 ms per frame or so. Both types of models are relatively shallow and consist of 10-12 convolutional layers (or 5-6 resnet blocks) with a small number of filters. I use a form of deep supervision by applying global max pooling after each block (typically starting from the first or second block) and then concatenating maxpool outputs from each layer to form the final feature vector which then goes to a 2-layer fully-connected classifier. I also tried using RNN's instead of a max pooling for some models. It made results a bit worse, but RNN seemed to make different mistakes, so it turned out to be a good member of the final ensemble.</p>
<p><strong>Frequency encoding</strong></p>
<p>2d convolutions are position-invariant, so the output of a convolution would be the same regardless of where the feature is located. Spectrograms are not images: the Y-axis corresponds to signal frequency, so it would be nice to assist a model by providing this sort of information. For this purpose, I used a linear frequency map going from -1 to 1 and concatenated it to the input spectrogram as a second channel. It's hard to estimate now, without retraining all the models, how much gain I got from this little modification, but I can say it was no less than 0.005 in terms of local CV score.</p>
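<p>A minimal sketch of this frequency-encoding channel, assuming the input spectrogram batch is laid out as [batch, 1, n_mels, n_frames] (layout and names are my assumption):</p>
<pre><code>import torch

def add_frequency_channel(spec):
    """Concatenate a linear frequency map (-1..1 along the mel axis) as a
    second input channel, so that position-invariant 2D convolutions can
    tell where on the frequency axis a feature occurs."""
    b, _, n_mels, n_frames = spec.shape
    freq = torch.linspace(-1.0, 1.0, n_mels, device=spec.device)
    freq = freq.view(1, 1, n_mels, 1).expand(b, 1, n_mels, n_frames)
    return torch.cat([spec, freq], dim=1)  # [batch, 2, n_mels, n_frames]
</code></pre>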
<p><strong>This is not really a classification task</strong></p>
<p>Most teams treated the problem as multilabel classification and used a form of binary loss such as binary cross entropy or focal loss. This approach is definitely valid, but in my experiments, it appeared to be a little suboptimal. The reason is that the metric (lwlrap) is not a pure classification metric. Contrary to accuracy or f-score, it is based on <em>ranks</em>. So it wasn't really a surprise that when I used a loss function based on ranks rather than on binary outputs, I got a huge improvement. Namely, I used something called LSEP (<a href="https://arxiv.org/abs/1704.03135">https://arxiv.org/abs/1704.03135</a>) which is just a soft version of a pairwise rank loss. It makes your model score positive classes higher than negative ones, while a binary loss increases positive scores and decreases negative scores independently. When I switched from BCE to LSEP, I immediately got an improvement of approximately 0.015, and, as a nice bonus, my models started to converge much faster.</p>
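<p>A minimal sketch of an LSEP-style loss as I read the paper linked above, not the author's exact implementation (for clarity it skips the numerical stabilization a production version would need for very large scores):</p>
<pre><code>import torch

def lsep_loss(scores, targets):
    """Smooth pairwise rank loss: log(1 + sum over (positive p, negative n)
    pairs of exp(s_n - s_p)), so positives are pushed above negatives.

    scores:  [batch, n_classes] raw model outputs
    targets: [batch, n_classes] binary multi-label matrix
    """
    pos = targets.bool()
    neg = ~pos
    # diff[b, i, j] = scores[b, j] - scores[b, i]
    diff = scores.unsqueeze(1) - scores.unsqueeze(2)
    # keep only pairs where class i is a positive and class j is a negative
    mask = (pos.unsqueeze(2) & neg.unsqueeze(1)).float()
    exp_term = torch.exp(diff * mask) * mask   # zero for all other pairs
    return torch.log1p(exp_term.sum(dim=(1, 2))).mean()
</code></pre>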
<p><strong>Data augmentation</strong></p>
<p>I used two augmentation strategies. The first one is a modified MixUp. In contrast to the original approach, I used an OR rule for mixing labels. I did so because a mix of two sounds still allows you to hear both. I tried the original approach with weighted targets at some point and my results got worse.</p>
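<p>A minimal sketch of MixUp with OR-combined labels; drawing the mixing weight from a Beta distribution is my assumption (the write-up only specifies the OR rule for the targets):</p>
<pre><code>import numpy as np
import torch

def mixup_or(x, y, alpha=1.0):
    """MixUp the inputs, but combine the label vectors with an OR rule:
    a mix of two sounds still contains both sets of events.

    x: [batch, ...] input batch, y: [batch, n_classes] binary float targets
    """
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = torch.clamp(y + y[perm], max=1.0)  # OR of the two label vectors
    return x_mix, y_mix
</code></pre>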
<p>The second strategy is augmentations based on audio effects such as reverb, pitch, tempo and overdrive. I chose the parameters of these augmentations by carefully listening to augmented samples.</p>
<p>I have found augmentations to be very important for getting good results. I guess the total improvement I got from these two strategies is about 0.05 or so. I also tried several other approaches, such as splitting the audio into several chunks and then shuffling them, replacing some parts of the original signals with silence, and a few others, but they didn't make my models better.</p>
<p><strong>Training</strong></p>
<p>I used quite large audio segments for training. For most of my models, I used segments from 8 to 12 seconds. I didn't use TTA for inference and used full-length audio instead.</p>
<p><strong>Noisy data</strong></p>
<p>I tried several unsupervised approaches such as <a href="https://arxiv.org/abs/1807.03748">Contrastive Predictive Coding</a>, but never managed to get good results from it.</p>
<p>I ended up applying a form of iterative pseudolabeling. I predicted new labels for the noisy subset using a model trained on curated data only, chose the best 1k samples in terms of agreement between the predicted labels and the provided labels, and added these samples to the curated subset with their original labels. I then repeated the procedure, this time keeping the top 2k samples. I applied this approach several times until I reached the 5k best noisy samples. At that point, the predictions generated by the model started to diverge significantly from the actual noisy labels, so I decided to discard the labels of the remaining noisy samples and simply used the model predictions as labels. In total, I trained approximately 20 models using different subsets of the noisy train set with different pseudolabeling strategies.</p>
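<p>A sketch of one selection round of this scheme. The write-up does not define the agreement measure exactly, so the scoring below (mean predicted probability of the provided positives minus that of the provided negatives) is my own assumption, purely for illustration:</p>
<pre><code>import numpy as np

def select_confident_noisy(pred_probs, noisy_labels, k):
    """Rank noisy clips by how well model predictions agree with their
    provided labels and return the indices of the k most agreeing clips.

    pred_probs:   [n_samples, n_classes] predicted probabilities on the noisy set
    noisy_labels: [n_samples, n_classes] binary labels shipped with the noisy set
    """
    pos = (pred_probs * noisy_labels).sum(1) / np.maximum(noisy_labels.sum(1), 1)
    neg = (pred_probs * (1 - noisy_labels)).sum(1) / np.maximum((1 - noisy_labels).sum(1), 1)
    agreement = pos - neg
    # add these clips (with their original labels) to the curated training set
    return np.argsort(-agreement)[:k]
</code></pre>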
<p><strong>Inference</strong></p>
<p>I got a great speed-up by computing both STFT spectrograms and mel spectrograms on a GPU. I also grouped samples with similar lengths together to avoid excessive padding. These two methods combined with relatively small models allowed me to predict the first stage test set in only 1 minute by any of my models (5 folds). </p>
<p><strong>Final ensemble</strong></p>
<p>For the final solution, I used 11 models trained with slightly different architectures (1d/2d cnn, rnn/no-rnn), slightly different subsets of the noisy set (see "noisy data" section) and slightly different hyperparameters. </p>
<p>Source code is available: <a href="https://github.com/ex4sperans/freesound-classification/tree/master">https://github.com/ex4sperans/freesound-classification/tree/master</a></p>
<p>I'll add a detailed README soon.</p> |
Freesound Audio Tagging 2019 | A Naive Solution (28th in Private LB) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank organizers and all participants, and congratulate winners on excellent results !
For me, this is my first image (sound) competition on Kaggle and I have learned a lot.</p>
<p>Many top teams have unveiled their nice solutions (and scores), so I'm a little embarrassed to publish my solution here, but I will do so as a memento. </p>
<hr>
<h1>Local Validation</h1>
<p>I created an 80-class-balanced 5-fold validation and evaluated model performance with BCE. At an early stage of the competition, I checked the correlation between BCE and LWLRAP. The figure below shows a reasonably good correlation. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F479538%2F01d7ab43832e5eb5eec8a139fdb69c19%2Ffig1.png?generation=1561761823509063&alt=media" alt=""></p>
<h1>Features</h1>
<ol>
<li>Remove silent audios</li>
<li>Trim silent parts</li>
<li>Log Mel Spectrogram ( SR 44.1kHz, FFT window size 80ms, Hop 10ms, Mel Bands 64 )</li>
<li>Clustered frequency-wise statistical features</li>
</ol>
<p>I did not do anything special, but I will explain <strong>4</strong>. <br>
I thought a CNN can capture audio properties in frequency space, much like a Fourier analysis, but is less suited to statistical values (max, min, ...). Also, the labels of train-noisy are unreliable, so I created frequency-wise statistical features from the spectrograms and computed cluster distances without using any label information (unsupervised). The procedure is as follows: </p>
<p>a. Compute 25 statistical values (Max, Min, Primary difference, Secondary difference, etc... ) and flatten 64 x 25 (=1600) features
b. Compute 200 cluster distances with MiniBatchKMeans ( dimensional reduction from 1600 to 200 )</p>
<p>This feature pushed my score about 0.5 ~ 1% in each model. </p>
<h1>Models, Train and Prediction</h1>
<p>In this competition, we do not have a lot of time to make inferences ( less than 1 GPU hour ). So I selected 3 relatively light-weight models. </p>
<ul>
<li>Mobile Net V2 with/without clustered features</li>
<li>ResNet50 with/without clustered features</li>
<li>DenseNet121 with/without clustered features</li>
</ul>
<p>My final submission is the ensemble of above 6 models ( 3 models x 2 type of features ). Here is a model pipeline. The setting of train and prediction with TTA is written in this figure. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F479538%2Fd21b6e837c1be18cc4a66bbbc82c3174%2Ffs2019_model-pipeline_01.png?generation=1561760818137707&alt=media" alt=""></p>
<h1>Performance</h1>
<p>I used the weighted geometric averaging to blend 6 model predictions. Below table shows each performance and the blending coefficients. Those coefficients are computed with optimization based on Out Of Fold predictions. LWLRAP values are calculated with 5 fold OOF predictions on train-curated data.</p>
<p>| Model | LWLRAP on train-curated | Blending Coefficient |
| --- | --- | --- |
| MobileNetV2 with clustered features( cf ) | 0.84519 | 0.246 |
| MobileNetV2 without cf | 0.82940 | 0.217 |
| ResNet50 with cf | 0.84490 | 0.149 |
| ResNet50 without cf | 0.83006 | 0.072 |
| DenseNet121 with cf | 0.84353 | 0.115 |
| DenseNet121 without cf | 0.83501 | 0.201 |
| <strong>Blended</strong> | 0.87611 ( Private 0.72820 ) | --- |</p>
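<p>A minimal sketch of the weighted geometric averaging used for the blend above (the coefficients themselves were found by optimization on the OOF predictions; that search is not shown here):</p>
<pre><code>import numpy as np

def weighted_geometric_blend(preds, weights):
    """preds:   list of [n_samples, n_classes] probability arrays (one per model)
    weights: list of non-negative blending coefficients."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    log_blend = sum(wi * np.log(np.clip(p, 1e-8, 1.0)) for wi, p in zip(w, preds))
    return np.exp(log_blend)
</code></pre>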
<p>Thank you very much for reading to the end. <br>
See you at the next competition !</p> |
Freesound Audio Tagging 2019 | [155th place] My very first competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks a lot to the organizers - it was a fun task, especially the noisy dataset, which is full of hilarious gems.
Kaggle rolled out a new UI during the early days of the competition, resulting in completely missing kernel code and invisible cells.
A new version of PyTorch was also released.</p>
<p>Those are just my outtakes, moving forward to the new competitions:
- <strong>Build a team as early as possible</strong>. Sometimes it was very hard to motivate myself to stay up the whole night after the full working day at the day job. There are plenty of tasks of various scale and everyone would find his place.
- <strong>Good data == Good model / Bad data == bad model</strong>. I've heard this a dozen times. But it's very easy to get tunnel vision on network structure or training schedule and completely forget about input data transformations.
- <strong>Cross-validation. It was actually the first time I've used KFold</strong>. The problem was that I've stacked all predictions by simply computing the mean value. There are plenty of more advanced tactics that would've given slightly better results.
- <strong>Regularization is magic (sometimes)</strong>. Training a network is some kind of art :) I was using vanilla PyTorch and it requires a bit more in-depth knowledge compared to Fast.ai. That's just my impression.
- <strong>Octave</strong> convolution doesn't seem to work well on spectrograms (at least I didn't see any improvements). I haven't tried <strong>CoordConv</strong>, but it would be interesting to compare results.</p> |
Freesound Audio Tagging 2019 | My write up. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's finally the weekend, so I have time to write a quick post. It might be disappointing if you came here to see if there is any magic or tricks.</p>
<p>As many of you know, there is quite some difference even between the curated training set and the public test set. For example, those almost duplicated 'Mr. Peanut' clips were very bad and would confuse the model on the test set. For the noisy set, the differences are much larger. The slam clips seem to be from a basketball court (slam dunk?), the tap clips are mostly tip-tap dance, and the surf and wave clips are much windier. It also has notably more human speech in clips labeled with non-human sounds. I also tried many methods to use the noisy set, but with limited success, reaching only .72.</p>
<p>My .76 model is fine-tuned to overfit the test set. I spent my Memorial Day holiday weekend hand-making a tuning set of cherry-picked clips from the curated set and noisy set that sound most similar to the public test set, and used this dataset to fine-tune my models. It is kind of cheating, in that it is equivalent to sneaking a peek at the exam paper and giving my model a targeted list of past quiz questions to practice right before the exam. It will do well on this particular exam, but might not do well on a new one. </p>
<p>I did not use this method as my final submission, because I don't think it solves the problem the competition is about. I did a similar thing in my previous <a href="https://www.kaggle.com/c/inclusive-images-challenge/">competition</a>, where I missed a top spot by not submitting a fine-tuned model.</p>
<p>So quickly about my solution without the fine-tuning. It is Conv nets trained with mixup and random frequency/time masks on curated set and pseudo-labeled datasets. </p>
<p>I have two pseudo-label sets. Here is how I generated them. For each audio clip, I run my model over a sequence of windows on it. For each window, I calculate the ratio of the top-k activations to the sum of all activations. I call this ratio the signal-noise ratio. My first pseudo-label set consists of the windows with the highest ratios and their associated most confident labels. My second pseudo-label set consists of full clips; I average the activations of the windows with the highest ratios to form the labels. During training, I zip the curated training set and the two pseudo-label sets, calculate the losses separately, and linearly combine them for the backward pass. </p>
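<p>A minimal sketch of the per-window signal-noise ratio described above (window generation and thresholds are not specified in the write-up, so only the ratio itself is shown):</p>
<pre><code>import numpy as np

def window_signal_ratio(window_probs, k=3):
    """window_probs: [n_windows, n_classes] activations for the sliding
    windows of one clip. Returns, per window, the ratio of the top-k
    activations to the sum of all activations."""
    topk = np.sort(window_probs, axis=1)[:, -k:].sum(axis=1)
    ratio = topk / (window_probs.sum(axis=1) + 1e-8)
    # windows with the highest ratio become confident pseudo-labeled crops;
    # averaging their activations gives a full-clip pseudo label
    return ratio
</code></pre>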
<p>At inference time, I do the same scanning through windows, calculating ratios, etc. to generate predictions as well. I found it better than averaging random crops. But this may simply be due to the fact that my model learned to focus on those strong short signals from the first pseudo-label set.</p> |
Freesound Audio Tagging 2019 | 9th place solution: smaller and faster | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I would like to congratulate all winners and participants of this competition and thank Kaggle and the organizers for posting this interesting challenge and providing resources for participating in it. Also, I want to express my highest gratitude and appreciation to my teammates: <a href="/suicaokhoailang">@suicaokhoailang</a>, <a href="/theoviel">@theoviel</a>, and <a href="/yanglehan">@yanglehan</a> for working with me on this challenge. It was the first time I worked on an audio-related task, and it was a month of quite intense learning. I would like to give special thanks to <a href="/theoviel">@theoviel</a> for working quite hard with me on optimizing our models and preparing the final submissions.</p>
<p>The code used for training our final models is available now at <a href="https://www.kaggle.com/theoviel/9th-place-modeling-kernel">this link</a> (private LB 0.72171 score, top 42, with a single model (5 CV folds) trained for 7 hours). Further details of our solution can be found in our <a href="https://storage.googleapis.com/kaggle-forum-message-attachments/563780/13697/DCASE2019_Challenge.pdf">DCASE2019 technical report</a>.</p>
<p>The <strong>key points</strong> of our solutions are the following (see details below): (1) Use of <strong>small and fast models with optimized architecture</strong>, (2) Use of <strong>64 mels</strong> instead of 128 (sometimes less gives more) with <strong>4s duration</strong>, (3) Use <strong>data augmentation and noisy data</strong> for pretraining.</p>
<p><strong>Stage 1</strong>: we started the competition, like many participants, by experimenting with common computer vision models. The input size of the spectrograms was 256x512 pixels, upscaling the input image along the first dimension by a factor of 2. With this setup the best performance was demonstrated by DenseNet models: they outperformed the baseline model published in <a href="https://www.kaggle.com/mhiro2/simple-2d-cnn-classifier-with-pytorch">this kernel</a> and successfully used in the competition a year ago; also, Dnet121 was faster. Using pretraining on the full noisy set, spectral augmentation, and MixUp, CV could reach ~0.85+, and the public scores for a single fold, 4 folds, and an ensemble of 4 models are 0.67-0.68, ~0.70, and 0.717, respectively. Although these models are not used for our submission, these experiments provided important insights for the next stage.</p>
<p><strong>Stage 2</strong>:
<strong>Use of noisy data</strong>: It is the main point of this competition that organizers wanted us to focus on (no external data, no pretrained models, no test data use policies). We have used 2 strategies: (1) pretraining on full noisy data and (2) pretraining on a mixture of the curated data with most confidently labeled noisy data. In both cases the pretraining is followed by fine tuning on curated only data. The most confident labels are identified based on a model trained on curated data, and further details can be provided by <a href="/theoviel">@theoviel</a>. For our best setup we have the following values of CV (in stage 2 we use 5 fold scheme): training on curated data only - 0.858, pretraining on full noisy data - 0.866, curated data + 15k best noisy data examples - 0.865, curated data + 5k best noisy data examples - 0.872. We have utilized all 3 strategies of noisy data use to create a variety in our ensemble.</p>
<p><strong>Preprocessing</strong>: According to our experiments, big models, like Dnet121, work better on 128 mels and even higher image resolution, while the default model reaches its best performance with 64 mels. This setup also decreases training time and improves convergence speed. 32 mels could also be considered, but the performance drops to 0.856 for our best setup. Use of 4s intervals instead of the traditional 2s also gave a considerable boost. The input image size for the model is 64x256x1. We tried normalization based on both per-image and global train set statistics, and the results were similar, though our best CV is reached with global normalization. In the final models we used both strategies to create diversity. We also tried to experiment with the fft window size but did not see a significant difference and stayed with 1920. One thing we didn't have time to try is using mels with different window sizes as channels of the produced image. In particular, <a href="https://arxiv.org/abs/1706.07156">this paper</a> shows that some classes prefer a longer window size while others prefer a shorter one. The preprocessing pipeline is similar to the one described in <a href="https://www.kaggle.com/daisukelab/creating-fat2019-preprocessed-data">this kernel</a>.</p>
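<p>For reference, a minimal preprocessing sketch matching the 64-mel / 4-second setup described above. The hop length is an assumption chosen to give roughly 256 frames for a 4 s segment at 44.1 kHz; the write-up only states 64 mels, 4 s crops and an FFT window of 1920:</p>
<pre><code>import librosa
import numpy as np

def clip_to_logmel(path, sr=44100, n_mels=64, duration=4.0, n_fft=1920, hop=690):
    """Load a clip, pad/trim it to `duration` seconds and return a
    log-scaled mel spectrogram of shape [n_mels, ~256]."""
    y, _ = librosa.load(path, sr=sr)
    target = int(sr * duration)
    if len(y) < target:
        y = np.pad(y, (0, target - len(y)))   # pad short clips with silence
    y = y[:target]
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=n_mels)
    return librosa.power_to_db(mel).astype(np.float32)
</code></pre>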
<p><strong>Model architecture</strong>: At stage 2 we used the model from <a href="https://www.kaggle.com/mhiro2/simple-2d-cnn-classifier-with-pytorch">this kernel</a> as a starting point. The performance of this base model for our best setup is 0.855 CV. Based on our prior positive experience with DensNet, we added dense connections inside convolution blocks and concatenate pooling that boosted the performance to 0.868 in our best experiments (model M1):
</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pool=True):
        super().__init__()
        padding = kernel_size // 2
        self.pool = pool
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size,
                      stride=1, padding=padding),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(out_channels + in_channels, out_channels,
                      kernel_size=kernel_size, stride=1, padding=padding),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        )

    def forward(self, x):  # x.shape = [batch_size, in_channels, a, b]
        x1 = self.conv1(x)
        x = self.conv2(torch.cat([x, x1], 1))  # dense connection inside the block
        if self.pool:
            x = F.avg_pool2d(x, 2)
        return x  # x.shape = [batch_size, out_channels, a//2, b//2]
</code></pre>
<p>
Increasing the number of convolution blocks from 4 to 5 gave only 0.865 CV. Use of pyramidal pooling for the 2nd, 3rd and 4th conv blocks (M2) gave a slightly worse result than M1. Finally, our ultimate setup (M3), consisting of 5 conv blocks with pyramidal pooling, reached 0.872 CV. DenseNet121 in the same pipeline reached only 0.836 (DenseNet121 requires higher image resolution, 256x512, to reach 0.85+ CV). From the experiments, it looks like for audio it is important to have nonlinear operations before size reduction by pooling, though we did not check this in detail. We used M1, M2, and M3 to create variability in our ensemble. Because of the submission limit we checked the performance of only a few models on the public LB, with the best single model scoring 0.715.</p>
<p>This plot illustrates architecture of M3 model:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2F4c15c1ec24d235c7834cc1ae94a3ca38%2FM3.png?generation=1561588228369529&alt=media" alt=""></p>
<p>Here, all tested models are summarized:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2Ffaca75bb8fb9dc910a648d1c141313be%2Fmodels.png?generation=1561746099636890&alt=media" alt=""></p>
<p>And here the model performance for our best setup:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2Ff7014dbf8de75685fff13c683384f068%2Fp.png?generation=1561588425442446&alt=media" alt=""></p>
<p><strong>Data augmentation</strong>:
The main thing for working with audio data is MixUp. In contrast to images of real objects, <a href="https://towardsdatascience.com/whats-wrong-with-spectrograms-and-cnns-for-audio-processing-311377d7ccd">sounds are transparent</a>: they do not overshadow each other. That is why MixUp is so efficient for audio; it gives a 0.01-0.015 CV boost. At stage 1 the best results were achieved with the MixUp alpha parameter equal to 1.0, while at stage 2 we used 0.4. <a href="https://arxiv.org/abs/1904.08779">Spectral augmentation</a> (with 2 masks for frequency and time, with coverage ranging between zero and 0.15 and 0.3, respectively) gave about a 0.005 CV boost. We did not use stretching of the spectrograms in the time domain because it gave lower model performance. In several models we also used <a href="https://arxiv.org/abs/1905.09788">Multisample Dropout</a> (other models were trained without dropout), though it decreased CV by ~0.002. We did not apply horizontal flip since it decreased CV and is also not natural: I do not think people would be able to recognize sounds played backwards. It is the same as training ImageNet with vertical flips.</p>
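<p>A minimal sketch of the spectral augmentation described above (one frequency mask with coverage up to 0.15 and one time mask with coverage up to 0.3, drawn uniformly), applied to a single [n_mels, n_frames] array:</p>
<pre><code>import numpy as np

def spec_augment(spec, max_freq_frac=0.15, max_time_frac=0.3):
    """Zero out one random frequency band and one random time interval."""
    spec = spec.copy()
    n_mels, n_frames = spec.shape
    f = np.random.randint(0, int(max_freq_frac * n_mels) + 1)
    t = np.random.randint(0, int(max_time_frac * n_frames) + 1)
    f0 = np.random.randint(0, n_mels - f + 1)
    t0 = np.random.randint(0, n_frames - t + 1)
    spec[f0:f0 + f, :] = 0.0
    spec[:, t0:t0 + t] = 0.0
    return spec
</code></pre>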
<p><strong>Training</strong>: At the pretraining stage we used one cycle of cosine annealing with warm up. The maximum lr is 0.001, and the number of epochs is ranged between 50 and 70 for different setups. At the stage of fine tuning we applied ReduceLROnPlateau several times to alternate high and low lr. The code is implemented with Pytorch. The total time of training for one fold is 1-2 hours, so the training of entire model is 6-9 hours. Almost all our models were trained at kaggle, and the kernel is at <a href="https://www.kaggle.com/theoviel/9th-place-modeling-kernel">this link</a>.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2F51ade3b68a7ab201b15fb9736d618a7b%2Flr.png?generation=1561588570346398&alt=media" alt=""></p>
<p><strong>Submission</strong>:
Use of 64 mels has allowed us also to shorten the inference time. In particular, each 5-fold model takes only 1-1.5 min for generation of a prediction with 8TTA. Therefore, we were able to use an ensemble of 15 models to generate the final submissions.
The final remark about the possible reason of the gap between CV and public LB: <a href="https://arxiv.org/pdf/1904.05635v1.pdf">this paper</a> (Task 1B) reports 0.15 lwlrap drop for test performed on data recorded with different device vs the same device as used for training data. So, the public LB data may be selected for a different device, but we never know if it is true or not.</p> |
Freesound Audio Tagging 2019 | Main features of final solution [22 on public] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1) Using power(0.125) instead of log in spectrograms (see the sketch after this list);
2) Selecting a balanced (or unbalanced) sample of noisy files that are well predicted by the current model (1500, 2500, 5000, 10000), with priority for files rarely predicted on the validation set;
3) Different parameters of the spectrogram extraction (hop, n_mels). We used:
- rate: 44100, n_mels: {128, 256}, n_fft: 2560, hop_length: {512, 694, 1024}, fmin: 20;
- dynamic hop preprocessing: hop choose depending on the length of the sound (min_hop:16, max_hop: 1024);</p>
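<p>A minimal sketch of the power-compression trick from point 1: apply S ** 0.125 to the mel power spectrogram instead of a log transform. The extraction parameters below match one of the settings listed above:</p>
<pre><code>import librosa
import numpy as np

def mel_power_compressed(y, sr=44100, n_mels=128, n_fft=2560, hop=512, power=0.125):
    """Mel spectrogram compressed with S ** 0.125 instead of log."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=n_mels, fmin=20)
    return np.power(mel, power).astype(np.float32)
</code></pre>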
<p>4) Mixup with beta distribution (0.2, 0.2);
5) Class_weights in BCE calculated by the ratio of the number of classes predicted as top1 on validation;
6) Train with different seeds and choosing the best model fold by fold ^);
7) The ensemble of 7 relatively uncorrelated models.</p> |
Freesound Audio Tagging 2019 | 13rd solution summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi,</p>
<p>As I have made slides about my solution, I would like to post them here. Thanks to the organizer and staff for this great competition.</p>
<p>Here is the brief summary.</p>
<p>Inference:</p>
<ol>
<li>Raw audio to log mel.</li>
<li>Apply sliding window to the log mel and predict each windows by CNN.</li>
<li>Feed temporal prediction given step2 to RNN</li>
<li>Feed temporal prediction given step2 to Rakeld + ExtraTrees</li>
<li>Average step3 and step4.</li>
</ol>
<p>Training:</p>
<ol>
<li>Train CNN by using only curated data.</li>
<li>Apply sliding window to the log mel in noisy dataset and predict each windows by CNN.</li>
<li>Feed temporal prediction given step2 to RNN. Then, I got pseudo labels for noisy dataset.</li>
<li>Average pseudo labels with original noisy labels by 0.75:0.25.</li>
<li>Train CNN by using all data.</li>
<li>Train some models to follow the inference steps.</li>
</ol>
<p>Details in Training CNN:</p>
<ul>
<li>Densenet 121</li>
<li>Spec Aug</li>
<li>Mixup</li>
<li>CLR + SWA + Snapshot ensemble.</li>
</ul>
<p>I intensively tried SSL approaches such as ICT, but they did not work.</p>
Freesound Audio Tagging 2019 | Solution ready on the github | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is for you while waiting for 2nd stage result :)</p>
<p><a href="https://github.com/daisukelab/freesound-audio-tagging-2019">https://github.com/daisukelab/freesound-audio-tagging-2019</a></p>
<p><img src="https://github.com/daisukelab/freesound-audio-tagging-2019/raw/master/images/data_example.png" alt="title_picture"></p>
<p>Thank you to everybody who organized this competition, and congratulations to the winners.</p>
<p>It was a wonderful opportunity once again to dig deeper into audio data for scene-understanding applications. I gained intuition and understanding for real-world sound-scene-aware applications, which I hope to see in the electric appliances around us in the future.</p>
<p>I'd like to share what I did during this competition, and how it went, beyond the public kernels.</p>
<p>Topics Summary:
- Audio preprocessing to create mel-spectrogram based features, 3 formats were tried.
- Audio preprocessing also tried background subtraction; focusing on sound events... not sure it worked or not.
- Tried to mitigate curated/noisy audio differences; inter-domain transfer regarding <em>frequency envelope</em>.
- Soft relabeling, by using co-occurrence probability of labels.
- Ensemble for multi-label problem; per-class per-model 2D weighting for better averaging.</p> |
Freesound Audio Tagging 2019 | 7th place solution with commentary kernel | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Freesound 7th place solution</h1>
<p>Thank you to all the competitors, the Kaggle team, and the host of this competition!
We enjoyed the competition a lot and learned many things. </p>
<p>We especially want to thank <a href="/daisukelab">@daisukelab</a> for his clear instruction with great kernels and datasets, <a href="/mhiro2">@mhiro2</a> for sharing excellent training framework, and <a href="/sailorwei">@sailorwei</a> for showing his Inception v3 model in his public kernel.</p>
<p>The points where the solution of our team seems to be different from other teams are as follows.</p>
<p>Key points: Data Augmentation, Strength Adaptive Crop, Custom CNN, RandomResizedCrop</p>
<p>The detailed explanation is in the following kernel, so please read it. </p>
<p><a href="https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution">https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution</a> </p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/564097/13702/Shirogane_solution.png" alt="pipeline"> </p>
<h2>Data Augmentation</h2>
<p>We created 7 augmented training dataset with <a href="http://sox.sourceforge.net/">sox</a>.</p>
<ul>
<li>fade</li>
<li>pitch * 2</li>
<li>reverb</li>
<li>treble & bass * 2</li>
<li>equalize </li>
</ul>
<p>We trained a total of 4970 * 8 samples without leaks.</p>
<h2>Crop policy</h2>
<p>We use random crops because we use a fixed image size (128 * 128).
Random crops got a slightly better CV than cropping the first 2 seconds.
At first, crops were sampled uniformly, as in mhiro's kernel.</p>
<h3>Strength Adaptive Crop</h3>
<p>Many sound clips have important information in the first few seconds.
Some samples have it in the middle of the clip. However, due to the nature of recording, it is rare to have important information at the end of a sound.
The score drops by about 0.03~0.04 when learning only from the last few seconds.</p>
<p>So, we introduced <strong>Strength Adaptive Crop</strong>.
We preferentially crop the region where the total dB is highest. </p>
<p>This method is very effective because most samples contain important information in places where the sound is loud. </p>
<p>CV 0.01 up <br>
LB 0.004~0.005 up </p>
<p>Detailed code is in <a href="https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution">this kernel</a>.</p>
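<p>For readers who want the idea without opening the kernel, here is a sketch in the spirit of the description above (not the authors' code): sample a crop start position with probability proportional to the total energy inside each candidate window.</p>
<pre><code>import numpy as np

def strength_adaptive_crop(spec, crop_width=128):
    """spec: [n_mels, n_frames] (e.g. dB-scaled) spectrogram."""
    n_frames = spec.shape[1]
    if n_frames <= crop_width:                    # pad short clips
        pad = np.zeros((spec.shape[0], crop_width), dtype=spec.dtype)
        pad[:, :n_frames] = spec
        return pad
    frame_energy = spec.sum(axis=0)
    csum = np.concatenate([[0.0], np.cumsum(frame_energy)])
    window_energy = csum[crop_width:] - csum[:-crop_width]   # energy of each window
    probs = window_energy - window_energy.min() + 1e-6
    probs /= probs.sum()
    start = np.random.choice(len(probs), p=probs)
    return spec[:, start:start + crop_width]
</code></pre>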
<h2>model structure</h2>
<ul>
<li>InceptionV3 3ch </li>
<li>InceptionV3 1ch </li>
<li>CustomCNN </li>
</ul>
<p>CustomCNN is carefully designed to the characteristics of the sound. The details are in <a href="https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution">this kernel</a></p>
<h2>Augmentation in batch</h2>
<ul>
<li>Random erasing or Coarse Dropout</li>
<li>Horizontal Flip</li>
<li>mixup</li>
<li>Random Resized Crop (only InceptionV3) </li>
</ul>
<h2>Training strategy</h2>
<h3>TTA for validation</h3>
<p>When RandomResizedCrop is used, the validation score fluctuates, so if validation TTA is not used, an appropriate epoch cannot be selected. Therefore, we used TTA for validation to ensure that validation can be properly evaluated. </p>
<h3>Stage 1 : pretrain with noisy data(warm-up)</h3>
<p>We used noisy data to 'pre-train' our model.
LB : about 0.01 up</p>
<h3>Stage 2 : Train with curated data 1</h3>
<p>We used curated data to 'finetune' the models, which were 'pre-trained' with noisy data.</p>
<h3>Stage 3 : Train with curated data 2(Inception only)</h3>
<p>We used stage 2 best weight to stage 3 training without random resized crop. We don't know why, but lwlrap goes up without random resized crop in Inception model.</p>
<h3>score</h3>
<p>Accurate single model public score not measured.
| | public|private |
|-----------|------------|------------|
|Inception 3ch|0.724 over|0.73865|
|Inception 1ch|??|0.73917|
|CustomCNN|0.720 over|0.73103|</p>
<h2>Ensemble</h2>
<p>(Inception 3ch + Inception 1ch + CustomCNN) / 3 </p>
<p>private score:0.75302</p> |
Freesound Audio Tagging 2019 | Video of the 18th place solution and general discussion of the competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Earlier this month I gave a talk about the competition at ML Trainings. You can find the video at the link below. I described the general approach to audio processing and covered my solution as well as ideas used by the winners.
<a href="https://youtu.be/m0-JHQeLf3k">https://youtu.be/m0-JHQeLf3k</a> </p>
<p>The talk is part of a meetup organized by ML Trainings (<a href="https://www.facebook.com/groups/1413405125598651/">https://www.facebook.com/groups/1413405125598651/</a>) – regular Moscow-based meetups backed by the ods.ai community. Usually the discussions about completed competitions are held in Russian, but recently some of them have started to be held in English, so you can find some English videos on the YouTube channel as well (and there will be more in the future).</p> |
Freesound Audio Tagging 2019 | 13th place solution [public LB] - brief summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is my second year taking part in Freesound audio tagging; thanks to the organizers for this interesting and close-to-reality audio competition. Thanks to <a href="/daisukelab">@daisukelab</a> <a href="/mhiro2">@mhiro2</a> <a href="/ceshine">@ceshine</a> <a href="/jihangz">@jihangz</a>, and many kagglers for the starter kernels and discussions.</p>
<p>Earlier single model kernel without cv is as below:
<a href="https://www.kaggle.com/sailorwei/fat2019-2d-cnn-with-mixup-lb-0-673">https://www.kaggle.com/sailorwei/fat2019-2d-cnn-with-mixup-lb-0-673</a>
Discussion about single model LB:
<a href="https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/91881#latest-547036">https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/91881#latest-547036</a>
<a href="https://github.com/sailor88128/dcase2019-task2">https://github.com/sailor88128/dcase2019-task2</a></p>
<p>Final submission:
- On curated dataset, 5 fold cv, inception V3, single model LB is 0.691 and 5 model average is 0.713;
- On curated dataset and noisy dataset, with specific loss function, 5 fold cv, 8 layer cnn, 5 model average 0.678;
- Geometric average of these two models, 0.73+.</p>
<p>Techniques we used:
1. PCEN spectrogram; did not try many parameters, not sure if it works for this competition;
2. mixup, about a 0.01 increase in LB;
3. SpecAugment showed no increase, while RandomResizedCrop gave a better LB increase that is hard to explain for audio;
4. CV and TTA, about a 0.05 increase for the primary single model;
5. MixMatch got a much lower LB with the same model; need to check the code;
6. LR schedules: CyclicLR and cosine annealing did not show much difference.</p>
<p>Others:
1. While lots of kagglers said shallow CNNs work well, we did not get an effective shallow CNN. We tried AlexNet and VGG, all with low local lwlrap :(
2. Different CNNs were used at first, but we did not save them well for the final model ensemble;
3. The noisy-label dataset was not used well. We just used a weighted loss function to train the noisy-labeled data together with the curated dataset. Our MixMatch implementation for this task may be wrong in some detail and should be checked.</p> |
Google Landmark Retrieval 2019 | 1st/3rd place solution by Team smlyaka | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>(Updated on June 10: Priv LB was updated. Finally, our team won 1st on the retrieval challenge!)
(Updated on June 12: Our solution code and paper are available. We plan to refactor them after the CVPR conference. If you are interested in the code, I strongly recommend you to read it a few weeks later. Also, our model parameters and cleaned dataset will be available. Reproducing our work is very welcome! (code) <a href="https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution">https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution</a>, (paper) <a href="https://arxiv.org/abs/1906.04087">https://arxiv.org/abs/1906.04087</a>, (poster) <a href="https://www.dropbox.com/s/y3c3ovdiizz59j4/cvpr19_smlyaka_slides.pdf?dl=0">https://www.dropbox.com/s/y3c3ovdiizz59j4/cvpr19_smlyaka_slides.pdf?dl=0</a>)</p>
<hr>
<p>Congratulations to all participants and winners! And special thanks to Filip Radenovic for his great paper and open-source code. We learned a lot from them.
We two worked very hard and enjoyed this competition very much.</p>
<p>Here is a brief overview of our solution. (Now we are working hard for preparing the detail description toward the paper submission deadline.)</p>
<ol>
<li>Automated data cleaning using train labels, global descriptors, and spatial verification (SV). To prepare a cleaned train set, we selected only the images whose top-k neighbors in descriptor space share the same landmark_id and are verified by DELF+SV. This dataset cleaning is essential to train networks since the original train set is too noisy to train networks properly. Training with the cleaned dataset significantly boosts our score compared to training only with the train data from the previous year's competition.</li>
<li>We use FishNet-150, ResNet-101, and SEResNeXt-101 as backbones trained with cosine-based softmax losses, ArcFace and CosFace. Besides, various tricks are used such as aspect preserving of input image, cosine annealing, GeM pooling, and finetuning at full resolution on the last epoch with freezing Batch Normalization.</li>
<li>For the recognition task, accumulating top-k (k=3) similarity in descriptor space and inliers-count by spatial verification helps a lot.</li>
<li>For the retrieval task, reranking with using (3) significantly improved our score in stage2.</li>
</ol>
<p>DBA, alpha-QE, Iterative DELF-QE (proposed by Layer6 AI team last year), Diffusion [Yang+'19] or Graph traversal (EGT, rEGT) [Chang+'19] work in stage1, but they don't work well in stage2.</p>
<p>Our experimental results for the retrieval/recognition are attached. In table 1, all results are based on euclidean search only. No use of PCA/whitening, DBA, QE, Diffusion or Graph Traversal.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/546392/13423/table_1.png" alt="Table1">
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/546392/13424/table_2.png" alt="Table2">
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/546392/13425/table_3.png" alt="Table3"></p>
<h2>Post-processing for the recognition task</h2>
<p>Supplement to Table 3. We treated frequently occurring landmarks as distractors and updated their confidence scores in a post-processing step. Specifically, the corresponding landmark's confidence score is calculated by multiplying its frequency by -1. We treated landmarks that appeared more than 30 times as distractors.</p>
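<p>A minimal sketch of this post-processing step, assuming predictions are stored as a mapping from test image id to (landmark_id, confidence):</p>
<pre><code>from collections import Counter

def penalize_frequent_landmarks(predictions, max_count=30):
    """Landmarks predicted for more than `max_count` test images are treated
    as distractors; their confidence becomes -1 * frequency, so the most
    frequent ones are ranked lowest."""
    freq = Counter(lid for lid, _ in predictions.values())
    out = {}
    for img_id, (lid, conf) in predictions.items():
        if freq[lid] > max_count:
            conf = -1.0 * freq[lid]
        out[img_id] = (lid, conf)
    return out
</code></pre>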
<h2>Things that didn't work</h2>
<ul>
<li><a href="https://arxiv.org/abs/1905.00292">AdaCos: Adaptively Scaling Cosine Logits for Effectively Learning Deep Face Representations</a></li>
<li><a href="https://arxiv.org/abs/1903.10663">Combination of Multiple Global Descriptors for Image Retrieval</a></li>
<li>Label smoothing</li>
<li>SPoC, MAC, R-MAC, Compact bilinear pooling</li>
<li>Data Augmentation from AutoAugment paper (imagenet setting)</li>
<li>Warmup learning rate scheduling</li>
<li>Under Sampling</li>
<li>Remove distractor by Places365 indoor/outdoor classification</li>
<li>Meta classifier</li>
<li>Gaussian SARE</li>
<li>... and many other methods!</li>
</ul>
<p>Best,</p>
<p>Team smlyaka.</p>
<p>.... And I want to thank my team mate, lyakaap! Many congrats lyakaap for becoming Grandmaster ;-)</p> |
Google Landmark Retrieval 2019 | 15th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>15th Place Solution</h1>
<p>Congratulations to the winners! And I would like to express my gratitude to all the people involved in this competition.</p>
<p>This is our 15th place solution.
I am busy working on the iMaterialist Competition, so I'll explain our solution only with a brief diagram.
"Reference" is a collection of links that we referred to when implementing.</p>
<h1>Training</h1>
<p>We trained models using 1st stage train dataset.</p>
<p>Reference:
- <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">https://github.com/filipradenovic/cnnimageretrieval-pytorch</a>
- <a href="https://github.com/SeuTao/Humpback-Whale-Identification-Challenge-2019_2nd_palce_solution">https://github.com/SeuTao/Humpback-Whale-Identification-Challenge-2019_2nd_palce_solution</a></p>
<p><img src="https://gist.githubusercontent.com/chmod644/20feec5b9cc5bcc66491c698fec5e860/raw/891ad2e9c48daebe61a41d142b71082cf9269ff6/ffecd280-8798-11e9-9520-d94c7c62797d.png" alt="network_and_loss"></p>
<h1>Submit</h1>
<p>Reference:
- <a href="https://www.kaggle.com/c/landmark-retrieval-challenge/discussion/57855">https://www.kaggle.com/c/landmark-retrieval-challenge/discussion/57855</a>
- <a href="https://pdfs.semanticscholar.org/79bf/2aedcd89c116225f8e4b72fa66f7df1f7d97.pdf">https://pdfs.semanticscholar.org/79bf/2aedcd89c116225f8e4b72fa66f7df1f7d97.pdf</a>
- <a href="https://github.com/facebookresearch/faiss">https://github.com/facebookresearch/faiss</a>
- <a href="https://github.com/ducha-aiki/manifold-diffusion">https://github.com/ducha-aiki/manifold-diffusion</a></p>
<p><img src="https://gist.githubusercontent.com/chmod644/20feec5b9cc5bcc66491c698fec5e860/raw/e05e65c53f6506f236253d59c1bba7d97c743cac/1f2f3400-8788-11e9-96d7-337836617bd9.png" alt="inference"></p> |
Google Landmark Recognition 2019 | 20th place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hola,</p>
<p>TL;DR: SE-ResNext50, softmax 92k classes, 224x224 crops and data cleaning on the test set using pretrained Mask R-CNN and ImageNet classifier. The code is here: <a href="https://github.com/artyompal/google_landmark_2019">https://github.com/artyompal/google_landmark_2019</a></p>
<p>So, my solution is simple, it's a single model (SE-ResNext50), single fold. My code is very similar to my kernel: <a href="https://www.kaggle.com/artyomp/resnet50-baseline">https://www.kaggle.com/artyomp/resnet50-baseline</a>. I used 224x224 crops from 256x256 images. I trained only on classes with >= 10 samples (92740 classes). Surprisingly, it wasn't much slower than training on 18k or 35k classes (maybe 20% slower).</p>
<p>This is the config of my model: <a href="https://github.com/artyompal/google_landmark_2019/blob/master/models/v1.3.8.seresnext50_92740_classes.yml">https://github.com/artyompal/google_landmark_2019/blob/master/models/v1.3.8.seresnext50_92740_classes.yml</a> So, it adds a bottleneck layer with 1024 neurons, uses SGDR from <a href="https://arxiv.org/pdf/1608.03983.pdf">https://arxiv.org/pdf/1608.03983.pdf</a> and rectangular crops from <a href="https://arxiv.org/pdf/1812.01187.pdf">https://arxiv.org/pdf/1812.01187.pdf</a>. </p>
<p>It resulted in 0.128 on public LB after almost 5 days of training on a single 1080Ti.</p>
<p>Obviously, such a low score was caused by both a noisy train set and a noisy test set. I had no time to clean the train set and re-train, so I only cleaned the test set.</p>
<p>The first stage of cleaning was just No Landmark Detector from this kernel: <a href="https://www.kaggle.com/rsmits/keras-landmark-or-non-landmark-identification">https://www.kaggle.com/rsmits/keras-landmark-or-non-landmark-identification</a>. I just got rid of predictions where any of top3 results were "not landmark".</p>
<p>The second stage. I noticed that there's a lot of portraits, selfies, and pictures of cars in the test set. So I took the pretrained Faster R-CNN from the new TorchVision release (checkout object_detection.py) and got rid of images with too many humans or cars (confidence >0.5, total area > 0.4). Check this out: <a href="https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub2.py">https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub2.py</a></p>
<p>The third stage. I noticed that there's a lot of random images: dogs, helicopters, flowers and so on. So I took a pretrained ImageNet classifier. I used ResNet50 (I wanted to use something better, but my GPUs were busy doing something else, so I needed a simple network; ideally, I'd take some smarter model). It's in the image_classifier2.py. So I rejected images with any of those classes and confidence > 0.7 (<a href="https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub3.py">https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub3.py</a>)</p>
<p>The first stage gave me 0.02 on the public LB, but didn't help at all on the private LB. I wonder if the public/private split was made by class? The private part of the stage-2 test set seems to have cleaner data.</p>
<p>In total, before cleaning I had 0.128 on public LB, 0.144 on private LB. After cleaning I got 0.162 on public LB, 0.148 on private LB.</p>
<p>What didn't work: I implemented KNN on top of SE-ResNext50 features and it worked okay locally (GAP 0.2 on all classes), but it performed like crap on the leaderboard (about 0.00002 or something). I must have a bug somewhere.</p>
<p>PS: thanks to <a href="/pudae81">@pudae81</a> for this repo: <a href="https://github.com/pudae/kaggle-humpback">https://github.com/pudae/kaggle-humpback</a>. I borrowed some code from there.</p>
<p>Thanks for reading this and I hope this helps in future competitions!
Artyom</p> |
Google Landmark Retrieval 2019 | Share your approaches? | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to break the silence here and start to share what we have done, and hopefully more of you will share as well, especially the top teams that we can learn from :)</p>
<p>In fact, we have nothing fancy in the end (all the fancy things didn't work for us this time, unfortunately :( ). We followed a similar path to last year's winner's approach, and the final model is an ensemble of three (Inception v3 + attention & GeM pooling, as well as David's Xception and SEResNet101 with VLAD pooling, each 512-dimensional). </p>
<p>We used DBA for each model to enhance the quality of the image vectors, and finally went through one round of QE using the concatenated vectors from the three models, which boosted performance quite a bit. </p>
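<p>For readers unfamiliar with these two tricks, here is a minimal sketch of plain average DBA and one round of average query expansion on L2-normalized descriptors. It illustrates the idea only; the exact variants and neighbour counts the team used are not stated above:</p>
<pre><code>import numpy as np

def l2norm(x):
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)

def dba(index_desc, k=10):
    """Database-side augmentation: replace each index descriptor by the
    re-normalized average of itself and its k nearest index neighbours."""
    index_desc = l2norm(index_desc)
    sims = index_desc @ index_desc.T
    nn = np.argsort(-sims, axis=1)[:, :k + 1]      # includes the point itself
    return l2norm(index_desc[nn].mean(axis=1))

def query_expansion(query_desc, index_desc, k=10):
    """One QE round: average each query with its top-k retrieved index
    descriptors, then search again with the expanded query."""
    query_desc, index_desc = l2norm(query_desc), l2norm(index_desc)
    sims = query_desc @ index_desc.T
    nn = np.argsort(-sims, axis=1)[:, :k]
    expanded = l2norm(query_desc + index_desc[nn].sum(axis=1))
    return expanded @ index_desc.T                 # refreshed similarity matrix
</code></pre>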
<p>We tried different poolings, different ensemble approaches, but they didn’t give much difference or improvement, and the best one seemed to be GeM (with some CNN Attention) and VLAD pooling, with one round of DBA and one round of QE, that’s it. When we trained models, we started with classification training, followed by fine tuning using Siamese fashion, which further improves a bit more. </p>
<p>However, I guess the biggest (possible) mistake was that we trained using the old train set only and ignored the new train set, mainly because the stage 1 LB showed so much correlation with the old train set, and we wrongly assumed that stage 2 would be similar; the final stage 2 turned out to come from an image set whose distribution was very different from the old train set (much noisier) and much closer to the new one. </p>
<p>In the end, we found a very good approach to remove most of the distractors images from test + index, but surprisingly for us, simply removing those distractors didn’t help at all for our current pipeline (DBA + QE). It could be because non-distractors will match well with non-distractors, and won’t be affected much by distractors that we are trying to remove. </p>
<p>I found out this excellent QE code was very useful - <a href="https://github.com/fyang93/diffusion">https://github.com/fyang93/diffusion</a>, only that they shouldn’t use ANN at all even for large dataset, which unnecessarily increases computations. </p> |
Google Landmark Retrieval 2019 | Our solution: 0.10439 private 0.08832 public (code available) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution consists of two steps, either single model (our best single model achieves 0.06335 public 0.08813 private), and ensembling multiple models which achieves 0.10439 private 0.08832 public.</p>
<h2><a href="https://github.com/antorsae/landmark-retrieval-2019/blob/master/submission-trained.ipynb">Single model CNN extraction + NN search</a></h2>
<p>This notebook extracts the last convolutional layer of a given architecture, applies GeM pooling and performs L2 normalization.</p>
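<p>For reference, a minimal PyTorch sketch of a GeM pooling layer as used above (a generic re-implementation, not the notebook's exact code):</p>
<p><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    # generalized-mean pooling over the spatial dimensions of a conv feature map
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)   # learnable exponent: p=1 is average pooling
        self.eps = eps

    def forward(self, x):                          # x: (batch, channels, h, w)
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)
        return F.normalize(x.flatten(1), dim=1)    # L2-normalized global descriptor</code></p>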
<p>It uses augmentation (images are LR flipped), so features for both the index and query images are duplicated.</p>
<p>Once features are extracted, it performs a regular nearest-neighbour search using PCA, whitening and L2 normalization, and the results are ready for a submission.</p>
<p>It also saves the results of nearest neighbour search for ensembling.</p>
<h2><a href="https://github.com/antorsae/landmark-retrieval-2019/blob/master/ensemble.ipynb">Ensembling</a></h2>
<p>This notebook takes the results of nearest neighbours search for each query (and flipped LR queries)
and aggregates distances of different runs (different architectures) and builds a submission accordingly.</p>
<p>For flipped LR images, we pick the minimum distance.</p>
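<p>A rough sketch of this aggregation (assuming each run stores a distance matrix for the original and flipped version of every query; the array layout here is an assumption, not the notebook's exact format):</p>
<p><code>import numpy as np

def aggregate(dists_per_run, top_k=100):
    # dists_per_run: list over architectures, each of shape (n_queries, 2, n_index)
    # where axis 1 holds the original query and its LR-flipped version
    per_run = [d.min(axis=1) for d in dists_per_run]   # keep the smaller distance of the two flips
    total = np.sum(per_run, axis=0)                    # accumulate distances across architectures
    return np.argsort(total, axis=1)[:, :top_k]        # ranked index image ids per query</code></p>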
<h2>Some results</h2>
<p><img src="https://user-images.githubusercontent.com/1516746/58861894-bc6a6980-86af-11e9-8261-21b4ebc26bb3.png" alt="image">
<img src="https://user-images.githubusercontent.com/1516746/58861581-f7b86880-86ae-11e9-8dd8-de50cf35af97.png" alt="image">
<img src="https://user-images.githubusercontent.com/1516746/58861667-34845f80-86af-11e9-9493-c012847fe0d0.png" alt="image"></p>
<p>All code here: <a href="https://github.com/antorsae/landmark-retrieval-2019">https://github.com/antorsae/landmark-retrieval-2019</a></p> |
Google Landmark Recognition 2019 | Team JL Solution Summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers and all other participants. Here is a summary of our solution.</p>
<p>Our final solution consists of the following 2 retrieval models (1 global model fine-tuned on the training set and 1 local model that is publicly available) and 3 steps.</p>
<p>Model-1: A global CNN retrieval model. By this we mean a model that takes an input image and produces a global descriptor by taking the output of the last convolutional layer and applying a global pooling method followed by L2 normalization, as is common for retrieval tasks. For simplicity, we will not include all the details here (more information can be added later if needed). In hindsight, we think it is possible that a simple ResNet+GeM model should do the job just fine, but we tried a couple of combinations with backbones such as ResNet, ResNeXt, SE-ResNet, SE-ResNeXt, SENet with attention and pooling methods such as GeM, RMAC, MAC, SPoC, as well as multiple concatenations of them. We take the "raw output" as descriptor (i.e. without any post-processing techniques such as DBA, QE, or Diffusion, except for concatenations in which case PCA and whitening is used to reduce the dimension to 2048 or 4096) and used the public leaderboard of the retrieval challenge for validation.</p>
<p>Model-2: A local CNN retrieval model. This is just the Detect-to-Retrieval (D2R) model released by Andre Araujo at this post <a href="https://www.kaggle.com/c/landmark-recognition-2019/discussion/92942#latest-536570">https://www.kaggle.com/c/landmark-recognition-2019/discussion/92942#latest-536570</a></p>
<p>Step-1: Use Model-1 to match all test images (about 120K) vs. all train images (about 4.13M). For each test image, accumulate similarities for labels in the top-5 matched images, and take the label with the highest score as the prediction. This achieves private/public 0.25138/0.21534.</p>
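<p>A minimal sketch of this voting-by-similarity step (illustrative Python, not the team's code; it assumes L2-normalized descriptors):</p>
<p><code>from collections import defaultdict
import numpy as np

def predict_label(test_feat, train_feats, train_labels, top_k=5):
    # accumulate cosine similarities of the top-k train neighbours per landmark label
    sims = train_feats @ test_feat                  # all descriptors are L2-normalized
    top = np.argsort(-sims)[:top_k]
    scores = defaultdict(float)
    for i in top:
        scores[train_labels[i]] += sims[i]
    label = max(scores, key=scores.get)
    return label, scores[label]                     # the score is the confidence used for GAP</code></p>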
<p>Step-2: Same as Step-1, but for each test image, keep the top-20 matched images from the training set. Then a second match is performed using Model-2; accumulate D2R scores for labels in the top-5 closest images and take the label with the highest score as before. This achieves private/public 0.31870/0.26782.</p>
<p>Step-3: This is perhaps the most important step in which, as everyone knows, something needs to be done to separate landmark images from distractors. Instead of training a separate classifier using external data, we made the following assumptions. (1) All the landmark images already have correct predictions, and the reason the score is low is that the evaluation metric is GAP and a lot of the distractor images are placed higher than them in the ranked list of predictions. Of course the first part of the assumption is not true, and we tried some methods to replace the labels of some of the potentially wrong predictions, but it didn't work. (2) In the ranked list of predictions, all top-500 images are real landmark images with correct predictions, and all images below rank 20000 are distractors. So if there is a distractor in the top-500, or if there is a real image below 20000, then let them be there and there is nothing we can do about it.</p>
<p>With these two assumptions, we perform the following on the Step-2 prediction list. Starting from the rank 1 test image, match it vs. all other test images below until rank 20000 using Model-2; if the D2R score is greater than a certain threshold (23 in our case), then pull the matched image up (along with its predicted label) and place it just below rank 1. It is possible that the image being pulled up has a different predicted label, but we don't change it. We do this for two searches, where the first search expands the top-500 to about 770, and the second search expands 770 to about 800. Since the second search only adds about 20 to 30 new images, we stop it there. This boosts the score to private/public 0.36787/0.31626.</p>
<p>Next, we do the same to the Step-1 prediction list, this time with the top-300 images (again two searches) because the Step-1 score was lower than Step-2, and it expands the list to about 520 images.</p>
<p>Finally, we merged the two new prediction lists as follows (New-List-1 has about 800 "safe images" at the top, whereas New-List-2 has about 520 "safe images" at the top). Starting from the rank 1 image in New-List-1, place it straight at rank 1 in the merge-list. Then take the rank 1 image in New-List-2 and place it as rank 2 in the merge-list if it's not already there, otherwise go back to New-List-1. Continue this interleaving process until all "safe images" have been absorbed, and append the rest using New-List-1. The merge-list has about 840 "safe images", i.e. New-List-2 only adds about 40 "safe images" to New-List-1, but it could potentially change the ranks of some of the "safe images" in New-List-1 during the insertion process. This slightly increased the score to private/public 0.37601/0.32101, which was our final score.</p>
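<p>To make the pull-up procedure concrete, here is a simplified single-pass sketch (the <code>d2r_score</code> helper and the exact bookkeeping of the safe region are assumptions, not the team's implementation):</p>
<p><code>def pull_up(preds, d2r_score, safe_top=500, max_rank=20000, thr=23):
    # preds: list of (test_id, label, confidence) sorted by confidence, descending
    # d2r_score(a, b): D2R local-feature matching score between two test images (Model-2)
    preds = list(preds)
    i = 0
    while i < min(safe_top, len(preds)):
        anchor_id = preds[i][0]
        pulled, j = [], i + 1
        while j < min(max_rank, len(preds)):
            if d2r_score(anchor_id, preds[j][0]) > thr:
                pulled.append(preds.pop(j))          # the pulled image keeps its own label
            else:
                j += 1
        preds[i + 1:i + 1] = pulled                  # place matches just below the anchor
        i += 1
    return preds                                     # running it again gives the second search</code></p>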
<p>P.S. We also trained another DNN model with only 1 hidden layer using features extracted by Model-1 as input and applied the steps as above to add more "safe images" to the predictions from Step-3. This scores 0.37936 on the private leaderboard, but unfortunately we did not select it because it degraded the public leaderboard score to 0.32100.</p> |
Google Landmark Recognition 2019 | 2nd Place and 2nd Place Solution to Kaggle Landmark Recognition and Retrieval Competition 2019 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for the great challenge, our solution has been released to arxiv. Further, we will open our source code. Thanks.</p>
<p>arxiv link: <a href="https://arxiv.org/pdf/1906.03990.pdf">https://arxiv.org/pdf/1906.03990.pdf</a>
source code: <a href="https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark">https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark</a></p> |
Freesound Audio Tagging 2019 | 4th solution: Multitask Learning, Semi-supervised Learning and Ensemble | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We wrote up a technical report of our solution, so you can check our solution in detail when it's published. Here I'd like to describe a brief summary of our solution. Our score is public 0.752 (3rd)/private 0.75787 (4th before pending). <br>
For this competition, we used 3 strategies mainly. <br>
1. Multitask learning with noisy labels.
2. Semi-supervised learning (SSL) with noisy data.
3. Averaging models trained with different time windows.
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/556685/13619/Fig1_3.png" alt="Multitask pipeline"></p>
<h2>Models</h2>
<ul>
<li>ResNet34 with log-mel</li>
<li>EnvNet-v2 with waveform.</li>
</ul>
<h2>Preprocessing</h2>
<ul>
<li>128 mels</li>
<li>128 Hz (347 STFT hop size)</li>
</ul>
<p>Log-mel is converted from power to dB after all augmentations are applied. Thereafter, it is normalized by the mean and standard deviation of each sample.</p>
<h2>Augmentations</h2>
<h3>Log-mel</h3>
<ul>
<li>Slicing</li>
<li>MixUp</li>
<li>Frequency masking</li>
<li>Gain augmentation</li>
</ul>
<p>We tried 2, 4 and 8 seconds (256, 512 and 1024 dimensions) as the slicing length and 4 seconds scored the best. Resizing, warping, time masking and white noise didn't work.</p>
<ul>
<li>Additional slicing</li>
</ul>
<p>Expecting a stronger augmentation effect, after the basic slicing we shorten data samples to between 25 and 100% of the basic slicing length by additional slicing, and extend them back to the basic slicing length by zero padding. </p>
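<p>A rough sketch of basic and additional slicing on a log-mel spectrogram (function and parameter names are ours, not the authors'):</p>
<p><code>import numpy as np

def random_slice(spec, slice_len=512):
    # basic slicing: random crop of slice_len frames from a log-mel spectrogram (mels x frames)
    if spec.shape[1] > slice_len:
        start = np.random.randint(0, spec.shape[1] - slice_len)
        spec = spec[:, start:start + slice_len]
    return spec

def additional_slice(spec, slice_len=512, min_ratio=0.25):
    # keep a random 25-100% portion of the basic slice, then zero-pad back to slice_len
    keep = np.random.randint(int(slice_len * min_ratio), slice_len + 1)
    spec = random_slice(spec, keep)
    pad = slice_len - spec.shape[1]
    left = np.random.randint(0, pad + 1)
    return np.pad(spec, ((0, 0), (left, pad - left)), mode='constant')</code></p>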
<h3>Waveform</h3>
<ul>
<li>Slicing</li>
<li>MixUp</li>
<li>Gain augmentaion</li>
<li>Scaling augmentation</li>
</ul>
<p>We tried 1.51, 3.02 and 4.54 seconds (66,650, 133,300 and 200,000 dimensions) as the slicing length and 4.54 seconds scored the best. </p>
<h2>Training</h2>
<h3>ResNet</h3>
<ul>
<li>Adam</li>
<li>Cyclic cosine annealing (1e-3 -> 1e-6)</li>
<li>sigmoid and binary crossentropy</li>
</ul>
<h3>EnvNet</h3>
<ul>
<li>SGD</li>
<li>Cyclic cosine annealing (1e-1 -> 1e-6)</li>
<li>(sigmoid and binary crossentropy) or (SoftMax and KL-divergence)</li>
</ul>
<h2>Postprocessing</h2>
<p>Prediction using the full length of the audio input scores better than prediction using slicing TTA. This may be because the components important for classification are concentrated at the beginning of the audio samples. Indeed, prediction with slices from the beginning part scores better than prediction with slices from the latter part. We found padding augmentation to be an effective TTA. This is an augmentation method that applies zero padding of various lengths to both sides of the audio samples and averages the prediction results. We think this method has the effect of emphasizing the start and end parts of the audio samples.</p>
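<p>A minimal sketch of padding TTA (the pad lengths below are placeholders; the actual ones were chosen based on CV):</p>
<p><code>import numpy as np

def padding_tta(wave, predict, pad_lengths=(0, 16000, 32000)):
    # average predictions over copies of the clip zero-padded on both sides
    preds = []
    for pad in pad_lengths:
        padded = np.pad(wave, (pad, pad), mode='constant')
        preds.append(predict(padded))            # predict() returns per-class probabilities
    return np.mean(preds, axis=0)</code></p>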
<h2>Multitask learning</h2>
<p>In this task, the curated data and noisy data are labeled in a different manner, so treating them as the same degrades model performance. To tackle this problem, we used a multitask learning (MTL) approach. The aim of MTL is to get synergy between 2 tasks without reducing the performance of each task. MTL learns features shared between the 2 tasks and can be expected to achieve higher performance than learning them independently. In our proposal, an encoder architecture learns the features shared between curated and noisy data, and two separate FC layers learn the difference between the two datasets. In this way, we get the advantages of feature learning from noisy data while avoiding the disadvantages of noisy label perturbation. <br>
Multitask module's components: <br>
<code>FC(1024)-ReLU-DropOut(0.2)-FC(1024)-ReLU-DropOut(0.1)-FC(80)-sigmoid</code> <br>
We used BCE as the loss function. The loss weight ratio of curated and noisy was set to 1:1. By this method, CV lwlrap improved from 0.829 to 0.849 and the score on the public LB increased by +0.021.</p>
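<p>A PyTorch sketch of such a two-branch multitask head (the 512-dim encoder output is an assumption based on ResNet34; the branch layout follows the description above):</p>
<p><code>import torch.nn as nn

class MultiTaskHead(nn.Module):
    # shared encoder features feed two separate branches:
    # one supervised with curated labels, one with noisy labels
    def __init__(self, in_dim=512, n_classes=80):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Linear(in_dim, 1024), nn.ReLU(), nn.Dropout(0.2),
                nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(0.1),
                nn.Linear(1024, n_classes))       # BCEWithLogitsLoss supplies the sigmoid
        self.curated_head = branch()
        self.noisy_head = branch()

    def forward(self, feats):
        return self.curated_head(feats), self.noisy_head(feats)</code></p>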
<h2>Semi-supervised learning</h2>
<p>Applying SSL for this competition is difficult because of 2 reasons.
1. Difference of data distribution between labeled data and unlabeled data.
2. Multi-label classification task. </p>
<p>In particular, methods that generate labels online, like Mean Teacher or MixMatch, tend to collapse. We tried pseudo-labels, Mean Teacher and MixMatch, and none of them were successful. <br>
Therefore, we propose an SSL method that is robust to the data distribution difference and can handle multi-label data. For each noisy data sample, we guess the label using the trained model. The guessed label is sharpened by the sharpening function proposed by MixMatch (soft pseudo-label). As the temperature of the sharpening function, we tried values of 1, 1.5 and 2, and 2 was the best. Predictions of the trained model are obtained using a snapshot ensemble over all the folds and cycle snapshots of 5-fold CV. We used MSE as the loss function. We set the loss weight of semi-supervised learning to 20. <br>
By soft pseudo-labeling, the CV lwlrap improved from 0.849 to 0.870. On the other hand, on the public LB, the improvement in score was slight (+0.001). We used predictions of all fold models to generate the soft pseudo-labels, so the high CV may be due to an indirect label leak. However, even if we use labels generated by only the same-fold model, which has no label leak, CV still improved compared to the model without SSL. The model trained with the soft pseudo-label is useful as a component of model averaging (+0.003 on the public LB).</p>
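<p>A sketch of MixMatch-style sharpening adapted to multi-label sigmoid outputs (the exact temperature convention used by the authors may differ from the 1/T form below):</p>
<p><code>import numpy as np

def sharpen(p, T=0.5):
    # per-class sharpening of averaged sigmoid predictions (multi-label form);
    # T below 1 pushes each probability towards 0 or 1
    a = p ** (1.0 / T)
    b = (1.0 - p) ** (1.0 / T)
    return a / (a + b)</code></p>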
<h2>Ensemble</h2>
<p>For model averaging, we prepared models trained with various conditions. Especially, we found that averaging with models trained with different time window is effective. We used models below. In order to reduce prediction time, the cycles and padding lengths used for the final submissions and averaging weights were chosen based on CV.
1. ResNet34 slice=512, MTL
2. ResNet34 slice=512, MTL, SSL
3. ResNet34 slice=1024, MTL
4. EnvNet-v2 slice=133,300, MTL, sigmoid
5. EnvNet-v2 slice=133,300, MTL, SoftMax
6. EnvNet-v2 slice=200,000, MTL, SoftMax</p>
<h2>Comparison</h2>
<p>| condition | CV lwlrap |
|:------|:------:|
| model #1 | 0.868 |
| model #2 | 0.886 |
| model #3 | 0.862 |
| model #4 | 0.815 |
| model #5 | 0.818 |
| model #6 | 0.820 |
| #1 + #3 | 0.876 |
| #1 + #2 + #3 | 0.890 |
| #4 + #5 + #6 | 0.836 |
| submission 1 (#1 + #2 + #3 + #4 + #5 + #6) | 0.896 |
| submission 2 (no pad TTA) | 0.895 |</p>
<h2>Code</h2>
<p>I did all model training on kaggle kernels. So I can share all of them. Please note that code readability is low. For example, DataLoader class is named "MFCCDataset" but it actually loads log-mel.
<a href="https://www.kaggle.com/osciiart/freesound2019-solution-links?scriptVersionId=16047723">link to the summary of codes</a> <br>
The GitHub repository is available at <a href="https://github.com/OsciiArt/Freesound-Audio-Tagging-2019">here</a>. </p>
<h2>Public LB history log</h2>
<p>| model | CV | Public LB |
|:------|:------:|:------:|
| ResNet18, MixUp, freq masking, slice=256 | 0.819 | 0.669 |
| <strong>ResNet34</strong>, MixUp, freq masking, slice=256 | 0.819 | 0.676 |
| ResNet18, MixUp, freq masking, slice=256, <strong>MTL</strong> | 0.824 | 0.690 |
| ResNet34, MixUp, freq masking, slice=256, MTL, <strong>snapshot ensemble cycle 1-4</strong> | 0.853 | 0.715 |
| ResNet34, MixUp, freq masking, gain, slice=256, MTL, snapshot ensemble cycle <strong>1-8</strong> | 0.862 | 0.718 |
| ResNet34 + <strong>EnvNet</strong> | 0.867 | 0.720 |
| ResNet34, MixUp, freq masking, gain, <strong>slice=512</strong>, MTL, snapshot ensemble cycle 1-8 (#1) | 0.860 | 0.731 |
| <strong>ResNet34 #1</strong> + EnvNet | 0.866 | 0.735 |
| ResNet34 #1 + EnvNet, <strong>padding</strong> | 0.874 | 0.737 |
| ResNet34 #1 + <strong>ResNet34 (SSL, #2)</strong> + <strong>EnvNet #4 + #5 + #6</strong> | 0.887 | 0.743 |
| ResNet34 #1, #2 + EnvNet #4 + #5 + #6, <strong>padding TTA</strong> | 0.893 | 0.746 |
| ResNet34 #1, #2 + <strong>ResNet34(slice=1024, #3)</strong> + EnvNet #4 + #5 + #6, padding TTA | 0.896 | 0.752 |</p>
Google Landmark Recognition 2019 | 31st place solution completely on kernels | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello</p>
<p>Thought people might be interested to know how to run this entire competition on Kaggle kernels and still finish in good positions on LB. So here is my solution to it. I have completed both the Recognition and Retrieval challenges relying solely on Kaggle kernels. Thank you Kaggle!</p>
<p>P.S. I had earlier authored a public <a href="https://www.kaggle.com/mayukh18/dataset-exploration-and-simple-on-the-fly-training">kernel</a> in Keras at the start which showed this can be done. Even Artyom did this in Torch (<a href="https://www.kaggle.com/artyomp/resnet50-baseline">https://www.kaggle.com/artyomp/resnet50-baseline</a>).</p>
<h3>Approach for Kaggle Kernels</h3>
<p>The solution was run completely on Kaggle Kernels. Since the dataset size is around 500 GB, taking advantage of the size of each tar file, the training was done with a download-train-discard methodology. In each mini-epoch, the data generators downloaded one tar file from the 500 tar files, extracted it, created batches from the valid images, and at the end of the epoch deleted all the files. Each tar file downloaded in about 30 secs, and a mini-epoch of about 7000 images took a total of 100-110 secs (including everything).</p>
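<p>A rough sketch of such a download-train-discard generator (paths, commands and the <code>make_batches</code> helper are placeholders, not the author's code):</p>
<p><code>import os, shutil, subprocess

def tar_generator(tar_urls, make_batches):
    # download one tar at a time, yield training batches from it, then delete everything
    for url in tar_urls:
        fname = url.split('/')[-1]
        os.makedirs('tmp_images', exist_ok=True)
        subprocess.run(['wget', '-q', url], check=True)               # roughly 30s per tar
        subprocess.run(['tar', '-xf', fname, '-C', 'tmp_images'], check=True)
        for batch in make_batches('tmp_images'):                      # (images, labels) batches
            yield batch
        os.remove(fname)
        shutil.rmtree('tmp_images')                                   # free the disk for the next tar</code></p>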
<h3>Model Architecture</h3>
<ul>
<li>I used a ResNet50 with weights pretrained on imagenet. Chopped off the final layer. Added a GlobalMaxPooling layer and a subsequent softmax layer at the end for the output. </li>
<li>Used landmarks which have at least 20 images (~55k landmarks)</li>
<li>Resized every image to 192x192 pixels. </li>
<li>Used Adam optimizer with a rate of 0.0002 at the start and reduced it after each ~150 mini-epochs. </li>
<li>Trained the model for one full epoch(500 mini-epochs) after which it started overfitting. Could have tuned the hyperparameters better for the subsequent epochs but there was not much time left for experimentation.</li>
<li>Did not use any external dataset for non landmark images.</li>
<li>TTA somehow gave me worse results.</li>
</ul>
<p>Special mention to Dawnbreaker's <a href="https://www.kaggle.com/c/landmark-recognition-challenge/discussion/57152">solution</a> from 2018 version of the challenge which helped me improve my results.</p>
<p>What didn't work? ResNet101, DenseNet etc. and more. Even ResNet50 with 128x128 images didn't give any good result. I initially started off with a ResNet101 but it failed to match the level of performance of the ResNet50. Taking this from a comment of Dawnbreaker in the above solution thread:
> "Actually I tried ResNet101 and got a higher val accuracy but a worse LB performance (LB:0.101). Eventually I think maybe an accurate prediction for the landmark images is not as important as an efficient rejection for the non-landmark images."</p>
<p>Perhaps this makes sense. Add to the fact that here we were predicting on ~55k classes compared to last year's 15k, rejection seems even more important.</p>
<p>The complete organized code(including the Keras generators) and pretrained model can be found here: (<a href="https://github.com/mayukh18/Google-Landmark-Recognition-Retrieval-2019">https://github.com/mayukh18/Google-Landmark-Recognition-Retrieval-2019</a>)</p>
<p>Happy if anyone finds this useful. Cheers!
Mayukh</p> |
Google Landmark Retrieval 2019 | 9th Place Solution: VRG Prague | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Approach by Visual Recognition Group (VRG) Prague</h1>
<p>Thanks to the organizers for the great challenge once again and congratulations for the winning teams!</p>
<p>Similarly to last year, our method is based on a combination of CNN-based global descriptors trained on the training set of the first version <a href="https://www.kaggle.com/google/google-landmarks-dataset">Google Landmarks Dataset</a> used in the challenges last year. Unfortunately, I did not have enough time to dedicate to the challenge, so I did not manage to set up a successful training on the training set of this year's <a href="https://github.com/cvdfoundation/google-landmark">Google Landmarks Dataset v2</a>.</p>
<h3>Note</h3>
<p>Performance is presented in the format private / public score.</p>
<h3>Representation learning</h3>
<p>Our network consists of the convolutional layers of ResNet50, ResNet101, and ResNet152 pre-trained on ImageNet, followed by generalized-mean pooling (<a href="https://arxiv.org/abs/1711.02512">GeM</a>), l2 normalization, a fully-connected (FC) layer, and a final l2 normalization. We train a network with contrastive or triplet loss (when training for a longer time, triplet loss resulted in a superior performance) on pairs selected from the <a href="https://www.kaggle.com/google/google-landmarks-dataset">Google Landmarks Dataset</a> training set with images resized to have the largest dimensions at most equal to 1024 (900 for ResNet152 due to memory restrictions). We randomly sample at most 50 positive pairs per landmark, while the hard negatives are re-mined for each epoch of the training. The FC layer is initialized by the parameters of <a href="https://arxiv.org/abs/1604.02426">supervised whitening</a> learned on such pairs. </p>
<p>Training was done using our publicly available <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">CNN Image Retrieval in PyTorch</a> toolbox. </p>
<p>Validation was done on <a href="https://github.com/filipradenovic/revisitop">Revisited Oxford and Paris datasets</a>.</p>
<p>During testing, we perform descriptor extraction at 3 scales (scaling factors of <code>1, 1/sqrt(2), sqrt(2)</code>), where the original scale corresponds to largest image size equal to 800 (the maximum image size of v2 dataset), sum-aggregate the descriptors, and l2 normalize. The <a href="https://arxiv.org/pdf/1811.11147.pdf">PCA whitening with shrinkage</a> is learned/applied on the multi-scale descriptors. The dimensionality of the final descriptor is 2048.</p>
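<p>A compact sketch of multi-scale extraction and PCA whitening with shrinkage (generic code, not the toolbox implementation; the shrinkage exponent and helper names are illustrative):</p>
<p><code>import numpy as np
import torch.nn.functional as F

def multiscale_descriptor(net, img, scales=(1.0, 2 ** -0.5, 2 ** 0.5)):
    # sum-aggregate L2-normalized descriptors extracted at three scales, then re-normalize
    v = 0
    for s in scales:
        x = F.interpolate(img, scale_factor=s, mode='bilinear', align_corners=False)
        v = v + net(x)                              # net returns an L2-normalized GeM descriptor
    return F.normalize(v, dim=-1)

def learn_pca_whitening(X, dim=2048, shrinkage=0.5):
    # PCA whitening with shrinkage learned on descriptors X of shape (n, d);
    # an exponent below 0.5 softens the whitening
    mean = X.mean(axis=0)
    U, S, _ = np.linalg.svd(np.cov((X - mean).T))
    P = U[:, :dim] / (S[:dim] ** shrinkage)
    return mean, P                                  # apply as l2norm((x - mean) @ P)</code></p>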
<h3>Simple NN-search</h3>
<p>When using our trained networks with cosine similarity and nearest neighbor search the performance is:
<code>
ResNet101-GeM: 0.200 / 0.178
ResNet152-GeM: 0.196 / 0.167
ResNet50-GeM: 0.184 / 0.163
</code></p>
<p>Additionally, we evaluated <a href="https://arxiv.org/pdf/1610.07940.pdf">ResNet101 with RMAC pre-trained</a>:
<code>
ResNet101-RMAC: 0.181 / 0.150
</code></p>
<p>Concatenating these four representations and jointly PCA-reducing (with shrinkage) to 4096 dimensions results in:
<code>
concat[4096]: 0.213 / 0.186
</code></p>
<h3>Iterative DBA and QE</h3>
<p>We perform iterative DBA with an increasing number of images used in each iteration. More precisely, we use three iterations, starting with one nearest-neighbor image being used to average with the respective DB image, repeating nearest-neighbor search and adding one additional nearest neighbor in each step. Instead of a plain average, we use weighted average, where the weight is given as squared cosine similarity of the respective DB image and its neighbor. We denote this method as <code>2dba1:1:3</code>.</p>
<p>We perform iterative QE in a similar way as DBA, but now in 10 iterations, denoting it as <code>2qe1:1:10</code>.</p>
<p>Performance of our concatenated 4096-dimensional representation is now:
<code>
concat[4096] -&gt; 2dba1:1:3 -&gt; 2qe1:1:10: 0.254 / 0.229
</code></p>
<h3>Graph-based QE and Diffusion</h3>
<p>We tested two approaches for the final rank computation, graph traversal and diffusion.</p>
<p>Our graph traversal algorithm explores the knn graph with a greedy algorithm where the next image to be retrieved is the one with the highest similarity to any already retrieved image or to the query. This approach is a special case of <a href="http://www.cs.toronto.edu/~mvolkovs/cvpr2019EGT.pdf">Explore-Exploit Graph Traversal for Image Retrieval</a> where only one image is always exploited. This yielded the score of <code>0.257 / 0.235</code>.</p>
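<p>A minimal sketch of this greedy graph traversal (the <code>knn</code> helper returning (similarity, image_id) pairs is an assumption):</p>
<p><code>import heapq

def greedy_traversal(query, knn, k_return=100):
    # grow the ranked list greedily: always retrieve next the image with the highest
    # similarity to the query or to any already-retrieved image
    # knn(x) returns a list of (similarity, image_id) pairs for image x
    heap = [(-s, i) for s, i in knn(query)]
    heapq.heapify(heap)
    retrieved, seen = [], set()
    while heap and len(retrieved) < k_return:
        neg_s, img = heapq.heappop(heap)
        if img in seen:
            continue
        seen.add(img)
        retrieved.append(img)
        for s, nb in knn(img):                      # exploit the newly retrieved image
            if nb not in seen:
                heapq.heappush(heap, (-s, nb))
    return retrieved</code></p>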
<p>For diffusion, we used the conjugate gradient method from the <a href="https://github.com/ducha-aiki/manifold-diffusion">Python implementation</a> of <a href="https://arxiv.org/pdf/1611.05113.pdf">Efficient Diffusion on Region Manifolds</a> which was patched to support sparse matrices on the input. This composes our final submission of <code>0.263 / 0.243</code> on private / public score.</p>
<h3>[14. 06. 2019] UPDATE</h3>
<p>I released our models as a part of our <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch#networks-with-whitening-learned-end-to-end">CNN Image Retrieval in PyTorch</a> toolbox, with the script to evaluate them on R-Oxford and R-Paris.</p>
<p>Multi-scale performance in the following table:</p>
<p>| Model | ROxf (M) | RPar (M) | ROxf (H) | RPar (H) |
|:------|:------:|:------:|:------:|:------:|
| <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet50-gem-w-83fdc30.pth">gl18-tl-resnet50-gem-w</a> | 63.6 | 78.0 | 40.9 | 57.5 |
| <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet101-gem-w-a4d43db.pth">gl18-tl-resnet101-gem-w</a> | 67.3 | 80.6 | 44.3 | 61.5 |
| <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet152-gem-w-21278d5.pth">gl18-tl-resnet152-gem-w</a> | 68.7 | 79.7 | 44.2 | 60.3 |</p> |
Google Landmark Retrieval 2019 | 8th Place Solution: Clova Vision, NAVER/LINE Corp. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Approach by Clova Vision, NAVER/LINE Corp.</h1>
<p>Thanks to the organizers for the awesome challenges (both recognition and retrieval), and congratulations to all participants! <br>
We briefly explain our approach here, and hope the future participants can benefit from this :)</p>
<hr>
<h2>Quick summary of pipeline</h2>
<ul>
<li>[step 0.] Dataset Clustering </li>
<li>[step 1.] Learning Representation</li>
<li>[step 2.] Feature Ensemble</li>
<li>[step 3.] DBA/QE + PCA/Whitening</li>
<li>[step 4.] DELF / Diffusion </li>
</ul>
<hr>
<h3>(1) Dataset Clustering</h3>
<p>At the very beginning of the challenge, stage1 train/test/index datasets were all released, and the labels were provided for the train set only. We wanted to have the model biased to the test/index data distribution since it might help to achieve a high score in the challenge. To do so, we did clustering on stage1 test/index datasets and filtered out noisy clusters using several rules. The virtual class ids were given to each cluster and added to the train v1 dataset to use it for training. In this way, we got approximately 0.01 mAP increase on stage1.</p>
<p>At stage 2, we found the newly released train/test/index dataset is super noisy compared to stage 1. We also made clusters on the train/test/index datasets to remove the "garbage images" based on the constructed clusters. </p>
<p>Finally we train our model using the datasets below: </p>
<ul>
<li>TR1: train v1</li>
<li>TR2: train v1 + test/index cluster v1</li>
<li>TR3: train v1 + test/index cluster v2 (noise cluster)</li>
<li>TR4: train v1 + test/index cluster v1 + train v2 (garbage removed) + test/index cluster v2 (garbage removed), </li>
</ul>
<p>where v1 is from stage1, v2 is from stage2. </p>
<h3>(2) Learning Representation</h3>
<p>We've tried to train various models by combining different backbones, losses, aggregation methods and train sets,</p>
<ul>
<li>training data: TR1, TR2, TR3, TR4</li>
<li>backbones: resnet50, resnet101, seresnet50, seresnet101, seresnext50, seresnext101, efficientnet</li>
<li>losses: xent+triplet, npair+angular</li>
<li>aggregation methods: GeM, SPoC, MAC, </li>
</ul>
<p>which results in 4 x 7 x 2 x 3 = 168 combinations (or more…) <br>
In general, GeM worked well. We didn't have much time to fully fine-tune the efficientnet (tensorflow released version) on the google-landmark-challenge 2019 datasets, so we stopped the training early. </p>
<h3>(3) Feature Ensemble</h3>
<p>Based on the trained models, we extracted the features and found the best combinations for the ensemble. To test the results on a small dataset, we sampled a validation set (VAL_V2) which is a subset of the stage2 datasets. We ran an auto-search process which automatically concatenates 4 different off-the-shelf features, randomly chosen every time, and evaluates the performance of the concatenated feature on VAL_V2. We found the mAP result on VAL_V2 was aligned with the result on real submissions, and this trick helped us save submission trials and time. </p>
<p>Finally, we found the best combinations: </p>
<ul>
<li>seresnet50 / SPoC / TR4 / npair+angular</li>
<li>seresnext50 / SPoC / TR4 / npair+angular</li>
<li>resnet101 / GeM / TR1 / xent+triplet</li>
<li>seresnext50 / GeM / TR3 / xent+triplet</li>
</ul>
<p>The features were L2-normalized before the concatenation.</p>
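<p>A rough sketch of the auto-search and concatenation described above (function names and the trial budget are assumptions):</p>
<p><code>import random
import numpy as np

def l2n(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def auto_search(feature_bank, evaluate, n_trials=200, n_concat=4):
    # randomly concatenate n_concat L2-normalized feature sets and keep the best combo by mAP
    best, best_map = None, -1.0
    names = list(feature_bank)
    for _ in range(n_trials):
        combo = random.sample(names, n_concat)
        feats = np.concatenate([l2n(feature_bank[c]) for c in combo], axis=1)
        score = evaluate(feats)                     # mAP on the VAL_V2 split
        if score > best_map:
            best, best_map = combo, score
    return best, best_map</code></p>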
<h3>(4) DBA/QE + PCA/Whitening</h3>
<p>DBA/QE is a well-known technique for boosting performance. We found the 10 nearest neighbors for each data point for this. Based on our model, we got a 9~10% mAP increase at stage1. The PCA/Whitening improved mAP a bit more when the output dimension was set to 1024. </p>
<h3>(5) DELF / Diffusion</h3>
<p>We applied DELF in two ways: </p>
<ul>
<li>reranking top-100</li>
<li>Diffusion with SV</li>
</ul>
<p>We found simply reranking the top-100 retrieved results was not really helpful. We tested only one case, but it showed a +0.00018 performance increase after DELF reranking (it might depend on the models though). <br>
Instead, we used DELF matching score for graph construction for diffusion, and we got the better performance increase in this case. The gain was about 0.002 in mAP. </p>
<p>For the last step, we applied diffusion on the features after DBA/QE + PCA/Whitening. It always improved the performance, but we performed a limited number of experiments because the diffusion process was very slow.</p>
<hr>
<h2>Our Thoughts on Frequently Asked Questions</h2>
<p><strong>Q1. Our Best Single Model?</strong> <br>
The stage2 mAP of best single model is approximately in range of [0.1718 ~ 0.1995] in public, [0.1878 ~ 0.2211] in private with 1024-dimensional feature. The high score model in public did not necessarily achieve a high score in private. </p>
<p><strong>Q2. Local Descriptor vs Global Descriptor?</strong> <br>
It's not a good idea to use DELF directly to retrieve images from 1M index images due to excessive computation. Suppose we have top-K ranked candidates retrieved by global descriptors, we can rerank the order using local descriptors with geometric verification.<br>
In our case, the gain was very slight though. </p>
<p><strong>Q3. Things that didn't work?</strong> </p>
<ul>
<li>Ensemble with features extracted from multi-resolution input images. (didn't work)</li>
<li>Explore-Exploit Graph Traversal (EGT) (slow on our implementation) </li>
<li>Detect-to-Retrieve (D2R) (cropped regions from D2R landmark detector were used for feature extraction, and the result was worse.)</li>
</ul>
<p>** for more detailed experimental results, please refer to the paper we uploaded to arxiv. (<a href="https://arxiv.org/abs/1907.11854" target="_blank">https://arxiv.org/abs/1907.11854</a>)</p>
Google Landmark Recognition 2019 | 27th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all congratulations to all gold and other medal winners. Also thanks to Kaggle/Google for hosting such an extremely challenging and interresting competition.</p>
<p>I'll share a brief overview of the solution I used to get to a 27th place. I'll share my code later when I have some spare time to clean it up.</p>
<p>I downloaded and resized all train and test images to 224 * 224 pixels. Using the pretrained models generated from the Places365 dataset I made a top-5 class prediction for all train and test images. Initially I started with a simple voting classification. For all train images I removed the ones that had 4 of 5 classes predicted as indoor (which can be loosely interpreted as non-landmark). For the test images I used a 3 out of 5 prediction to specify which ones were non-landmark. This removed several hundred thousand images from the train set and also allowed me to specify tens of thousands of images from the test set as non-landmark.</p>
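<p>The voting rule itself is simple; here is a sketch (the set of "indoor" Places365 classes is whatever was chosen for the filter):</p>
<p><code>def is_non_landmark(top5_places_classes, indoor_classes, min_votes=3):
    # vote over the top-5 Places365 predictions for one image;
    # 4/5 indoor votes were required to drop a train image, 3/5 for a test image
    votes = sum(1 for c in top5_places_classes if c in indoor_classes)
    return votes >= min_votes</code></p>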
<p>As model I used a pretrained Resnet50 with a custom head layer. I added a Generalized-Mean (GeM) pooling layer followed with a fully connected layer to trim down the number of parameters. I used basic augmentation for all images (flip lr, updown, shear and zoom). Head-only training I usually did 4 to 8 epochs (each epoch with 1/4 of the train images). Learning rate 0.0005 and a batch size of 128. Full training of the network for 6 to 18 epochs (each epoch with 1/6 of the train images). For full training batch size would be about 20 to 24 (depending on the amount of classes selected). Learning rate would drop from 0.0001 to 0.00003.</p>
<p>Initially I started with validation on about 10% of the train data. I noticed quickly that it took a lot of additional time and there was no noticeable correlation between local metrics and the LB. So I just skipped validation completely...not something you would do in any production scenario but it worked OK for me in this competition.</p>
<p>For generating my submissions I usually used 3 to 4 models from different epochs to create the predictions. I used TTA varying between 2 and 14. For 2, just a simple fliplr of the test image. When using 14 I also made several crops. Surprisingly, in the end the simple fliplr seemed to give the most stable results.</p>
<p>In the last week I spent a couple of evenings to further optimize the landmark/non-landmark identification and apply that to the stage2 test images and also to further select train images.
That way I was able to gain a 50% increase in my score in the last 3 days. The final model was trained on 83K classes.</p>
<p>Things that didn't work for me:
I tried various Xception models. Training, however, took way longer than my Resnet50 models. Also I got the impression that Xception did not handle the large number of classes very well. In the end I tried 4 or 5 different models but decided to stick with Resnet50.
I also tried using the Hadamard classifier to be able to cope with the large number of classes. The performance seemed quite good but I was only able to get a model with a significant score on the LB if the number of classes that I used was not too large (< 10K). </p>
<p>I'm looking forward to reading the Gold solutions from the winners.</p>
Google Landmark Recognition 2019 | 15th place Rapids.ai solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to winners. I'm by no means an expert in computer vision but I did my best for this competition. Here is a brief version of my solution and more code sharing will be done later.</p>
<p><strong>The major takeaway: please check out the cuml's KNN:</strong>
<a href="https://github.com/rapidsai/notebooks/blob/branch-0.8/cuml/knn_demo.ipynb">https://github.com/rapidsai/notebooks/blob/branch-0.8/cuml/knn_demo.ipynb</a> It is just so much faster. It just took <strong>less than 5 mins</strong> to find 5 neighbors of 110K test images from 4M train images.</p>
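<p>A minimal usage sketch (cuml roughly mirrors the scikit-learn API; the feature shapes here follow the write-up):</p>
<p><code>from cuml.neighbors import NearestNeighbors

def gpu_knn(train_feats, test_feats, k=5):
    # train_feats, test_feats: (n, 512) float32 bottleneck features (numpy arrays work)
    knn = NearestNeighbors(n_neighbors=k)
    knn.fit(train_feats)
    return knn.kneighbors(test_feats)               # (distances, indices) computed on the GPU</code></p>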
<p>The challenge of this competition is the large data size and number of classes. Given 200K classes and 4 million images with an extremely uneven distribution, it is difficult to make progress with a simple classification model. My best classification model only yields 0.04 GAP, far less than the 0.19+ with the following pipeline:</p>
<ol>
<li><p>Finetune a resnet152 and a densenet161 for the most popular 10k landmarks classification. The validation accuracy is about 0.3.</p></li>
<li><p>Extract bottleneck layers, which are 512 features in my case. And apply a KNN to find K=5 nearest images in train given a test image based on these 512 features.</p></li>
<li><p>Repeat step 2 for different checkpoints of the two CNN models and aggregate their KNN results. I used 10 saved CNN models to get 10 KNN models. At this point, each image in test has 10*5 = 50 candidate images from train. There are of course duplicates in these candidates.</p></li>
<li><p>Make it a recommendation problem by predicting a pair: a test image and a candidate train class. The target to predict is binary: if the test image is from the class then the target is 1.</p></li>
<li><p>Build features for each such pair, like the minimum distance from the test image to the images of a class, the popularity of the class, and so forth. I built 15 features in the end.</p></li>
<li><p>Run an xgb model and a NN model on the above data and average their predictions. For each test image, select the class with the highest probability (a rough sketch of steps 4-6 is shown below). </p></li>
</ol>
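<p>A rough sketch of steps 4-6 (the feature list and hyper-parameters below are illustrative, not the author's 15 features):</p>
<p><code>import numpy as np
import xgboost as xgb

def pair_features(test_feat, cand_class, class_feats, class_counts):
    # a few hand-crafted features for one (test image, candidate class) pair
    d = np.linalg.norm(class_feats[cand_class] - test_feat, axis=1)
    return [d.min(), d.mean(), len(class_feats[cand_class]), class_counts[cand_class]]

def fit_reranker(X, y):
    # X: stacked pair features, y: 1 if the test image truly belongs to the candidate class
    model = xgb.XGBClassifier(n_estimators=500, max_depth=6, learning_rate=0.05)
    model.fit(X, y)
    return model      # at test time, keep the candidate class with the highest probability</code></p>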
<p>To be continued!</p> |
Freesound Audio Tagging 2019 | 96th Place Solution: validation of noisy data improve ~0.02 LB score | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Background:</strong> First time kaggler, no server with GPU locally, all computation was on Kaggle kernels.
<strong>Final Workflow:</strong>
1. Curated-only 5-fold CNN model got the optimal LB score
2. Preprocess noisy data to get a raw selected noisy dataset, by clustering 7 features (both temporal and spectral features)
3. Validate the selected noisy dataset by curated-only model
4. Train the mixed model with the curated and validated noisy data, got around 0.02 score boost.</p>
<p><strong>The detailed workflow pic is too big to display in the post, so here it is</strong> <a href="https://drive.google.com/file/d/1AaFDTsedh-3liF7K4V-QAyCWA8l8RtA_/view?usp=sharing">Final Setup</a>
- <strong>Features for deep learning:</strong>
After inspecting the distribution of the audio duration of both curated and test dataset, I've tried clip duration 1, 2, 3 seconds, mel channels 128, 180, 192, and other parameters combination. The best result came with the above config in the final setup. 0.01 and 0.02-3 LB boost from 1s, 3s to 2s respectively. 180 mel channels can improve the result by 0.02 LB, similar to 192 (choose the less feature to save computation time).
- <strong>Noisy Data Preprocessing:</strong>
- - This step is mainly for reducing the size of noisy dataset, for it's difficult to deal with all the noisy data in limited kaggle kernels.
- - Thanks to Essentia: <a href="https://essentia.upf.edu/documentation/algorithms_reference.html">https://essentia.upf.edu/documentation/algorithms_reference.html</a>
- - Found a critical error while writing this summary: I used the wrong window size config (too short) for feature extraction, which may make this step useless and more like a random pick of noisy data.
- <strong>Random pick:</strong>
The curated data would overfit if all frames from the audio files were used. Tried max = 5 or 10; 5 gave the better result, a 0.01 LB boost over max = 10.
- <strong>Balance the data or not?</strong>
There was no time left for me to test with a balanced dataset, because I chose the wrong balancing method at first by both undersampling and upsampling each class to the average count, which produced a worse result because undersampling would reduce the useful data. When I realized this at a pretty late stage, I decided to use the unbalanced dataset, considering the lwlrap from the curated-only model didn't show much bias against the imbalance. But this is still an open question and any discussion is welcome.
- <strong>Network Choice</strong>
Tried several network setups, CRNN with GRU as well (produced competitive result with curated-only data, but no time to tune the config), 1d RNN network with raw data as input was built and tried but performed badly. So finally chose VGG16 and did some experiments on the last layers.
Key takeaways:
1. 2 FC layers with 2048 neurons produced worse local lwlrap than 1 FC with 512, and may produce a memory error in a kaggle kernel. Though not fully tested, it looks from my experiments that a single FC at the last model stage produced better results.
2. Thanks to a newly published paper <em><a href="https://arxiv.org/pdf/1905.05928.pdf">Rethinking the Usage of Batch Normalization and Dropout in the Training of Deep Neural Networks</a></em>, my LB score boosted 0.03 by changing the unit order accordingly, that is BN-Dropout before the weight layer.
3. Dropout rate from 0.2 to 0.5 to 0.7, the LB score boosted by 0.031 and 0.001. Though 0.5 and 0.7 produced similar result, chose 0.7 may make it more robust to overfitting.
4. Batch size didn't show much difference in my experiments (64, 128,256 batch sizes).
5. The more training fold, the better performance, but it requires more training time. And 10-fold improved only 0.004 compared to 5-fold. So used 5-fold at last for both curated and mixed model.
6. The CRNN network showed different per-class lwlrap results, and may be a complement to CNN. (But as a beginner in sound scene classification whose future focus is model interpretability, I'd rather not use ensembling but focus on finding a single optimal network this time.)
- <strong>Data Augmentation:</strong>
SpecAugment without time warping, and mixup are used.
- <strong>Noisy Data Validation:</strong>
*The confidence value = r * label_weight_per_class / lwlrap_per_class*
The greater <em>r</em> is, the stricter the validation rule is. The best score is produced when r=5.
- <strong>Test Setup:</strong>
Found no improvement with any TTA, so I simply used the average of the predictions from all frames.</p>
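<p>One plausible reading of the validation rule above, as a sketch (whether the per-class confidence value is compared to the model's predicted probability in exactly this way is an assumption):</p>
<p><code>import numpy as np

def select_noisy(preds, labels, lwlrap_per_class, label_weight_per_class, r=5.0):
    # keep a noisy clip only if the curated-only model's probability for its label
    # exceeds the per-class confidence value r * label_weight / lwlrap
    threshold = r * label_weight_per_class / lwlrap_per_class
    return [i for i, lab in enumerate(labels) if preds[i, lab] >= threshold[lab]]</code></p>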
<p>Thanks to Kaggle and the competition organizers. I've learnt a lot during this competition. Good luck to all in the second stage.</p>
Freesound Audio Tagging 2019 | [8th place solution] : SpecMix and warm-up pipeline | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello,</p>
<p>I released on a <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">github repository</a> my solution. It is highly inspired from <a href="/daisukelab">@daisukelab</a> notebooks for preprocessing and <a href="/mhiro2">@mhiro2</a> for his simple CNN model.</p>
<p>Key points of this solution are 2 techniques I imagined (maybe they exist under another name 😄)
- <strong>warm-up pipeline</strong> : use the noisy dataset to pretrain the model and for semi-supervised learning, to increase diversity when generating a set of models. In other words, first use the noisy set to warm up model training, then fine-tune the model with the curated set, and then use the noisy set again in a semi-supervised way.
- <strong>SpecMix</strong> : my new data-augmentation technique which takes what I consider the best from SpecAugment and mixup. It generates new samples by applying frequency replacement and time replacement on the inputs and computing a weighted average of the targets. More details in the github <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">README</a></p>
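<p>A rough sketch of the idea (band widths and the single-band choice are simplifications; the exact implementation is in the repository):</p>
<p><code>import numpy as np

def specmix(x1, y1, x2, y2, max_width=0.3):
    # replace one random frequency band and one random time band of x1 with the same
    # bands of x2, and weight the targets by the fraction of the area that was replaced
    x = x1.copy()
    f = np.random.randint(1, int(x.shape[0] * max_width))
    f0 = np.random.randint(0, x.shape[0] - f)
    t = np.random.randint(1, int(x.shape[1] * max_width))
    t0 = np.random.randint(0, x.shape[1] - t)
    x[f0:f0 + f, :] = x2[f0:f0 + f, :]
    x[:, t0:t0 + t] = x2[:, t0:t0 + t]
    lam = 1.0 - (f * x.shape[1] + t * x.shape[0] - f * t) / (x.shape[0] * x.shape[1])
    return x, lam * y1 + (1.0 - lam) * y2</code></p>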
<p>Inference is done using an ensemble of 2 models (<a href="/mhiro2">@mhiro2</a> simple CNN model + VGG16) on 10 folds CV.</p>
<p>More details are available in the github <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">README</a>, as well code and weights.</p> |
Freesound Audio Tagging 2019 | 1st place solution released on a Github | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone, thanks for the competition.</p>
<p>Link to solution <a href="https://github.com/lRomul/argus-freesound">https://github.com/lRomul/argus-freesound</a></p>
<p>Key points:</p>
<ul>
<li>CNN model with attention, skip connections and auxiliary classifiers </li>
<li>SpecAugment, Mixup augmentations </li>
<li>Ensemble with MLP second-level model and geometric mean blending </li>
</ul>
<p>More details you can find in the repository <a href="https://github.com/lRomul/argus-freesound/blob/master/README.md">README</a>.</p> |
Google Landmark Recognition 2019 | 88th place write-up [fast.ai] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>88th place is not super exciting. However, you can surely learn from some of my mistakes and pick apart some of my code. </p>
<p><a href="https://towardsdatascience.com/diving-into-googles-landmark-recognition-kaggle-competition-6975dbe11072?source=friends_link&sk=2f6cf21fd799a1b02302243d487256b5">Here is the medium "friend" link so it's not behind a paywall. </a></p> |
Google Landmark Retrieval 2019 | 26th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Everyone,</p>
<p>Now that final validation has ended, I am sharing my solution.<br>
I trained two models. One uses raw images as input. The other uses DELF as input. I concatenated two vectors from two models for indexing. I also used query expansion and geometric verification.</p>
<p>Code: <a href="https://github.com/toshi-k/kaggle-google-landmark-retrieval-2019">https://github.com/toshi-k/kaggle-google-landmark-retrieval-2019</a></p>
<p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-google-landmark-retrieval-2019/master/img/solution.png" alt="conceptual diagram"></p> |
iMaterialist (Fashion) 2019 at FGVC6 | 16-th Solution: Mask-RCNN on Steroids | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><em>(Also known as Alex doesn't know how to code)</em></p>
<p>After seeing first place's solution, I was shocked at how close I was to reaching the gold area... This competition has taught me a lot about hyper-parameter fine-tuning and architecture selection for instance segmentation, and the greatest (and most painful) lesson of them all is the threshold for non-maximum suppression, and I fell for it. This is my first serious attempt at aiming for grandmaster status, and though the outcome is not so satisfactory, for the most part I enjoyed the competition.</p>
<p>My solution is based on mmdetection, an awesome codebase for detection-related tasks. </p>
<h3>I started with Hybrid Task Cascade with ResNeXt-101-64x4d-FPN backbone, with the following modifications:</h3>
<ol>
<li><p>DCN-v2 for part of the backbone</p></li>
<li><p>Removing mask fusion branch (since it takes too much memory and semantic information in this task is tricky to define)</p></li>
<li><p>Multi-scale training with flipping and resizing range (512,512) -> (1333,1333)</p></li>
<li><p>Guided-anchoring RPN</p></li>
<li><p>Increasing initial mask branch output from 14 pixels to 28 pixels</p></li>
</ol>
<h3>Then I modified the training progress a bit.</h3>
<ol>
<li><p>Default training schedule, 1 image per GPU for 4/8 GPUs</p></li>
<li><p>Changing mask loss from BCE to symmetric lovasz:</p></li>
</ol>
<p><code># lovasz_hinge is assumed to come from the public lovasz-softmax implementation
def symmetric_lovasz(outputs, targets):
    # average the lovasz hinge computed on the mask and on its complement
    return (lovasz_hinge(outputs, targets) + lovasz_hinge(-outputs, 1 - targets)) / 2</code></p>
<ol>
<li><p>Changing smooth L1 loss to GHM-R loss</p></li>
<li><p>Changing sigmoid-based losses to Focal Loss and GHM-C loss</p></li>
</ol>
<h3>Then, for inference:</h3>
<ol>
<li><p>Used soft-nms (but I didn't fine-tune the threshold :( )</p></li>
<li><p>Multi-scale testing in mmdetection wasn't implemented, so I wrote one myself that merge predictions in multiple scales in proposals, bboxes and masks. The following scales are used (with flipping): [(1333,1333), (1333,800), (800,1333), (900,600)]</p></li>
<li><p>Deleting masks with low confidence & low pixels (with fine-tuned threshold)</p></li>
<li><p>Heuristic that makes sure the same pixel is not assigned to two instances with the same ClassId</p></li>
<li><p>(I couldn't submit for the final minutes so I didn't have a chance to test this out) Binary classifier that predict if an object is fine-grained or not. Then, in final submission, if an instance is determined to be fine-grained, it is excluded. The rationale is as below:</p>
<p>a) An included object that is fine-grained but the mask generated is not: 1 FP + 1 FN
b) Simply not predicting that object: 1 FN</p></li>
</ol>
<p>And for this network, it's a se-resnext50 based image classifier with 4-channel input (image + binary instance mask) and one-hot category fusion in the fc-layer. Honestly I could be doing a DAE before the classifier (from dirty testing mask to imagined clean train mask) but I ran out of GPUs and time.</p>
<h3>What didn't work:</h3>
<ol>
<li><p>Un-freezing bn / gn / synchronized bn</p></li>
<li><p>Parallelized testing</p></li>
<li><p>After-nms ensemble (like the one used in Top3)</p></li>
<li><p>Attribute classification (my local F1 was < 0.1)</p></li>
</ol>
<h3>Machines</h3>
<p>I used a variety of rented cloud machines, from 1 * 2080 Ti, 1 * P6000 to 8 * 1080 Ti to 4 * P100. I've basically spent all of the prizes I earned in the Whales competition on this one, plus a few hundred bucks :(</p>
<p>For your reference:</p>
<ol>
<li><p>For 1-image per GPU, you can fit an (1333,1333) image into an RTX 2080 Ti w.o. the enlargement of mask branch or GA-RPN</p></li>
<li><p>But if you want to add GA-RPN, only a GTX 1080 Ti would suffice (the pitfall: <strong>the 1080 Ti has a little bit more memory than its RTX counterpart</strong>)</p></li>
<li><p>My full setting barely fits in a P100.</p></li>
</ol>
<h3>Final words</h3>
<p>I have to say goodbye to kaggle for the following months due to internships and school-related stuff, so it's a little discouraging to know that I missed the last chance this year to get a solo gold by such a small margin. But I guess the journey and the things I've learnt are more important than the recognition itself. So I will come back with full strength after a small break. Also congratulations to the winners, you guys did a fantastic job!</p>
Google Landmark Recognition 2019 | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to everyone that participated! This was a rough competition, more than 4 million images (> 600GB) and 200 thousand classes!
And, of course, congratulations to the winners! Well deserved!</p>
<h1>Summary</h1>
<ul>
<li><strong>Training set</strong>: Sub-set of 2018 Google Landmark Recognition</li>
<li><strong>Descriptor</strong>: Ensemble of DenseNet121-GeM-512FC (512x512, ArcFace) and ResNet101-GeM-2048FC (768x~, ContrastiveLoss)</li>
<li><strong>Landmark classifier</strong>: ResNet50 with T-scaling trained on OpenImages and Places. Remove images with less than 0.99 probability from train-set (4M->2.6M)</li>
<li><strong>Inference</strong>: Extract features for the train and test sets and compute the cosine similarity matrix between test images and class centers. Then, apply K-NN to retrieve the 12 nearest neighbours and apply softmax on them (see the sketch after this list).</li>
<li><strong>Re-rank 1</strong>: Multiply confidence score from inference by classification's score on test.</li>
<li><strong>Re-rank 2</strong>: Perform K-NN search (K=1750) of test set on itself and apply softmax. Multiply last score by this one.</li>
</ul>
<h1>Long version</h1>
<p>I believe two issues must be addressed to tackle this competition. The first is a similarity search problem, where we want to create a good descriptor that would allow us to correctly classify an image based on a K-NN search in a gallery (trainset).
The second part is a detection/re-rank of non-landmarks in both train (gallery) and test (query) sets. This is paramount since the GAP metric used in this competition penalizes us if we make a prediction for a non-landmark with a higher confidence score than any landmark image.</p>
<h2>Similarity search</h2>
<h3>Descriptor</h3>
<p>I reckon a metric learning approach is much more natural to this task than a pure classification one. This is because we have a huge number of classes (> 200k) and some of them have just a few examples. Furthermore, in practice we would not be able to re-train our classifier every time we need to add a new landmark to the gallery.</p>
<p>With this mindeset, I spent most of the time (until the last 2 weeks) training several models with ArcFace loss. I tried several types of pooling schemes such as SPoC, GeM and REMAP (my own re-implementation). Also tried several types of backbones like DenseNet121, ResNet101 and SEResNeXt101.</p>
<p>The ones that worked best for me were DenseNet121-GeM-512FC (512x512) and SEResNeXt101-REMAP-2048FC (384x384). However, SEResNeXt101-REMAP was really slow so I only used DenseNet for the second stage.</p>
<p>I trained on a sub-set of last year's competition; it was around 80k images of 15k classes. A margin of 0.8 for <a href="https://arxiv.org/abs/1801.07698">ArcFace</a> loss was used and images were resized to the specified size. Weird as it may sound, I found it was better to squash the image into a square than to keep the original aspect ratio with zero padding.</p>
<p>I (unfortunately) spent most of my time here and the results were not so great. My best model only scored around 0.33 on the image <strong>retrieval</strong> public LB (no QE, DBA or any re-rank). At this point I decided to give up ArcFace and go for the siamese approach.</p>
<p>In the last weeks I trained a ResNet101-GeM-2048FC (768x~) following Radenovic's <a href="https://arxiv.org/abs/1711.02512">paper</a>, and although I didn't submit to the retrieval competition (the deadline was near) I believe it was considerably better than the other models I had trained so far.</p>
<p>My final descriptor was a 2560-dim ensemble of both models. I L2-normalized both and concatenated as follows: [1.5 * ResNet, 1 * DenseNet]. I didn't have enough time nor RAM to apply PCA on the resultant descriptor.</p>
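<p>A minimal sketch of this weighted concatenation (the array names are placeholders; the 1.5/1.0 weights are the ones quoted above):</p>
<p><code>import numpy as np

def l2n(x, eps=1e-6):
    # row-wise L2 normalization
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

# resnet_feats: (N, 2048), densenet_feats: (N, 512) -> (N, 2560) ensemble
ensemble = np.concatenate([1.5 * l2n(resnet_feats), 1.0 * l2n(densenet_feats)], axis=1)</code></p>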
<h3>Landmark classifier</h3>
<p>Since there were a lot of non-landmark images both in test and train sets I trained a landmark classifier using the descriptor's train set as landmark examples. For the non-landmarks, I used images from OpenImages and Places datasets. I used classes such as: sky, forest, bathroom, bedroom, restaurant, Human faces, food, etc...
It was important to use outdoor classes as negative examples, otherwise the model would be biased to predict any outdoor image as a landmark. Finally, I used <a href="https://arxiv.org/pdf/1706.04599.pdf">T-scaling</a> to calibrate the predictions.</p>
<p>For the train set I removed all the images with classification score less than 0.99, which resulted in around 2.6M images.</p>
<p>For test set, on the other hand, I did not remove any images but used the scores to re-rank the final submission's confidence scores. I did this because, if the conf. score is well ranked, there is no penalty for making predictions on non-landmark images.</p>
<h3>Out-of-distribution images</h3>
<p>Since the descriptor was trained only on landmark images, it is very bad at differentiating between images that are non-landmarks, e.g. human faces or types of food. As a consequence, non-landmark images are matched with very high confidence scores, decreasing the GAP metric. In order to overcome this issue, three re-rank steps were used (explained later).</p>
<h3>Extract features</h3>
<p>The CNNs previously mentioned are used to extract features (descriptors) for both train and test sets. Then for the train set, the class centers are computed by averaging all the image descriptors that belong to the same class. Finally, both test and class centers descriptors are L2 normalized.</p>
<h3>Search</h3>
<p>The K-NN search was performed on the cosine distance matrix using <a href="https://github.com/facebookresearch/faiss">Faiss</a> library. The top 12 NN were kept and softmax was applied on them.</p>
<p>Even though we only need the first nearest neighbour, it was important to keep more of them to apply softmax and convert the cos. similarities into a probability distribution. By doing this we filter out some non-landmarks that have high similarity with a lot of classes in the gallery (train set). Without this or any other re-ranking steps, the score is likely to be close to zero even for a good descriptor.</p>
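<p>A rough sketch of this search step with Faiss (the <code>centers</code> and <code>queries</code> arrays are placeholders for the L2-normalized class centers and test descriptors; on unit vectors the inner product equals the cosine similarity):</p>
<p><code>import faiss
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

index = faiss.IndexFlatIP(int(centers.shape[1]))           # inner-product index
index.add(centers.astype(np.float32))                      # one row per landmark class
sims, idx = index.search(queries.astype(np.float32), 12)   # top-12 class centers per query
probs = softmax(sims, axis=1)                              # turn similarities into a distribution
pred_class, confidence = idx[:, 0], probs[:, 0]            # best class and its confidence</code></p>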
<h3>Re-rank:</h3>
<p>The second re-rank was very straightforward. The original confidence score was multiplied by the probability of being a landmark from the classifier. With this step many non-landmark images got their conf. score reduced.</p>
<h3>Re-rank based on out-of-distribution similarity</h3>
<p>For this step, the K-NN (K=1750) of the test set with itself was computed. The intuition here is that since our model is known to produce high similarity between out-of-distribution images, if we compute the similarity between the test set it is likely that non-landmark images will have much more similar neighbours than landmark ones. Thus, I apply the softmax and multiply by the confidence score from the last step.</p>
<p>Hence, the final score is given by: <code>score = metric_learning_score * classifier_score * ood_score</code></p>
<h1>Leaderboard progression (stage 2)</h1>
<ul>
<li><strong>DenseNet121-GeM-512FC (cls re-rank)</strong>: 0.22379</li>
<li><strong>DenseNet121-GeM-512FC (cls and ood re-rank)</strong>: 0.23862</li>
<li><strong>ResNet101-GeM-2048FC (cls and ood re-rank</strong>): 0.26994</li>
<li><p><strong>Ensemble (cls and ood re-rank)</strong>: 0.28792</p></li>
</ul>
<h1>Resources</h1>
<ul>
<li><p><strong>GPU</strong>: 1x Nvidia 1080ti (11GB)</p></li>
<li><strong>RAM</strong>: 32GB</li>
<li><strong>HD</strong>: 1TB SSD</li>
<li><strong>Processor</strong>: Intel i7 6800k</li>
</ul>
<h1>Acknowledgments</h1>
<p>Thanks Kaggle and Google for hosting this competition. Thanks as well to all the community and <a href="/filipradenovic">@filipradenovic</a> for opening his 2018 Google-Landmark-Retrieval solution.</p> |
iMaterialist (Fashion) 2019 at FGVC6 | 18 place write up (short meditation on results) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is our short meditation on our results and strategy:</p>
<ol>
<li><p>r101 mask rcnn 1024x1024 (1 fold, it took 5 days on our machine)</p></li>
<li><p>ensemble predictions from multiple snapshots of r101 mask rcnn (select the prediction with the highest confidence when multiple predictions overlap; a rough sketch of this step is shown after this list)</p></li>
<li><p>Train several multiclass classifiers (xception, resnext, densenet) to detect whether particular object classes are present on an image, and find an optimal threshold for discarding predictions from mask rcnn. <a href="https://github.com/musket-ml/classification_training_pipeline">Our Classification Pipeline</a></p></li>
<li><p>Train a lot of segmentation networks at least one per class to refine masks from mask rcnn (<a href="https://github.com/musket-ml/segmentation_training_pipeline">Segmentation Pipeline</a>) -> This was our main source of improvements.</p></li>
</ol>
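<p>A minimal sketch of the snapshot-merging step mentioned in point 2, assuming all snapshot predictions for one image are pooled into <code>masks</code> (boolean HxW arrays) and <code>scores</code>:</p>
<p><code>import numpy as np

def mask_iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union > 0 else 0.0

def merge_snapshots(masks, scores, iou_thr=0.5):
    # greedily keep the highest-confidence prediction among overlapping ones
    kept = []
    for i in np.argsort(scores)[::-1]:
        if all(mask_iou(masks[i], masks[j]) &lt; iou_thr for j in kept):
            kept.append(i)
    return kept  # indices of the predictions to keep</code></p>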
<p>At the same time we were desperately trying to find a solution for attributes. Initially nothing worked, but later we used the following approach:</p>
<p>Take a mask crop and the full image, and feed them into two independent inputs of an image classifier with multiple outputs, one per independent attribute group (we found these groups by analyzing attribute co-occurrences, and by assuming that some co-occurrences are labeling errors).</p>
<p>Train a lot of such classifiers, then keep only those objects for which 75% of the classifiers agree with the blended result, and only when most of them make confident predictions.</p>
<p>This gave us a very slight improvement of our score (~0.00500 in total).</p>
<p><strong>Errors:</strong></p>
<ul>
<li>Spending our pretty limited resources on attributes was a big error.</li>
<li>We did not actually realise and use the full potential of mask rcnn :-(. Our initially high place on the leaderboard gave us a false feeling that we had used most of its potential. </li>
</ul>
<p><strong>Resources</strong></p>
<p>We have used: 2x1080Ti on my machine, 2x1080Ti on Denis' machine and a 1080 on Konst's machine. </p>
<p>Regards,
Pavel</p> |
Freesound Audio Tagging 2019 | Semi-supervised part of 20th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First I want to thank researchers at CVSSP, University of Surrey, UK for their great baseline <a href="https://github.com/qiuqiangkong/dcase2019_task2">code</a>. My entire system is built upon it.</p>
<p><strong>Pre-processing</strong>
Nothing special. 32 kHz sampling rate, 500 hop size so that 64 fps. Silence trimming below <code>max - 55 dB</code>, mel size 128. Random excerpt 4-second patches from entire spectrograms. Shorter spectrograms are padded repeatedly.</p>
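<p>A minimal sketch of one plausible reading of this pipeline (the exact FFT size and trimming rule are assumptions):</p>
<p><code>import numpy as np
import librosa

def wav_to_logmel(path, sr=32000, n_mels=128, hop=500):
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels, hop_length=hop)
    logmel = librosa.power_to_db(mel, ref=np.max)      # global max becomes 0 dB
    keep = logmel.max(axis=0) > logmel.max() - 55      # drop near-silent frames
    return logmel[:, keep]

def random_patch(logmel, frames=256):                  # 4 s at 64 fps
    if logmel.shape[1] &lt; frames:                       # repeat-pad short clips
        reps = int(np.ceil(frames / logmel.shape[1]))
        logmel = np.tile(logmel, (1, reps))
    start = np.random.randint(0, logmel.shape[1] - frames + 1)
    return logmel[:, start:start + frames]</code></p>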
<p><strong>Data augmentation</strong>
1. Spec-augment without time warping
2. Time reversal with 80 more classes (binary target with 160 classes in total)
3. I'll talk about MixUp later</p>
<p><strong>Semi-supervised learning</strong>
Almost the same as <a href="https://link.springer.com/chapter/10.1007/978-3-030-20873-8_26">https://link.springer.com/chapter/10.1007/978-3-030-20873-8_26</a>. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1605499%2Fcc2a1c6259b8b72194abef259d524448%2Ffreesound_model.png?generation=1561783764472053&alt=media" alt=""></p>
<p>Suppose V × y_V is a batch of curated data and W is a batch of noisy data without labels; then we have:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1605499%2F23d2ce6b7bb8c8d60b6e5577b09c32d1%2Ffreesound_algo.png?generation=1561784302344628&alt=media" alt="semi-supervised training algorithm"></p>
<p>The backbone is 9-layer CNN, which is the same as the one in public kernel and CVSSP baseline.</p>
<p><strong>Loss function</strong>
Focal + ArcFace.
ArcFace is modified for multi-label classification. 2 reasons to use ArcFace here:
1. Enlarge the margin between positive and negative, so that the threshold eta becomes less sensitive.
2. In the <a href="https://arxiv.org/abs/1801.07698">original ArcFace paper</a>, the authors state that ArcFace enforces less penalty towards samples whose feature vectors have large angle with the weight matrix than other margin-based loss. I suppose it would apply less "weight" to wrong pseudo-labels.</p>
<p>In practice using ArcFace makes my NN converge faster and have higher local lwlrap when using only curated data. I don't know what would happen if I trained the semi-supervised model without it because I started using it at a very early stage.</p>
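<p>For reference, a minimal sketch of the focal part of the loss for multi-label targets (the ArcFace margin on the final layer is not shown, and the alpha/gamma values below are common defaults, not necessarily the ones used):</p>
<p><code>import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # standard sigmoid focal loss for multi-label classification
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()</code></p>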
<p><strong>Training Details</strong>
First I warm up the backbone CNN with the noisy set. For the semi-supervised model, I have 6 samples from the curated set and 58 samples from the noisy set in each batch. Nadam optimizer and a cosine annealing learning rate with linear warm-up are used. One can refer to my technical report, Table 2, for details about the hyperparameters.</p>
<p><strong>Inference</strong>
Arithmetic mean on 0.25s sliding window. TTA only on clips with short duration by repeating the sliding window with Spec-augment. </p>
<p>Single model trained on entire training set gives me 0.712 lwlrap on stage 1 test.</p>
<p>Again, thank the host for hosting such interesting competition. I am still on my way to my first gold medal :)</p> |
iMaterialist (Fashion) 2019 at FGVC6 | [Update] 1st place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Kagglers,</p>
<p>My solution is based on the COCO challenge 2018 winners article: <a href="https://arxiv.org/abs/1901.07518">https://arxiv.org/abs/1901.07518</a>. </p>
<h1>Code:</h1>
<p><a href="https://github.com/amirassov/kaggle-imaterialist">https://github.com/amirassov/kaggle-imaterialist</a></p>
<h1>Model:</h1>
<p><a href="https://github.com/open-mmlab/mmdetection/blob/master/configs/htc/htc_dconv_c3-c5_mstrain_400_1400_x101_64x4d_fpn_20e.py">Hybrid Task Cascade with ResNeXt-101-64x4d-FPN backbone</a>. This model has a metric Mask mAP = 43.9 on COCO dataset. This is SOTA for instance segmentation.</p>
<h1>Validation:</h1>
<p>For validation, I used 450 training samples splitted using <a href="https://github.com/trent-b/iterative-stratification">https://github.com/trent-b/iterative-stratification</a>.</p>
<h1>Preprocessing:</h1>
<p>I applied light augmentations from the <a href="https://github.com/albu/albumentations">albumentations</a> library to the original image. Then I use multi-scale training: in each iteration, the scale of the short edge is randomly sampled
from [600, 1200], and the scale of the long edge is fixed as 1900.
<img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/preproc.png" alt="preprocessing"></p>
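<p>A small illustrative sketch of this preprocessing (the specific light augmentations below are placeholders, not necessarily the exact ones used; the multi-scale rule follows the description above):</p>
<p><code>import random
import cv2
import albumentations as A

light_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
    A.HueSaturationValue(p=0.3),
])

def multiscale_resize(img, short_range=(600, 1200), long_max=1900):
    # sample the short-edge target and cap the long edge at long_max
    short_target = random.randint(*short_range)
    h, w = img.shape[:2]
    scale = min(short_target / min(h, w), long_max / max(h, w))
    return cv2.resize(img, (int(w * scale), int(h * scale)))

# usage: img = multiscale_resize(light_aug(image=img)['image'])</code></p>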
<h1>Training details:</h1>
<ul>
<li>pre-train from COCO</li>
<li>optimizer: <code>SGD(lr=0.03, momentum=0.9, weight_decay=0.0001)</code></li>
<li>batch_size: 16 = 2 images per gpu x 8 gpus Tesla V100</li>
<li>learning rate scheduler:
<code>if iterations &lt; 500:
    lr = warmup(warmup_ratio=1 / 3)
if epochs == 10:
    lr = lr * 0.1
if epochs == 18:
    lr = lr * 0.1
if epochs &gt; 20:
    stop</code></li>
<li>training time: ~3 days.</li>
</ul>
<h1>Parameter tuning:</h1>
<p>After the 12th epoch with the default parameters, the metric on LB was <strong>0.21913</strong>. Next, I tuned postprocessing thresholds using validation data:
* <code>score_thr=0.5</code>
* <code>nms={type: 'nms', iou_thr: 0.3}</code>
* <code>max_per_img=100</code>
* <code>mask_thr_binary=0.45</code></p>
<p>This improved the metric on LB: <strong>0.21913 -> 0.30011.</strong></p>
<h1>Test time augmentation:</h1>
<p>I use 3 scales as well as horizontal flip at test time and ensemble the results. Testing scales are (1000, 1600), (1200, 1900), (1400, 2200). </p>
<p>I drew a TTA scheme for Mask R-CNN, which is implemented in mmdetection library. For Hybrid Task Cascade R-CNN, I rewrote this code. </p>
<p>This improved the metric on LB: <strong>0.30011 -> 0.31074.</strong>
<img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/tta.png" alt="TTA"></p>
<h1>Ensemble:</h1>
<p>I ensemble the 3 best checkpoints of my model. The ensemble scheme is similar to TTA. </p>
<p>This improved the metric on LB: <strong>0.31074 -> 0.31626.</strong>
<img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/ensemble.png" alt="ensemble"></p>
<h1>Attributes:</h1>
<p>I didn't use attributes at all: they were difficult to predict and the removal of classes with attributes greatly improved the metric. </p>
<p>During the whole competition, I deleted classes with attributes: <code>{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}</code> U <code>{27, 28, 33}</code>. But two days before the end I read [the discussion] (<a href="https://www.kaggle.com/c/imaterialist-fashion-2019-FGVC6/discussion/94811#latest548137">https://www.kaggle.com/c/imaterialist-fashion-2019-FGVC6/discussion/94811#latest548137</a>) and added back classes <code>{27, 28, 33 }</code>. </p>
<p>This improved the metric on LB: <strong>0.31626 -> 0.33511.</strong></p>
<h1>Postprocessing for masks</h1>
<p>My post-processing algorithm to avoid intersections between masks of the same class:
<code>def hard_overlaps_suppression(binary_mask, scores):
    not_overlap_mask = []
    # go through the masks from the highest to the lowest score
    for i in np.argsort(scores)[::-1]:
        current_mask = binary_mask[..., i].copy()
        # remove pixels already claimed by higher-scoring masks
        for mask in not_overlap_mask:
            current_mask = np.bitwise_and(current_mask, np.invert(mask))
        not_overlap_mask.append(current_mask)
    return np.stack(not_overlap_mask, -1)</code></p>
<h1>Small postprocessing:</h1>
<p>I deleted objects with an area of less than 20 pixels. </p>
<p>This improved the metric on LB: <strong>0.33511 -> 0.33621.</strong></p> |
Instant Gratification | IG's magic 22nd Public / 33rd Private [LB 0.9756] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In our case, the key to breaking the 0.974 score was clustering each label into 4 clusters. Actually, we ran many experiments, like 3 clusters for label 1 vs 3 clusters for label 0, 3 vs 2, 2 vs 2 and 4 vs 4. Our best CV came from models that used 3:3, but the LB score was always lower and showed that 4:4 gave better results. <strong>So, we continued working on 3:3 and especially on 4:4 models.</strong> </p>
<p>After breaking 0.974 on the LB, CV and LB scores were no longer correlated: our best submission, which scored 0.97488, had a CV of 0.968. So we decided to stop EDA and modeling and started focusing on finding a way that would stabilize the gap between CV and LB... <strong>The key for that was ensembling</strong>: we used many models for ensembling, but the best results were based on these combinations:
model1: GMM 4vs4
model2: model1 + pseudolabeling with thresholds 0.9 for 1 and 0.1 for 0
model3: GMM 3vs3
model4: model3 + pseudolabeling with thresholds 0.9 for 1 and 0.1 for 0</p>
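<p>A rough, model-agnostic sketch of the thresholded pseudo-labeling step used for model2 and model4 (names are illustrative; the base model is whichever GMM variant is being retrained):</p>
<p><code>import numpy as np

def pseudo_label(X_train, y_train, X_test, test_pred, hi=0.9, lo=0.1):
    # add confidently predicted test rows to the training set
    conf = (test_pred > hi) | (test_pred &lt; lo)
    X_aug = np.vstack([X_train, X_test[conf]])
    y_aug = np.concatenate([y_train, (test_pred[conf] > hi).astype(int)])
    return X_aug, y_aug</code></p>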
<h3>What did not work for us :</h3>
<ul>
<li>Outliers removal</li>
<li>Inverting outliers 1 to 0 and 0 to 1 using isolation forest</li>
<li>Neural nets with embedding layers</li>
<li>Stacking with linear metamodels</li>
<li>PCA features </li>
<li>Ordering features according to their importance to finally concatenate all subdatasets together.</li>
</ul>
<p>Finally I want to thank <a href="/abhishek">@abhishek</a> for being such an amazing teammate and for all his great ideas.
PS: This magic was discovered in the last 3 days, and we were using 1:3 models during the first 2 days. All our top models were submitted 1 hour before the competition ended. So try your best and never give up.
<a href="https://www.kaggle.com/rinnqd/kmeans-4-clusters-per-label-gmm-33th-lb-9756">You will find here one of our selected kernels</a></p> |
Instant Gratification | Solution high light and how to choose submission | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>High light.</p>
<ol>
<li><p>After training the GMM model, re-train the GMM by initializing the means of each class to
[(means[0] + means[1])/2, (means[1] + means[2])/2, (means[2] + means[0])/2] to avoid falling into a local minimum.</p></li>
<li><p>Assume your samples’ mean is u, Class 0’s mean is init_u0 and Class 1’s mean is init_u1.
Because some targets are flipped (5%), the real class means are:
u0 = init_u0 + 0.05 * (init_u0 - init_u1)
u1 = init_u1 + 0.05 * (init_u1 - init_u0)
So, I use x0’ = 2 * u0 – x0, x1’ = 2 * u1 - x1 to augment the data (see the sketch after this list).</p></li>
</ol>
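<p>A small numpy sketch of this flip-corrected reflection augmentation (X0 and X1 are placeholders for the samples of each class within one magic subset):</p>
<p><code>import numpy as np

flip = 0.05
u0_raw, u1_raw = X0.mean(axis=0), X1.mean(axis=0)   # observed class means
u0 = u0_raw + flip * (u0_raw - u1_raw)              # correct for flipped labels
u1 = u1_raw + flip * (u1_raw - u0_raw)
X0_aug = 2 * u0 - X0                                # reflect samples through the class mean
X1_aug = 2 * u1 - X1</code></p>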
<p>How to choose submission.</p>
<p>I tested my model on data generated by make_classification. I found that the AUC on 512*512 test samples was always close to 0.975. I concluded that (Public score + Private score) / 2 would be close to 0.975. So I chose a model with a low Public score as one of my submissions.</p>
<p>My kernel: <a href="https://www.kaggle.com/daishu/gmm-3-3-2">https://www.kaggle.com/daishu/gmm-3-3-2</a></p>
<p>It’s really hard to post in English for me. </p> |
Instant Gratification | GMM solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I entered the Instant Gratification competition when it was 16 days to go, so there was already a lot of important information available in Kernels and Discussions. Shortly, in the Instant-Gratification competition we deal with a dataset of the following structure:</p>
<ul>
<li><p><a href="https://www.kaggle.com/c/instant-gratification/discussion/92930#latest-541375">Data seemed to be generated by groups</a> corresponding to the column <strong>wheezy-copper-turtle-magic</strong>, which means that we have 512 independent datasets of the size approximately 512 by 255 for training our models.</p></li>
<li><p>Out of 255 features <a href="https://www.kaggle.com/fchmiel/low-variance-features-useless/">only features with high variance supposed to be important</a>. The number of important features varies from 33 to 47.</p></li>
<li><p><a href="https://www.kaggle.com/speedwagon/quadratic-discriminant-analysis">QDA proved to be a good modeling approach</a> for this case and can give AUC of 0.966. </p></li>
<li><p>This high quality can be explained by the fact that the <a href="https://www.kaggle.com/mhviraf/synthetic-data-for-next-instant-gratification">data probably was generated with make_classification function</a>.</p></li>
</ul>
<p>Knowing that let's look closer at what make_classification function has under the hood. This function has the following parameters:</p>
<ul>
<li>n_samples: The number of samples.</li>
<li>n_features: The total number of features.</li>
<li>n_informative: The number of informative features.</li>
<li>n_redundant: The number of redundant features, linear combinations of informative features.</li>
<li>n_repeated: The number of duplicated features.</li>
<li>n_classes: The number of classes.</li>
<li>n_clusters_per_class: The number of clusters per class.</li>
<li>weights: The proportions of samples assigned to each class.</li>
<li>flip_y: The fraction of samples whose class are randomly exchanged.</li>
<li>class_sep: Larger values spread out the clusters/classes and make the classification task easier.</li>
<li>hypercube: Parameter corresponding to the placement of the cluster centroids.</li>
<li>shift: Shift of the feature values.</li>
<li>scale: Scaling of the features by the specified value.</li>
<li>shuffle: Shuffling the samples and the features.</li>
<li>random_state: Random state.</li>
</ul>
<p>I invite you to follow the sequence of my thoughts about it.</p>
<hr>
<h3>Step 1. Size and features (first 6 parameters of make_classification function).</h3>
<p>From the data structure (512 groups and 255 variables) and size (262144 = 512 * 512), we can assume that <strong>n_features=255</strong> and <strong>n_samples=1024</strong> were taken to generate both training and test data sets (both private and public).</p>
<p>Now, let's talk about features. First of all, repeated features are exact copies of important features, and because we don't see such columns in competition data we'll set <strong>n_repeated=0</strong>. Let's generate data set with make_classification function and following parameters:</p>
<p>~~~~
fake_data = make_classification(
n_samples=1024,
n_features=255,
n_informative=30,
n_redundant=30,
n_repeated=0,
n_classes=2,
shuffle=False)
~~~~</p>
<p>By setting shuffle=False, I force the first 30 columns to be informative, the next 30 to be redundant and the others to be just noise. Doing this 1000 times, let's look at the distribution of the standard deviation of the informative, redundant and random features.</p>
<p><a href="https://ibb.co/HG8WWDR">[Figure: histogram of feature standard deviations for informative, redundant and random features]</a></p>
<p>We see a clear difference in standard deviation for informative, redundant and random features, that's why selecting important features with 1.5 threshold works so well. Moreover, there are no features in the competition data that have std bigger than 5, which leads us to an assumption that <strong>n_redundant=0</strong>.</p>
<p>Number of important features ranges from 33 to 47, so <strong>n_informative</strong> is in <strong>{33,...,47}</strong>. Number of classes is obviously <strong>n_classes=2</strong>. </p>
<hr>
<h3>Step 2. Shift, Scale, and randomness (last 4 parameters of make_classification function).</h3>
<p>Because mean and standard deviation for random columns are 0 and 1 respectively, we can assume that shift and scale were set to default <strong>shift=0</strong>, <strong>scale=1</strong>. On top of that, the standard deviation for important columns also looks the same for competition data and for data that we've just generated. So it's a pretty promising assumption and we can go further. Parameters <strong>shuffle</strong> and <strong>random_state</strong> are not so important, because they don't change the nature of the data set.</p>
<hr>
<h3>Step 3. Most interesting parameters (why QDA is not so perfect?).</h3>
<p>Parameters <strong>n_clusters_per_class, weights, class_sep, hypercube</strong> are the ones that we are not very sure about, especially <strong>n_clusters_per_class</strong>. Weights look to be the same because the target seemed to be balanced. What probably makes it impossible to get a 100% accurate model is that <strong>flip_y>0</strong>, and we cannot predict this random factor in any case. However, what we can try is to build a model that can predict perfectly when <strong>flip_y=0</strong> and leave the rest to luck.</p>
<p>QDA was shown to be a very good approach, but let me show you a case where it does not work so well:</p>
<p><a href="https://ibb.co/bKhGFTq">[Figure: 2D synthetic dataset with 2 clusters per class]</a></p>
<p>Data set was generated with make_classification:</p>
<p>~~~~
make_classification(
n_samples=1000, n_features=2, n_informative=2, n_redundant=0,
n_classes=2, n_clusters_per_class=2, flip_y=0.0, class_sep=5,
random_state=7, shuffle=False
)
~~~~</p>
<p>Now, let's look at how the QDA algorithm handles it:</p>
<p><a href="https://ibb.co/pWRTdtD">[Figure: QDA decision boundary on the 2-clusters-per-class data]</a></p>
<p>Pretty good, however, we see that it doesn't see this structure of four clusters in the data. From make_classification documentation one can find that these clusters are Gaussian components, so the best way to find them is to apply a Gaussian Mixture model. </p>
<p><em>Note: Gaussian Mixture model will only give you clusters, you need to bind these clusters to one of the classes 0 or 1. That you can do by looking at the proportion of the certain class in each cluster and assigning cluster to the dominant class.</em></p>
<p>Let's look at how the GMM algorithm performs on this data:</p>
<p><a href="https://ibb.co/Q9sPPxm">[Figure: GMM clustering of the same data]</a></p>
<p>Nearly perfect! Let's move to step 4.</p>
<hr>
<h3>Step 4. Effect of flipping the target</h3>
<p>Knowing that we can build nearly the best classifier, what is the effect of flipping the target? Suppose that your classes are perfectly separable and you can assign probabilities that will lead to a perfect AUC. Let's see an example of 10000 points, 5000 in each class:</p>
<p>Classes = [0, 0, ..., 0, 1, ..., 1, 1]</p>
<p>Predictions = [0.0000, 0.0001, ..., 0.9998, 0.9999]</p>
<p>Let's flip the target value for 2.5% (250) of the points (1) in the middle, (2) on the sides, (3) randomly, and look at the AUC:</p>
<p><a href="https://ibb.co/M1cNwkw">[Figure: AUC after flipping 2.5% of targets in the middle, on the sides, and randomly]</a></p>
<p>We see that the impact of flips can dramatically change the result. However, let's face two facts:</p>
<ol>
<li>We can not predict this randomness.</li>
<li>We can assume that the flips are evenly spread across our predictions.</li>
</ol>
<p>Let's also look at the distribution of AUC for the randomly flipped target.</p>
<p><a href="https://ibb.co/PjBttyk">[Figure: distribution of AUC under random target flips]</a></p>
<p>It means that results on unseen data can deviate by more than 0.005 in AUC even for the perfect model. So our strategy will be not to find the parameters to raise on the LB of the competition but to find the parameters that can perfectly predict the target for most of the data set.</p>
<hr>
<h3>Step 5. Experiments with synthetic data</h3>
<p>How can we build a model that perfectly predicts the data without overfitting to it? Because we have a strong hypothesis on how the data was created, we can tune model parameters such that the model predicts perfectly for most of the synthetically generated data sets. Not showing the competition data to it will save us from overfitting.</p>
<p>We will be generating a data set of the same structure as the competition data:</p>
<ul>
<li><strong>n_samples=768</strong>, 512 for training and 256 for test</li>
<li><strong>n_features=255</strong></li>
<li><strong>n_important=33</strong></li>
<li><strong>n_clusters_per_class=3</strong></li>
<li>by varying <strong>weights</strong>, we will change the proportion of the dominant class from 0.5 to 0.6</li>
<li><strong>flip=0</strong> to experiment if GM model can be completely correct</li>
<li><strong>class_sep</strong> will be a random number between 0.9 and 1</li>
<li><strong>hypercube</strong> will be chosen randomly between True and False</li>
</ul>
<p>Next thing to do is to generate hundreds of such data sets and optimize parameters of the GMM.</p>
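<p>A minimal sketch of one such experiment loop, under the assumptions listed above (GMM options such as n_init, init_params and reg_covar, which are tuned below, can be passed in):</p>
<p>~~~~
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.mixture import GaussianMixture

def gmm_experiment(n_informative=33, seed=0, **gmm_kwargs):
    rng = np.random.RandomState(seed)
    X, y = make_classification(
        n_samples=768, n_features=255, n_informative=n_informative,
        n_redundant=0, n_repeated=0, n_classes=2, n_clusters_per_class=3,
        weights=[rng.uniform(0.5, 0.6)], flip_y=0.0,
        class_sep=rng.uniform(0.9, 1.0), hypercube=bool(rng.randint(2)),
        shuffle=False, random_state=seed)
    X = X[:, :n_informative]                      # keep the informative columns only
    X_tr, y_tr, X_te, y_te = X[:512], y[:512], X[512:], y[512:]

    gm = GaussianMixture(n_components=6, random_state=seed, **gmm_kwargs)
    gm.fit(np.vstack([X_tr, X_te]))               # unsupervised fit on train + "test"
    # map each component to the dominant class of its training points
    comp = gm.predict(X_tr)
    pos = [c for c in range(6) if (comp == c).any() and y_tr[comp == c].mean() > 0.5]
    proba = gm.predict_proba(X_te)[:, pos].sum(axis=1)
    return roc_auc_score(y_te, proba)

scores = [gmm_experiment(seed=s) for s in range(100)]
print(min(scores), np.mean(scores), max(scores))
~~~~</p>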
<p>For the first attempt, we'll set the parameter n_components in the GMM to 6, because we know that there will be 3 clusters per each of the 2 classes, and do the modeling with other parameters set to default. Let's look at the minimum, maximum and mean score that we can get for 100 different data sets:</p>
<p><code>
Min score = 0.44783168741949336
Mean score = 0.8159223370950822
Max score = 1.0
</code></p>
<p>Wow! Sometimes we even reached the perfect classifier. Let's try to optimize some of the GMM parameters. It has parameters n_init and init_params which means that GMM model will be run several times starting from random initializations and then it selects the best iteration. Let's try to set n_init to 20 and init_params to random.</p>
<p><code>
Min score = 1.0
Mean score = 1.0
Max score = 1.0
</code></p>
<p>Now it's perfect! Let's try it for more than 33 features (for example, let's take 47):</p>
<p><code>
Min score = 0.562083024462565
Mean score = 0.7917138432059158
Max score = 1.0
</code></p>
<p>For 47 features it worked not so well. The secret sauce is the regularization of the covariance matrix. Let's look at the model quality for different values of reg_covar parameter of GM model:</p>
<p><code>
reg_covar = 1: Min score = 0.6936146721462308, Mean score = 0.8782139741161032
reg_covar = 5: Min score = 0.991250080422055, Mean score = 0.9998679217326921
reg_covar = 10: Min score = 0.9987766087594813, Mean score = 0.999945378120418
</code></p>
<p>To tell you the short story, by a simple grid search for the best reg_covar parameter one can find that more features require larger reg_covar for better performance. By doing so you can find that the good parameters are:</p>
<ul>
<li>33 features: 2.0,</li>
<li>34 features: 2.5,</li>
<li>35 features: 3.5,</li>
<li>36 features: 4.0,</li>
<li>37 features: 4.5,</li>
<li>38 features: 5.0,</li>
<li>39 features: 5.5,</li>
<li>40 features: 6.0,</li>
<li>41 features: 6.5,</li>
<li>42 features: 7.0,</li>
<li>43 features: 7.5,</li>
<li>44 features: 8.0,</li>
<li>45 features: 9,</li>
<li>46 features: 9.5,</li>
<li>47 features: 10.</li>
</ul>
<p>We still have one thing not clear.</p>
<h3>Important note: how do we know the number of clusters in advance?</h3>
<p>We were building models under the assumption that there are 3 clusters per class, but how can we know this? Well, the answer is pretty simple: the right choice of n_components will give better results:</p>
<p>#Clusters per class = 4, #Components of GMM = 4: AUC = 0.7170955882352941</p>
<p>#Clusters per class = 4, #Components of GMM = 6: AUC = 0.8263480392156862</p>
<p>#Clusters per class = 4, #Components of GMM = 8: AUC = 0.9587009803921569</p>
<p>#Clusters per class = 3, #Components of GMM = 4: AUC = 0.7400829259236339</p>
<p>#Clusters per class = 3, #Components of GMM = 6: AUC = 1.0</p>
<p>#Clusters per class = 3, #Components of GMM = 8: AUC = 0.9844049755554181</p>
<p>#Clusters per class = 2, #Components of GMM = 4: AUC = 1.0</p>
<p>#Clusters per class = 2, #Components of GMM = 6: AUC = 1.0</p>
<p>#Clusters per class = 2, #Components of GMM = 8: AUC = 0.9924635532493205</p>
<p>For me it seemed like for most of the groups the best option was taking 3 clusters per class, sometimes 2 was slightly better (but it might be the effect of the flipped target). I assumed that this parameter was set to 3 for all groups.</p>
<hr>
<h3>Conclusion</h3>
<p>Parameters found in step 5 will give you an AUC of 0.99 minimum (for the true target of course)! So we will use these parameters to build a model for competition data. </p>
<p>That approach gave me a 52nd place. Thanks for reading!</p> |
Google Landmark Retrieval 2019 | 2nd Place and 2nd Place Solution to Kaggle Landmark Recognition and Retrieval Competition 2019 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for the great challenge, our solution has been released to arxiv. Further, we will open our source code. Thanks.</p>
<p>arxiv link: <a href="https://arxiv.org/pdf/1906.03990.pdf">https://arxiv.org/pdf/1906.03990.pdf</a>
source code: <a href="https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark">https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark</a></p> |
Instant Gratification | Hoxosh solution (private 37) with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is a sketch of our solution. I didn't plan to share given there are better ones, and, more importantly, because I thought our solution would be similar to others. In fact there are two differences, most important one being due to my team mate <a href="/cashfeg">@cashfeg</a> hence the title. Given he is very modest he didn't think it was worth sharing. You can also have a look at how he found 3 clusters per class <a href="https://www.kaggle.com/c/instant-gratification/discussion/96632#latest-558215">here</a>.</p>
<p>We started from the good QDA models disclosed by Vlad. QDA is great, but it was hard to get above 0.9736 on public LB with it if I remember well. The reason was clear to me, it was because QDA only used half of the data, the train data. </p>
<p>A model close to QDA is Gaussian Mixture, it is the unsupervised equivalent of QDA. Issue is GM with 2 components hardly gets you better than 0.80 CV and LB, which is probably why people did not use it till the last week of competition. Then I had the idea to use QDA output to init GM. However, I didn't thought of using means and precisions from QDA to init GM (I did later), or use Gaussian Lasso as Dieter proposed later <a href="https://www.kaggle.com/christofhenkel/graphicallasso-gaussianmixture">here</a>. I rather subclassed sklearn GaussianMixture with an additional way to initialize it with cluster representatives. Feeding QDA prediction to my GM was able to reach 0.9703 on public LB.</p>
<p>This is the point when <a href="/cashfeg">@cashfeg</a> moved us near the top. He found that there were 3 clusters per class. He also had this simple, yet clever idea: he ran GM with 3 clusters on train with target 0, and a separate GM with 3 clusters on train with target 1. Then the 6 clusters were input to my GM model and got 0.973x on public LB.</p>
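<p>A rough sketch of this seeding idea; the author subclassed sklearn's GaussianMixture to initialize it with cluster representatives, while the approximation below simply passes the per-class centroids through <code>means_init</code> (X_train, y_train, X_test are placeholders for one magic subset's selected features; in practice the fitted components stay close to their seeds, but a safer mapping would re-check them against the training labels):</p>
<p><code>import numpy as np
from sklearn.mixture import GaussianMixture

# 3 clusters per class, fitted on the labelled training rows only
gm0 = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(X_train[y_train == 0])
gm1 = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(X_train[y_train == 1])

# seed a 6-component mixture with those centroids and fit on train + test together
means_init = np.vstack([gm0.means_, gm1.means_])
gm = GaussianMixture(n_components=6, means_init=means_init, random_state=0)
gm.fit(np.vstack([X_train, X_test]))

# components 3-5 were seeded from class 1, so sum their responsibilities
proba_test = gm.predict_proba(X_test)[:, 3:].sum(axis=1)</code></p>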
<p>The rest is great team work with everybody contributing. We explored variants, tuned GM parameters, and tried to exploit more of the data generation properties. We didn't find a good way to align centroids on hypercube vertices, but we did filter seeding clusters by keeping runs where the seeding clusters had similar size (another <a href="/cashfeg">@cashfeg</a> idea).</p>
<p>What didn't work? </p>
<p>Using a likelihood function that takes into account target flip rate (yet another idea of <a href="/cashfeg">@cashfeg</a> ). We are still wondering why it didn't help. </p>
<p>Kernels were a pain: the submission process where you have to run the kernel twice, once in commit and then in submit. There were outages in the last few days, and also a lot of time lost because of the lack of warnings or proper error messages. I think KO competitions are a good idea, but I will rely on notebooks I run on another machine if I enter another KO competition. I'll then copy notebooks to kernels for submissions. I do think kernels will keep improving and that my concerns will be memories, but until then I'll rely on Anaconda ;)</p>
<p>Main takeaway for me is that I was very happy to team with such talented team mates.</p>
<p>Update: I made public one of our kernels. It is not the very best nor one that we selected, but it is close to best, and simpler, with private LB above 0.975. <a href="https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412">https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412</a></p> |
Instant Gratification | How to Score LB 0.975 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>LB 0.970</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F50e532f1b9ca5d0ddd783e04e7422f20%2Fimg1.jpg?generation=1602508821789459&alt=media" alt=""></p>
<h1>LB 0.975</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fc2be5521e7c585f7776c21272d0ec925%2Fimg2.jpg?generation=1602508834244506&alt=media" alt=""></p>
<h1>Summary</h1>
<p>It was shown <a href="https://www.kaggle.com/cdeotte/pseudo-labeling-qda-0-969" target="_blank">here</a> that QDA + Pseudo Labeling will score LB 0.970. The competition data appears to be created with Sklearn's <code>make_classification</code>. EDA shows that <code>n_clusters_per_class=3</code>. To score LB 0.975, we need to use QDA with 6 ellipses instead of 2 ellipses. Code is posted <a href="https://www.kaggle.com/cdeotte/3-clusters-per-class-0-975/" target="_blank">here</a>. (with more advanced techniques in Appendix 2).</p>
<p>We find the 6 ellipses with <code>sklearn.mixture.GaussianMixture</code>. First find the 3 ellipses of <code>target=0</code> then the 3 ellipses for <code>target=1</code>. Then label each row of the training data with its ellipse number 0, 1, 2, 3, 4, 5. Finally call <code>QDA.fit(X,labels)</code> and <code>QDA.predict_proba(test)</code>. Then <code>test_pred = Pr(in ellipse 3) + Pr(in ellipse 4) + Pr(in ellipse 5)</code> where <code>Pr()</code> denotes probability.</p>
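<p>A minimal sketch of this 6-ellipse procedure for a single <code>wheezy-copper-turtle-magic</code> subset (X_train, y_train, X_test are placeholders for the high-variance features of that subset):</p>
<p><code>import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA

labels = np.zeros(len(X_train), dtype=int)
gm0 = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(X_train[y_train == 0])
gm1 = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(X_train[y_train == 1])
labels[y_train == 0] = gm0.predict(X_train[y_train == 0])       # ellipses 0, 1, 2
labels[y_train == 1] = gm1.predict(X_train[y_train == 1]) + 3   # ellipses 3, 4, 5

qda = QDA().fit(X_train, labels)
test_pred = qda.predict_proba(X_test)[:, 3:].sum(axis=1)  # Pr(ellipse 3) + Pr(4) + Pr(5)</code></p>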
<h1>EDA to determine <code>n_clusters_per_class</code></h1>
<p>Clusters are located at hypercube corners. Therefore clusters have centers whose coordinates are 1's and -1's. To determine how many clusters per class, plot a histogram of all the variables' means grouped by target. If there is only 1 cluster per class we will see bumps at -1, and 1. If two clusters per class, we will see bumps at -1, 0, and 1. If three clusters per class, we will see bumps at -1, -1/3, 1/3, and 1.</p>
<p>Within <code>magic=0</code> sub dataset, there are approx 40 features and 2 targets. Calculate the means of these 80 groups of numbers. Do this for every sub dataset. In total, we have approx 41,000 means. Now plot their histogram.</p>
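<p>A small sketch of this EDA (the CSV path assumes the usual Kaggle input layout; the 1.5-std rule selects the informative features):</p>
<p><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv('../input/train.csv')
cols = [c for c in train.columns if c not in ('id', 'target', 'wheezy-copper-turtle-magic')]

means = []
for magic in range(512):
    sub = train[train['wheezy-copper-turtle-magic'] == magic]
    useful = [c for c in cols if sub[c].std() > 1.5]   # informative features only
    for t in (0, 1):
        means.append(sub.loc[sub['target'] == t, useful].mean().values)

plt.hist(np.concatenate(means), bins=100)
plt.show()</code></p>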
<p>More explanation why this works is provided in Appendix 1 <a href="https://www.kaggle.com/cdeotte/3-clusters-per-class-0-975/" target="_blank">here</a>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F37c5c7d4faa1689bf77c01efa433ab66%2Fimg3.png?generation=1602508847384837&alt=media" alt=""></p> |
Instant Gratification | Public 4th place / Private 32th place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Before I begin, I'm thankful to my teammates. The idea is below.</p>
<p><strong>1. Feature Engineering</strong>
- KernelPCA(cosine kernel)
- GMM
- AgglomerativeClustering</p>
<p><strong>2. Outlier Replace</strong>
- I predicted values using nusvc.
- Based on a threshold, the true values were replaced when they differed from the predicted value.
- Re-training</p>
<p><strong>3. Pseudo Labeling</strong>
- Run for each of the 512 cases with a ratio of 0.2.</p>
<p><strong>4. Model</strong>
- nusvc</p>
<p><strong>5. Partial Prediction Value Scaling</strong>
- StandardScaler (improved LB by 0.002)</p>
<p><strong>6. Seed ensembles</strong>
- Using four seeds</p>
<p>My code is below
- Kernel public score : 0.97513
- Kernel private score : 0.97565
<a href="https://www.kaggle.com/hyeonho/32th-solution-public-0-97513-private-0-97565?scriptVersionId=16042798">https://www.kaggle.com/hyeonho/32th-solution-public-0-97513-private-0-97565?scriptVersionId=16042798</a></p> |
Instant Gratification | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was a fun competition and I enjoyed it much. Hope you all have had the same experience. </p>
<p>Most of the things I did were revealed in public kernels. The only unrevealed key to this competition was to cluster each class before modeling it. I explained pictorially why it works at <a href="https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974">https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974</a>. Everything else was revealed by the many genius and talented people in this competition. They were not only smart but kind and generous enough to share their findings. I am not gonna name them one by one here because we all know who they are.</p>
<p>I experimented so many different things for this competition including:
* <strong>different number of models to blend</strong>: I believed that the more models I blended together, the more robust the answer would be, so I tried to maximize this parameter and have the maximum number of models I could fit in one run, which ended up being 42 models of the same type.
* <strong>different number of folds per model</strong>: this was another important factor because it had a trade-off in it. The more folds I used, the more robust the predictions. I achieved robustness by increasing the number of models instead, so I used this parameter for regularization. I chose 2 folds in the end, so that even though we have only 512 samples per <code>wheezy-copper-turtle-magic</code>, I used only half of them to train a model and the other half to validate it. I figured that if my model can reach AUC 0.9748+ on only 50% of the data and generalize well on both the training set and the public leaderboard, why not continue with this?
* <strong>different clustering algorithms</strong>: because I knew the data was a multivariate Gaussian distribution around the nodes of a hypercube (refer to <code>make_classification</code> documentation on sklearn and its source code on github), I assumed the best way to cluster it would be <code>mixture.GaussianMixture</code>, and I ended up using it too. However, I thought a lot about different clustering algorithms, studied the document at <a href="https://scikit-learn.org/stable/modules/clustering.html">https://scikit-learn.org/stable/modules/clustering.html</a> carefully, and experimented with other clustering algorithms I believed might be useful.
* <strong>different number of clusters</strong>: I used the elbow rule, t-SNE and other algorithms to figure this one out, but I couldn't verify the exact number of clusters per class used in the data generation. Nonetheless, by analyzing 1000 experiments on my own synthetic data, I found that whether there are 2 or 3 clusters per class in the data itself, lumping the data into 3 clusters gave better validation and test AUCs, so I continued with 3 clusters per class.
* <strong>different classifier algorithms</strong>: It was kinda obvious that GMM is the way to go (Thanks to <a href="/christofhenkel">@christofhenkel</a>, he was very underrepresented in this competition despite his great contributions. You have my respect). Nonetheless, I tried two different algorithms as well.
* <strong>different scalers</strong>: First of all, I figured I needed to add a scaler to ensure non-ill-conditioned matrices for the covariance calculations. I tried two different scalers; the results were not that different, but I continued with <code>StandardScaler</code>, again because the features were linear combinations of standard normal distributions.
* <strong>different regularizations</strong>: other than using 2 folds for training I used regularization parameters in both of the clustering and classifier algorithms. In the end I decided to go with their default values since training on 50% of the data was enough regularization.
* <strong>model by model hyperparameter tuning and regularization</strong>: I tried these options as well but they didn't work out well for me.
* <strong>averaging ranks instead of probabilities</strong>: I tried it both ways; sometimes ranks worked out better, sometimes probabilities (a small rank-averaging sketch is shown below).
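<p>A minimal sketch of rank averaging across several models' predictions:</p>
<p><code>import numpy as np
from scipy.stats import rankdata

def rank_average(pred_list):
    # average the rank-transformed predictions of several models
    ranks = [rankdata(p) / len(p) for p in pred_list]
    return np.mean(ranks, axis=0)</code></p>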
<p><strong>my full code</strong>: <a href="https://www.kaggle.com/mhviraf/mhviraf-s-best-submission-in-instant-gratification">https://www.kaggle.com/mhviraf/mhviraf-s-best-submission-in-instant-gratification</a></p>
<p><strong>how data was generated</strong>: <a href="https://www.kaggle.com/mhviraf/synthetic-data-for-next-instant-gratification">https://www.kaggle.com/mhviraf/synthetic-data-for-next-instant-gratification</a></p>
<p>Below is the TL;DR of <a href="https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974">https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974</a></p>
<p><img src="https://www.kaggleusercontent.com/kf/16019409/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..jKX7oeRp9n8vXS4jU1p_FA.8jtkixQmRslkeBCAyKsGy2kXmOPDxZER9Z0Zud0AT8uSDhMEG_-5Qjov5X9LTzhWx6bdgcAVH56qkIvIzERPdaQM5no1505hT0bsBidIGd93TTUGVAUg1YKsLADVtepqaXPEBPUQjAO-zO6-GnvjAoQ6He0WDuOBxkvlHPpzrRY.zeCjOzfRQVEgKzKBrtkz_g/__results___files/__results___7_2.png" alt=""></p>
<p><img src="https://www.kaggleusercontent.com/kf/16019409/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..jKX7oeRp9n8vXS4jU1p_FA.8jtkixQmRslkeBCAyKsGy2kXmOPDxZER9Z0Zud0AT8uSDhMEG_-5Qjov5X9LTzhWx6bdgcAVH56qkIvIzERPdaQM5no1505hT0bsBidIGd93TTUGVAUg1YKsLADVtepqaXPEBPUQjAO-zO6-GnvjAoQ6He0WDuOBxkvlHPpzrRY.zeCjOzfRQVEgKzKBrtkz_g/__results___files/__results___9_2.png" alt=""></p>
<p>Happy kaggling. Hope to see you all soon in the next competitions.</p> |
Instant Gratification | Public 13th / Private 14th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to the winners and all of those who learned from this competition and thank you to Kaggle team for hosting this interesting competition.</p>
<p>Detailed explanation of my journey and the code to reproduce the result can be found here( <a href="https://www.kaggle.com/harangdev/instant-gratification-14th-solution">https://www.kaggle.com/harangdev/instant-gratification-14th-solution</a> ).</p>
<p>In this thread I'll summarize methods that I used.</p>
<ol>
<li><p><code>GaussianMixture</code> with <code>n_components=6</code> and <code>n_init=20</code> can classify perfectly except the flipped labels if you fit all of the data including private test set.</p></li>
<li><p>To leverage the fact that the metric is ROC AUC, I randomly assigned an integer to each test sample within each predicted class such that the max of the negative class is smaller than the min of the positive class. (That is, I ordered the test samples in a line, rather than grouping them into 0 and 1.)</p></li>
<li><p>Then I selected two submissions that scored lowest on the public leaderboard. I thought that, with a fixed number of flipped labels, a low score on the public set means a high score on the private set. I actually confirmed it by experimenting with a synthetic dataset, but it turned out that this method wasn't very helpful. (Maybe I should dig deeper into why -> <strong>6/25 EDITED: Actually my experiment setting was wrong. Getting a low score on the public leaderboard has nothing to do with getting a high score on the private leaderboard, so it was completely random after making a perfect classifier.</strong>)</p></li>
</ol>
<p>I had about 35 submissions that could have brought me gold, and 15 submissions that could have brought me to 1st place, but I have no regrets because I had my rules when selecting the submissions.</p>
<p>Lastly, special thanks to <a href="/cdeotte">@cdeotte</a> and <a href="/mhviraf">@mhviraf</a> for sharing important ideas about the dataset in the early stage.</p> |
Instant Gratification | Please, share your basic idea! [17th place inside] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>tldr; I used <code>sklearn.GaussianMixture</code> with parameter <code>n_components=6</code>. My best result is CV: 0.97533, Public: 0.97444 Private: 0.97578</strong></p>
<p>Well, we know that datasets possibly were generated using <code>sklearn.make_classification</code>! So, let's play with 2D synthetic <code>make_classification</code> dataset:</p>
<p><code>
X, y = make_classification(
n_samples=2000,
n_features=2,
n_informative=2,
n_redundant=0,
n_repeated=0,
n_classes=2,
n_clusters_per_class=2,
flip_y=0.05,
class_sep=2.5,
hypercube=True,
shuffle=False,
random_state=777
)
</code>
Let's fit <code>sklearn.QuadraticDiscriminantAnalysis</code>:
<img src="https://i8.wampi.ru/2019/06/21/qda.png" alt="qda">
Looks not bad. But wait, what do we know about these clusters? Yes, they all have a Gaussian nature! Let's try <code>sklearn.GaussianMixture</code>:</p>
<p><code>
gm = GaussianMixture(
n_components=4,
n_init=20,
init_params="kmeans",
random_state=456
)
</code>
<img src="https://i8.wampi.ru/2019/06/21/gm.png" alt="gm">
Hmmm... really better! And we just need to map these clusters to target. Let's do it. Check <strong>InstatGratificationClassifier.py</strong> file:
<code>
igc = InstantGratificationClassifier(
estimator=GaussianMixture(
n_components=4,
n_init=20,
init_params="kmeans",
random_state=456
)
)
</code>
<img src="https://i7.wampi.ru/2019/06/21/igc.png" alt="igc">
It looks great 🥈 </p>
<p>And it is really easy to estimate that all of the 512 blocks were generated with the parameter <code>n_clusters_per_class=3</code>. This is my basic idea to beat the 0.974+ score.</p>
<p><strong>Please, share your basic idea!</strong></p> |
Instant Gratification | Public 19st Solution/ Private 70st Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>Preprocessing</h3>
<ul>
<li><ol><li>Feature Selection with variance </li></ol></li>
<li><ol><li>KERNEL PCA</li></ol></li>
</ul>
<p>Try and Fail : PCA, SVD, AutoEncoder, DAE etc</p>
<h3>Feature Engineering</h3>
<p>We used the following three methods to create features describing the distribution of the dataset (a rough sketch follows the figure below).</p>
<ul>
<li><ol><li>GMM_PRED </li></ol></li>
<li><ol><li>GMM_SCORE</li></ol></li>
<li><ol><li>HIST_PRED</li></ol></li>
</ul>
<p><img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FcdwpIM%2Fbtqwfl7ksHu%2FHrOefi5xBBPdVrJGSd2tSk%2Fimg.png" alt=""></p>
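<p>A rough sketch of how GMM-based features of this kind can be built per magic subset (this is my reading of GMM_PRED and GMM_SCORE; the exact construction, and HIST_PRED, are not shown here):</p>
<p><code>import numpy as np
from sklearn.mixture import GaussianMixture

def add_gmm_features(X, n_components=6, seed=0):
    # append GMM responsibilities (GMM_PRED) and per-sample log-likelihood (GMM_SCORE)
    gm = GaussianMixture(n_components=n_components, random_state=seed).fit(X)
    pred = gm.predict_proba(X)              # one column per component
    score = gm.score_samples(X)[:, None]    # log-likelihood of each sample
    return np.hstack([X, pred, score])</code></p>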
<p>By the way, one of the mistakes I found was that GMM_PRED and GMM_SCORE would increase the score if they were duplicated. The results below are still a mystery to our team.</p>
<ul>
<li>Gmm_pred 1 + Gmm_score 0 : LB 0.97465</li>
<li>Gmm_pred 2 + Gmm_score 0 : LB 0.97475</li>
<li>Gmm_pred 3 + Gmm_score 0 : LB 0.97479</li>
<li>Gmm_pred 4 + Gmm_score 0 : LB 0.97480</li>
<li>Gmm_pred 5 + Gmm_score 0 : LB 0.97481 </li>
<li>Gmm_pred 5 + Gmm_score 1 : LB 0.97482</li>
<li>Gmm_pred 5 + Gmm_score 2 : LB 0.97482</li>
<li>Gmm_pred 5 + Gmm_score 3 : LB 0.97473</li>
</ul>
<h3>Model</h3>
<ul>
<li>1st layer : Nusvc + Nusvc2 + qda + svc + knn + lr / Stratified with GMM_LABEL </li>
<li>2nd layer : Lgbm + mlp / Stratified</li>
<li>3rd layer : 1st layer + 2nd layer / Average
<img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FVj3FG%2Fbtqwf3ylVfO%2FvvVCmYTe7z4TFIn8AKkSK1%2Fimg.png" alt=""></li>
</ul>
<h3>Private Test with Make Classification</h3>
<p>The process is as follows.
- 1. Estimate the number n (0~511) of the private magic using the linear relationship between the magic number and AUC
(e.g. private magic 0 with AUC 0.97xx implies a magic number of about 250).
- 2. Create public and private data by seed using make_classification with estimated magic number.
- 3. Test the model on the generated public and private datasets and check the CV, public, and private scores.
- 4. Finally, choose a model whose CV, public, and private scores are all close to 0.975; anything else we considered overfitting.
<img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FcsSklH%2FbtqweIV0tL1%2FecqDnPjxbQ0CKLUPodtEm0%2Fimg.png" alt=""></p>
<p>But actually it did not work out.</p>
<h3>Resource</h3>
<p>Code is below
- kernel public 0.97491 : <a href="https://www.kaggle.com/chocozzz/hyun-stacking?scriptVersionId=15820611">https://www.kaggle.com/chocozzz/hyun-stacking?scriptVersionId=15820611</a> </p>
<p>Best Score
<img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FbX5sHq%2FbtqwfkURVtU%2FNzDybSbf4K5mKcC6whKhf0%2Fimg.png" alt=""></p> |
iMaterialist (Fashion) 2019 at FGVC6 | Attributes classification solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all!</p>
<p>We've tried to do attribute classification. The first attempts were with a 92-way classifier, but we later switched to a technique from image captioning, which generates the next attribute using image features and knowledge of previous attributes. We've used the implementation of the paper <a href="https://arxiv.org/pdf/1502.03044.pdf">"Show, Attend and Tell: Neural Image Caption Generation with Visual Attention"</a> from <a href="https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning">this</a> repo. Below are some findings and visualizations.
<img src="http://i68.tinypic.com/rvfrtl.png" alt="t-shirt proper">
It works to some extent. Above are the proper attributes for a t-shirt. And here is one with an incorrect attribute, though it looks ambiguous (but I'm not an expert in this issue =)):
<img src="http://i66.tinypic.com/ta2emv.png" alt="t-short ambiguous">
Some annotations missed the length of the t-shirt, so if the model predicts it, this turns into a false positive:
<img src="http://i67.tinypic.com/33ngcio.jpg" alt="t-shirt missed length">
The same goes for shorts and other classes. The model can correctly predict the majority of attributes:
<img src="http://i68.tinypic.com/2r6ehc5.png" alt="shorts proper">
However, there are plenty of images with one wrongly predicted attribute, so the whole entry becomes a false positive. But they are ambiguous; compare the shorts length in the first two images and the style in the last two (frayed in annotations vs. no/washed in predictions):
<img src="http://i64.tinypic.com/2882ov5.png" alt="mislabeled shorts"></p>
<p>So with better metric on attributes this challenge can be more interesting.</p> |
Google Landmark Recognition 2019 | From 0.08695 to 0.14681 by TTA | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My computation resources only allowed me to train the Se_NeXt 50 with image size 96 x 96 in a reasonable time. I got a private score of 0.08695 without any TTA. So for TTA, I tried to use "larger" images to pick up details by gradually chopping off the edges by certain fractions (2.5%, 5%, 7.5%, 10%, 12.5%), then resizing the image back to 96 x 96. With left-right flips, I got 12 predictions in total. I then selected the top 1 for each image and the score became 0.14681.
It was interesting to note that averaging the probabilities of the common classes for each image gave a worse score. </p>
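<p>A rough reconstruction of the described TTA (illustrative only, not the author's code; the PIL resampling filter is an assumption):</p>
<p><code>
import numpy as np
from PIL import Image

def tta_views(img, size=96, margins=(0.0, 0.025, 0.05, 0.075, 0.10, 0.125)):
    views = []
    w, h = img.size
    for m in margins:
        # Chop off the edges by the given fraction, then resize back to 96 x 96.
        box = (int(w * m), int(h * m), int(w * (1 - m)), int(h * (1 - m)))
        crop = img.crop(box).resize((size, size), Image.BILINEAR)
        views.append(np.asarray(crop))
        views.append(np.asarray(crop.transpose(Image.FLIP_LEFT_RIGHT)))
    return np.stack(views)  # 12 views; the top-1 class over them is kept
</code></p>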
iMaterialist (Fashion) 2019 at FGVC6 | just a short tutorial to get 25th place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>After removing the classes with attributes and following <a href="https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html">this tutorial</a> with images that have a minimum edge length of 800, I got 25th place (Public: 0.22209 / Private: 0.20699).</p>
Instant Gratification | A way to get silver medal without magic | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to every team that made it through the competition. I have found that most top-team solutions are related to the Gaussian distribution and data generation, so I'm going to talk about something that has not been mentioned. In this competition, stacking and pseudo labeling are enough to win a silver medal, provided they work together.</p>
<p>Before I found the wonderful use of GMM, I used QDA, NuSVC and other models to stack and tried to improve my scores. At that time, pseudo labeling brought us to 0.969, but 0.970 was still a hard barrier to break through.</p>
<p>Maybe a lot of people, like me, have found that <strong>stacking doesn't work with pseudo labeling</strong>. After a few days of observation and reflection, I found that <strong>stacking could improve CV, but not LB</strong>. Then I came to the conclusion that some modifications to the public pseudo-labeling kernels were needed to make stacking work. </p>
<p>The key to the problem is that the original code uses the same test points for training and prediction. If we use cross-validation to predict the test points that were used to make the pseudo labels, stacking starts to work.</p>
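<p>A minimal sketch of that idea (a hypothetical helper, not the kernel's code): each pseudo-labelled test point is predicted by a model that never saw its own pseudo label, so the out-of-fold predictions remain usable for stacking.</p>
<p><code>
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def pseudo_label_cv(X_tr, y_tr, X_te, y_pseudo, n_splits=5):
    preds = np.zeros(len(X_te))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for fit_idx, val_idx in skf.split(X_te, y_pseudo):
        clf = QuadraticDiscriminantAnalysis(reg_param=0.5)
        # Train on real labels plus the pseudo labels of the other folds only.
        clf.fit(np.vstack([X_tr, X_te[fit_idx]]),
                np.concatenate([y_tr, y_pseudo[fit_idx]]))
        preds[val_idx] = clf.predict_proba(X_te[val_idx])[:, 1]
    return preds
</code></p>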
<p>More detailed code <a href="https://www.kaggle.com/wanliyu/how-to-make-stacking-work-with-pseudo-labeling">here</a>. The kernel uses QDA, GMM(not magic), NuSVC and Label Spreading to stack and gets 0.97248 in private score. That's enough to win a silver medal without magic. </p> |
Predicting Molecular Properties | #57 Summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This turned out to be one of my favorite contests. I have always been fascinated by chemistry and have a reasonable knowledge base in chemistry in general. Originally, I hoped to use my chemical intuition to get an edge in this contest. But I realized soon enough that, since I had zero knowledge of molecular modeling, my edge was very slim.</p>
<p>I sat on the fence for a very long time, reluctant to invest time in learning the state of the art of molecule representation and modeling of molecular properties. Luckily, the Kaggle community came through with many great kernels that I could sample quickly and narrow down to a few that were most promising in my hands.</p>
<p>I started working hard on this contest about two weeks ago. For a long time I was out of medal range. Then things started to click around one week ago, and I managed to quickly move into solid silver range within a few days.</p>
<p>There was not much time left and the models took a long time to run. Since there was very little danger of overfitting to the LB, judging from what I observed on oof predictions, I chose not to waste resources on creating oof predictions and instead blended old models with newer versions as they came in, using LB as the only guide. The current standings and scores, #57 with -2.21819 (Private) and #58 with -2.22542 (Public), prove that this was a solid strategic choice.</p>
<p>The three kernels below (with minor tweaks) contributed the most to the final blend:
<a href="https://www.kaggle.com/toshik/schnet-starter-kit">SchNet Starter Kit</a> by <a href="/toshik">@toshik</a> , who barely missed Gold with -2.8 solo score! Amazing!
<a href="https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481">Distance - is all you need. LB -1.481</a> by <a href="/criskiev">@criskiev</a> , whose team got Silver!
<a href="https://www.kaggle.com/xwxw2929/keras-neural-net-and-distance-features">Keras Neural Net and Distance Features</a> by <a href="/xwxw2929">@xwxw2929</a>, who got solo Bronze!
Basically, most credit for my 🥈 goes to the creators of these great kernels. I am glad they all did great in the contest. Congrats!</p>
<p>At first the core of my blend was Sergii’s kernel, then Aicanghai_Smile’s Keras version started to weigh in more, but, in the last 5 days or so, I realized that <strong>the biggest hidden gem is actually SchNet.</strong> Wow! In the final blend it has by far the most weight. </p>
<p>For SchNet my main modifications were training separate models for each type and adjusting the batch size upward (often I used 16 for mini-batches). Each of the 8 models took 6-8 hrs to train for 60-90 epochs using Kaggle’s GPUs. The best SchNet model was -1.971, and I feel that I barely scratched the surface of what it could really accomplish.</p>
<p>99% of computing was done with Kernels. It is great to have access to such powerful GPUs. My main wish is that GPU kernels would not freeze two days before the deadline.
In the end, I ran out of time before I could apply my secret chemical intuition skills : (. I am guessing that a promising strategy to improve further is to split molecules into finer subgroups based on molecular features such as double bonds, rings, etc. and then train separate SchNet models on these subgroups.</p>
<p>Congrats to all! Thank you the organizers and Kaggle for making this a great contest!</p>
<p><strong><em>Update:</em></strong> <a href="https://www.kaggle.com/stocks/schnet-by-j-type-helped-me-win-silver-57">SchNet Kernel by J-Type</a></p>
Predicting Molecular Properties | 3rd solution - BERT in chemistry - End to End is all you need | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our story</p>
<p>As you know, we need to find the appropriate representation for data. That’s why we’re struggling to do feature engineering.</p>
<p>As CPMP mentioned, two members of our team have domain knowledge relevant to this competition.
Sunghwan Choi has a Ph.D. in quantum chemistry and works in the field of quantum chemistry and chemical applications of machine learning. </p>
<p>I’m in a Ph.D. course in chemical engineering and have some experience with quantum calculations and handling molecules. More importantly, I’m a Kaggler :).</p>
<p>We thought that our problem can be solved by conventional graph models whose edge features are distance-derived properties. Those models are quite conventional in machine learning applications on chemical system. Hence, we tried to figure out appropriate hyper parameters and edge features for models</p>
<p>As many Kagglers did, we also tested message passing neural networks by adapting many useful kernels. We thank many Kagglers, especially Heng CherKeng :). By modifying their solutions, we reached a silver place 1 month ago. But the gap between the leading group and us was getting larger. </p>
<p>We had to find a breakthrough. We brainstormed a lot. At that time limerobot, who is an expert in natural language processing, suggested using the raw xyz coordinates.
Sunghwan and I didn’t agree with that, because if we use xyz coordinates instead of distances, translational and rotational invariance is not satisfied. A model which does not preserve those invariances seemed ridiculous to us.</p>
<p>But limerobot went his own way. And he showed that his model beat my GNN.
His model was based on the BERT model. You can see his big success in the Toxic competition on his profile. </p>
<p>Maybe, insights from toxic competition save us :).</p>
<p>Anyway, he doesn't have any domain knowledge. He just wanted to make a model that learns the complex representation from xyz coordinates (end-to-end).</p>
<p>He just input the xyz coordinates of atom1 and atom2, the coupling type, the distance, and the differences of the xyz coordinates (he thought the model could learn the distance formula from that... amazing).</p>
<p>Because we had only one month, we decided to base everything on the Transformer.
After that, we ran a number of experiments for hyperparameter tuning.</p>
<p>Because BERT is a very large model, performance differed according to the number of hidden layers, the type of embedding, and the learning schedule.</p>
<p>One week before the end of the competition, we found our own scheduling and parameters, plus a tiny modification of the readout layer (I will explain the model later).</p>
<p>We made multiple models. Based on their ensemble, we got 3rd place :).</p>
<p>Thanks for reading our stories. We want to share the specific magics below. Please keep going :)</p>
<h1>Overall architecture</h1>
<p>Here is the overall architecture of our model. As you can see, our input sequence is transformed into an output sequence by the BERT encoder.</p>
<p>• The number of encoder layers: 8
• The number of attention heads: 8
• The dropout ratio is 0.1, which is a conventional choice for BERT models.
• For each type, a different readout network with the same architecture is used.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1155353%2F9d825e9c577db8b951a18cb9827e683b%2Ffig_1.png?generation=1567130786795298&alt=media" alt=""></p>
<h1>Input features and embedding layer for them</h1>
<p>Our input sequences of float features, composed of multiple embedding results, are the magic behind this achievement.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1155353%2Fac01c25bcce0d2e5e1638b6f6bd2c952%2Ffig_2.png?generation=1567130820868426&alt=media" alt=""></p>
<p>The multi-head attention layer itself preserves permutational invariance, so the order of the couplings does not change the results. However, invariance under swapping atomic_index_0 and atomic_index_1 is not preserved, since we build each feature vector as a concatenation of the embedding results of the atomic charge, xyz coordinate (position), and atomic number of the two atoms. </p>
<p>• Size of embedding for atomic charge: 32
• Size of embedding for position: 256
• Size of embedding for atomic number: 64
• Size of embedding for distance: 64
• Size of embedding for type: 64</p>
<p>Total feature size for single feature vector is (32+256+64)*2+64+64=832</p>
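<p>A minimal PyTorch sketch of this 832-dimensional pair feature (my illustration; the write-up only gives the sizes, so using linear layers to "embed" the continuous inputs and lookup embeddings for the categorical ones is an assumption):</p>
<p><code>
import torch
import torch.nn as nn

class PairEmbedding(nn.Module):
    # (32 + 256 + 64) per atom, two atoms, plus 64 for distance and 64 for type = 832.
    def __init__(self, n_atom_types=10, n_coupling_types=8):
        super().__init__()
        self.charge = nn.Linear(1, 32)   # continuous input -> linear "embedding"
        self.pos = nn.Linear(3, 256)     # raw xyz coordinates
        self.atom = nn.Embedding(n_atom_types, 64)
        self.dist = nn.Linear(1, 64)
        self.ctype = nn.Embedding(n_coupling_types, 64)

    def forward(self, charge, xyz, atom_num, dist, ctype):
        # charge: (..., 2, 1), xyz: (..., 2, 3), atom_num: (..., 2), dist: (..., 1), ctype: (...)
        def atom_block(i):
            return torch.cat([self.charge(charge[..., i, :]),
                              self.pos(xyz[..., i, :]),
                              self.atom(atom_num[..., i])], dim=-1)
        return torch.cat([atom_block(0), atom_block(1),
                          self.dist(dist), self.ctype(ctype)], dim=-1)  # 832-d
</code></p>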
<h1>Augmentation</h1>
<p>In order to impose pseudo-invariance on our model, we use rotational and translational noise when augmenting data.</p>
<p>• Translational noise: for each axis, Gaussian noise (mean: 0, std: 2) was added.
• Rotational noise: a rotation whose axis is the translational noise vector and whose angle is drawn from Gaussian noise (mean: 0, std: 3.14/2).</p>
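<p>A numpy/scipy sketch of that augmentation (my reading of the description; whether the shift is applied before or after the rotation is an assumption):</p>
<p><code>
import numpy as np
from scipy.spatial.transform import Rotation

def augment_coords(xyz, rng=np.random):
    # Translational noise: per-axis Gaussian(mean=0, std=2).
    shift = rng.normal(0.0, 2.0, size=3)
    # Rotational noise: axis taken from the shift vector, angle ~ Gaussian(0, pi/2).
    axis = shift / (np.linalg.norm(shift) + 1e-9)
    angle = rng.normal(0.0, np.pi / 2)
    rot = Rotation.from_rotvec(axis * angle)
    return rot.apply(xyz) + shift   # xyz: (n_atoms, 3)
</code></p>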
<h1>Regression layers for predicting scalar coupling constants</h1>
<p>• As you know, the spin-coupling (sc) value can be decomposed into four different terms (fc, sd, pso, dso).
• After optimizing the architecture and various losses, we found that an auxiliary target using the contributions gave a big boost.
• During training, the model minimizes loss1 + loss2 with the AdamW algorithm.
• There are 8 regression layers to cover the 8 different coupling types.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1155353%2F939a6ff0e4a6848477a655a2bfc6ad54%2Ffig_3.png?generation=1567130847675121&alt=media" alt=""></p>
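<p>A rough sketch of one such per-type readout (illustrative; the exact wiring is shown in the figure above, and the hidden layer used here is an assumption):</p>
<p><code>
import torch
import torch.nn as nn

class ContributionHead(nn.Module):
    # One readout per coupling type: predicts the scalar coupling constant (loss1)
    # plus the four contributions fc, sd, pso, dso as an auxiliary target (loss2).
    def __init__(self, d_model):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU())
        self.scc = nn.Linear(d_model, 1)
        self.contrib = nn.Linear(d_model, 4)

    def forward(self, h):
        z = self.body(h)
        return self.scc(z).squeeze(-1), self.contrib(z)
</code></p>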
<h1>Specific learning rate</h1>
<p>• We always used linear learning rate decay.
• We think there might be improvements with other learning schedules such as cyclic LR, but we didn’t have time because BERT is very large (about 75M parameters; it took 1~2 days to get a model using a 2~4 V100 machine).</p>
<h1>Pseudo labeling</h1>
<p>• To push the results further, we needed some magic. With the insight that there was little probability of overfitting (Sunghwan Choi’s insight) and experimental results from limerobot, we adopted pseudo-labeling.
• After predicting on the test set, we used the pseudo-labeled test dataset for training.
• The model showed better than -3.4 CV, so we trained the model further with only the train data to mitigate overfitting (finally we got ~-3.11 LB single models).
• The overall learning process is illustrated below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1155353%2F92bc1322f07e876a7e5f1c60684a095d%2Ffig_4.png?generation=1567130868749389&alt=media" alt=""></p>
<h1>Final submission</h1>
<p>• We made 14 models with various seeds and hidden layers (most have 8 layers, others have 6 layers).
• After a weighted average according to the CV score per type, we got -3.16.
• After multiple rounds of pseudo labeling, we had 8 models.
• With a simple average, we got -3.19 (our final score :) 2 hours before the end of the competition.</p>
<h1>What we’ve learned</h1>
<p>• End-to-end works!!! Amazing BERT.
• The learning schedule is very important for modelling molecular properties.</p>
Predicting Molecular Properties | 22nd place solution - Vanilla SchNet | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used SchNet models from SchNetPack. The notebook for 1JHN is available from <a href="https://www.kaggle.com/tonyyy/vanilla-schnet">this link</a>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2F53f40ec4718c0dab9c0eb747728925ae%2Fslide%201.png?generation=1567084000793466&alt=media" alt="Slide 1"></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2F819fdf2b2b7d03dbbc49d6502139e628%2Fslide%202.png?generation=1567084094339157&alt=media" alt="Slide 2"></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2F9f6e3dbdd63a073dd7e7fb4f4f7ba64b%2Fslide%203.png?generation=1567084180507117&alt=media" alt="Slide 3"></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2Ffc2292ec0d8065469d092414fce5c9ec%2Fslide%204.png?generation=1567084247618833&alt=media" alt="Slide 4"></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2F3708ad0cf1709d6c7abdcf863beb401c%2Fslide%205.png?generation=1567119800569793&alt=media" alt="Slide 5"></p>
<h2>Related Kernels</h2>
<p><a href="https://www.kaggle.com/tonyyy/schnet-champs">Potential Energy</a>
<a href="https://www.kaggle.com/tonyyy/schnet-magnetic-shielding">Chemical Shielding</a></p> |
Predicting Molecular Properties | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the competition hosts. <br>
Here I will try to summarize some of the main points of our solution.</p>
<h2>1. Model architecture</h2>
<p>We chose SchNet (CFConv) as a base neural network architecture.
Details are shown below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2311404%2F77175b9a4b8e3a6f892a7ad421fe1851%2F2019-08-29%2018.31.19.png?generation=1567071105142394&alt=media" alt=""></p>
<ul>
<li>Message passing part
<ul><li>SchNet with Edge Update
<ul><li>Might be similar to "SchNet EU" in <a href="https://storage.googleapis.com/kaggle-forum-message-attachments/610869/14042/csc_10th.pdf">10th place</a></li>
<li>Atom pair features which we used are explained in <code>2. Features</code></li></ul></li>
<li>Prepare Edge Update Networks independently per graph distances (distance=1, 2, 3, and 4 or more)</li></ul></li>
<li>Readout part
<ul><li>Sum-up target atom embeddings updated by CFConv (message passing part)</li>
<li>Concatenate Atom pair feature (same as used in message passing part)</li></ul></li>
<li>Prediction part: MLP
<ul><li>2 Layer MLPs for each coupling type (1JHC, 1JHN etc.)</li></ul></li>
</ul>
<h2>2. Features</h2>
<ul>
<li>Atom features
<ul><li>Only use one-hot expression of chemical elements (C, H, etc.)</li></ul></li>
<li>Atom pair features
<ul><li>Gaussian expansion of 3D distance (see the sketch after this list)</li>
<li>Gaussian expansion of dihedral angle features (cosθ, (cosθ)^2)</li>
<li>Bond information (e.g. single-bond, double-bond, etc.)</li></ul></li>
</ul>
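<p>The Gaussian expansion mentioned above can be sketched as follows (illustrative; the grid range, number of centers, and width are placeholders):</p>
<p><code>
import torch

def gaussian_expansion(x, start=0.0, stop=10.0, n_centers=64, width=0.5):
    # Expand a scalar edge feature (distance, cos(theta), ...) onto a grid of
    # Gaussian basis functions, as in SchNet-style edge featurization.
    centers = torch.linspace(start, stop, n_centers, device=x.device)
    return torch.exp(-((x.unsqueeze(-1) - centers) ** 2) / (2 * width ** 2))
</code></p>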
<h2>3. Training</h2>
<ul>
<li>1st-stage
<ul><li>Training the model for all coupling types simultaneously.</li></ul></li>
<li>2nd-stage
<ul><li>Fine-tuning the model trained on the 1st-stage to each coupling types individually.</li></ul></li>
<li>3rd-stage
<ul><li>Perform the same training as the 1st stage, using data including examples pseudo-labeled on 2nd-stage.</li></ul></li>
<li>4th-stage
<ul><li>Fine-tuning the model trained on the 3rd-stage to each coupling types individually.</li></ul></li>
</ul> |
Predicting Molecular Properties | 8th place solution - a densely connected GNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for such an interesting competition. It was a lot of fun exploring the current state of the art of graph neural networks and trying to improve it. I also want to thank <a href="/artgor">@artgor</a> , <a href="/tunguz">@tunguz</a> , <a href="/borisdee">@borisdee</a> and <a href="/psilogram">@psilogram</a> for being amazing team mates. In short, our solution is a custom Graph Neural Network with SchNet interaction at its core, which alone scores ~ -2.95 on leaderboard. The model is a single model for all bonding types.</p>
<p>Since my team already had good engineered features for their lgb (with lb score of -2.0) when I joined, I could fully concentrate on what I love: finding a strong deep learning based solution. </p>
<p><strong>The journey was as following</strong></p>
<p>I started by refactoring Heng's MPNN starter kit (which was at around -1.4 LB) for speed, to iterate on ideas faster. After 2 days I got it down from 15 min to 90 sec per epoch on my GTX 1080 Ti, without losing any accuracy. That enabled me to quickly explore different ideas for improving MPNN, which I then implemented. However, the leaderboard score was still worse than our lgb. </p>
<p>So I continued to read a lot of papers related to GNN:</p>
<ul>
<li>papers referenced in forum, and </li>
<li>newer papers referencing those papers</li>
<li>papers and git repos found by stalking top LB positions</li>
<li>papers shared by <a href="/artgor">@artgor</a>, who posted new ones in our slack before I had read the previous ones</li>
</ul>
<p>I implemented most of them in a hacky way to see if they fit this competition. Among others, I implemented the papers listed below. Some had usable git repositories; others I needed to implement more or less from scratch because they used a different input format or were coded in TensorFlow/Keras, whereas I had set up everything in PyTorch. All of them are designed to predict properties at the molecule level, so I adjusted them where necessary to work at the atom-pair level:</p>
<ul>
<li><a href="https://arxiv.org/abs/1704.01212">MPNN</a></li>
<li><a href="https://github.com/SeongokRyu/Graph-neural-networks/blob/master/tutorials/gat.ipynb">GAT</a></li>
<li><a href="https://github.com/txie-93/cgcnn">Crystal GNN</a></li>
<li><a href="https://arxiv.org/abs/1706.08566">SchNet</a></li>
<li><a href="https://github.com/materialsvirtuallab/megnet">MegNet</a></li>
<li><a href="https://github.com/blackmints/3DGCN">3DGNN</a></li>
</ul>
<p>For me, SchNet not only had the best code base but also got decent results and was relatively fast, so I started building on that one. In general the model consists of 3 parts:</p>
<ol>
<li>encoder for nodes and edges</li>
<li>consecutive blocks of interaction between encoded nodes and edges</li>
<li>regression head </li>
</ol>
<p>As mentioned in other solutions, SchNet lacks several things, but the core interaction between atoms and edges is good and fast.
So I spent some weeks improving that baseline step by step, adding new things I read in other papers or found useful during computer vision competitions. Of every 10 ideas I implemented, at most one worked, and it felt like digging for treasure. In the end, not much was left of the original SchNet apart from the core interaction function between nodes and edges.</p>
<p><strong>The most architectural enhancements were the following</strong></p>
<ul>
<li>using additionally edge features like bondtype, angles</li>
<li>additionally updating edges (as in MegNet)</li>
<li><a href="https://arxiv.org/pdf/1608.06993.pdf">densenet similar connections</a> between interaction blocks</li>
<li><a href="https://arxiv.org/abs/1709.01507">squeeze and excitation block</a> at the end of an Interaction</li>
<li>also encoding and using molecule-level features as in Heng's starter</li>
<li>using edge level hidden units for final regression</li>
<li>using molecule level hidden units for final regression</li>
<li>using a deep architecture (12 interaction blocks)</li>
</ul>
<p><strong>some details on training</strong></p>
<ul>
<li>preprocessing: save graph to disk (40 min) </li>
<li>bs: 64</li>
<li>optimizer: Adam</li>
<li>loss: weighted lmae (weights by inverse frequency of coupling type counts; see the sketch after this list)</li>
<li>lr_schedule: cyclic cosine annealing with an initial lr of 0.0005 and cycles of 200 epochs, which makes it easy to continue training by just adding a new cycle.</li>
<li>validation: groupkfold (10fold) per molecule</li>
<li>epochs: 1200 (and still underfitting)</li>
</ul>
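<p>A minimal sketch of a weighted log-MAE loss of that kind (illustrative; the exact normalization is an assumption):</p>
<p><code>
import torch

def weighted_lmae(pred, target, type_idx, type_weights, eps=1e-9):
    # log(MAE) per coupling type, weighted by the inverse frequency of each type.
    total, wsum = 0.0, 0.0
    for t, w in enumerate(type_weights):
        mask = type_idx == t
        if mask.any():
            mae = torch.abs(pred[mask] - target[mask]).mean()
            total = total + w * torch.log(mae + eps)
            wsum += w
    return total / wsum
</code></p>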
<p><strong>our final submission:</strong></p>
<p>We basically just bagged versions of the above model trained with different hyperparameters, which took some resources, so we split the training between <a href="/borisdee">@borisdee</a>, <a href="/tunguz">@tunguz</a> and myself. A version of MegNet scoring around -2.5 LB also made it into the final blend, but with a very minor contribution. <a href="/psilogram">@psilogram</a> then used a simple MAE optimizer to blend the different models at the bonding-type level -> private leaderboard score of -3.001 </p>
<p><strong>things that did not work</strong>
- additional task of autoencoding edge or node features
- metric learning
- and thousands of others.</p>
<p>Happy to answer any questions and thanks for reading.</p> |
Predicting Molecular Properties | #9 Solution: model with only atom type and distance | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, all Kagglers,</p>
<p>I am glad to share a brief explanation of my method and why I design the model as follows. I really appreciate organizers for holding such a nice competition. I learned a lot and really enjoyed the competition. All questions and opinions are always welcome!</p>
<ol>
<li><p>Strategy
I have some domain knowledge in quantum chemistry, but nothing specific to this competition. The only domain expertise I used is that "all quantum mechanical properties are determined by atom types and the distances between atoms". This was my fundamental strategy, so I used only atom type and distance as input features. However, other participants showed that other features can improve the performance of their models. More surprisingly, some models which don't preserve permutational, rotational, and translational invariance outperformed my models. It really shocked me! I really appreciate their efforts to find such an innovative idea. It gave me inspiration for my research topic. I guess the reason why such things don't cause any problem in this competition is that there are some hidden patterns in the atom ordering and 3D geometry of the molecules in the QM9 set. I wonder how the performance of the models would change if we randomly changed the atom ordering and randomly rotated and translated the geometry of the molecules by large amounts. In addition, I still personally believe that such conditions must be satisfied to develop robust and reliable models to predict molecular properties. </p></li>
<li><p>Model
I represented a molecule as a fully connected graph. The initial node feature includes only the atom type and the initial edge feature includes only the distance between atoms. I started with the MPNN architecture. In contrast to the original MPNN, my model predicts the scalar coupling between two atoms from their node states obtained after the propagation step. I used 8 different fully connected layers, one for each type of scalar coupling. I found that not sharing interaction layers improved the performance. I used 4 interaction blocks with 8 towers and 256 dimensions. This model achieved about -1.7 to -1.8 on the public LB. The following are my major modifications to the models and the corresponding LB scores.</p>
<ul><li>Replace one-hot encoding of distance with the RBF expansion proposed in the SchNet paper: LB -2.1 to -2.2</li>
<li>Edge update network: -2.4</li>
<li>Edge update network with GRU update function: -2.6</li>
<li>Hyperparameter tuning: -2.8</li>
<li>Ensemble of 10 models with different initial weight parameters: -2.98</li></ul></li>
</ol>
<p>Finally, I congratulate all the prize winners and thank all participants. I appreciate the organizers' efforts again.</p>
Instant Gratification | 0.97596 Private LB. My best-scored submission (not chosen) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Surprisingly, <a href="https://www.kaggle.com/merkylove/best-conserative-unfair-cv-with-rf">this</a> was my best private-score submission.
It shares the same idea as <a href="https://www.kaggle.com/merkylove/10th-public-8th-private-solution?scriptVersionId=15757147">my final chosen submission</a>, but uses another booster (gbdt instead of goss) and several other parameter changes.</p>
Predicting Molecular Properties | Solution 7th (CPMP view) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank my team mates, without whom I would not have landed where we landed, and would not even have entered the competition. Let me also say that I'm very happy to see Ahmet become competition GM. And Outrunner is only one gold away from it. I'm sure he'll become GM in his next competition. I was very lucky to team with them. It proves 1 + 1 + 1 > 3.</p>
<p>I also want to thank <a href="/inversion">@inversion</a>, Kaggle staff in general, and CHAMPS for setting up this very challenging, unflawed contest. It is refreshing to not have any leak. Using test data wasn't really helpful either. And the way <a href="/inversion">@inversion</a> managed the two crises (QM9 use, and the unplanned kernel share) was very effective.</p>
<p>I was very reluctant to enter this competition, for two reasons. First, I was convinced that people with domain knowledge would have an unfair advantage. I was right as the top of LB shows a high proportion of teams with chemists on board. Second, it was clear from early discussions that graph NN would be key, and I had absolutely no experience with it. This second reason is a bit weak given most of us here are in same situation!</p>
<p>Then Ahmet asked me if I would join, because we had almost teamed in a previous competition and said we would team ASAP. It motivated me to complete an initial effort using lightgbm. I developed a core set of features, then added features specific to each type. The features are based on the graph defined by bonds computed using Open Babel (thanks <a href="/borisd">@borisd</a> for the starter code). Angles for 2-bond paths and dihedral angles for 3-bond paths were key. I then added features based on the number of 1-, 2-, and 3-bond paths starting at every atom on the path, using aggregates like counts by bond type, angle averages, distance averages, etc. Depending on the type, the number of features was between 500 and 2000. I trained one lgb per type. With this I got to -2.0 on LB with one fold the day I entered. Once trained on all folds, this model got -2.1 on LB. </p>
<p>There are few things I did to improve the model. </p>
<p>First, I use a very large number of leaves, 2**10 -1. I am not sure why this is good, but it was. Other parameter tuning wasn't very important.</p>
<p>Second, I duplicated train and test rows with types 2JHH and 3JHH by swapping the two atoms, before feature engineering, because scalar coupling is symmetric. Final prediction for one row in the original train or test was the average of the two rows predictions after expansion. This was mentioned early in the forum I think.</p>
<p>Third, for 3Jxx bond types, there is often more than one 3-bond path between the two atoms. Here again I duplicated rows, using one row for each 3-bond path between the two atoms, then averaging their predictions back for the final prediction. I did not see this discussed anywhere.</p>
<p>Row duplication improves lgb score by about 0.002, not much, but still useful.</p>
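<p>For illustration, the symmetric 2JHH/3JHH expansion might look like this in pandas (a sketch, not the original code; column names follow the competition files):</p>
<p><code>
import pandas as pd

def expand_symmetric(df):
    # Add a mirrored copy of 2JHH/3JHH rows with the two atoms swapped.
    # The original index is kept, so per-row predictions can be averaged back.
    sym = df[df['type'].isin(['2JHH', '3JHH'])].copy()
    sym[['atom_index_0', 'atom_index_1']] = sym[['atom_index_1', 'atom_index_0']].values
    return pd.concat([df, sym])

# After predicting on the expanded frame (index preserved):
# final = pd.Series(preds, index=expanded.index).groupby(level=0).mean()
</code></p>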
<p>I then teamed with Ahmet, and given that he had a NN with much better performance than my lgb model, I gave up on my base model. I'll let Ahmet describe his NN (edit: description is available <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106421#611963">here</a>). Let me just say that with some improvements, like using logcosh as the loss and expanding its size, we got to -2.64 on LB using a single 4-fold CV run. I was very happy given this moved us way above where lgb could take us. </p>
<p>Lgb was useful still, this time as a post processing step. As I did in a previous competition (Web Traffic Forecasting), I used lgb to predict the residuals of the NN, i.e. predict the difference between the NN prediction and the target. Tuning lgb with huber loss and a decaying learning rate (yes, this common deep learning practice can also be used for lgb) yields a -0.15 boost on LB. This plus bagging several runs eventually led to a score better than -2.8. </p>
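<p>A minimal sketch of that residual post-processing (illustrative; the hyperparameters and the learning-rate schedule are placeholders):</p>
<p><code>
import lightgbm as lgb

def fit_residual_model(X_oof, nn_oof, y_true, X_test, nn_test):
    # Train LightGBM to predict (target - NN prediction) on out-of-fold data,
    # then add the predicted residual back to the NN output on test.
    model = lgb.LGBMRegressor(objective='huber', n_estimators=2000, num_leaves=255)
    model.fit(X_oof, y_true - nn_oof,
              callbacks=[lgb.reset_parameter(learning_rate=lambda i: 0.1 * 0.995 ** i)])
    return nn_test + model.predict(X_test)
</code></p>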
<p>To make sure we got a gold medal we knew we would need another strong model. I started developing a graph NN from scratch in keras, and was getting a CV of -1.9 and LB -2.08 with just 4 features (atom and bond type, mulliken charge, and dihedral angle for 3-bond types). I stopped working on it when we teamed with Outrunner. Indeed, Outrunner's skills in NN clearly outperform mine, by far. He had a set of 13 NN models that yielded almost -2.9 LB once bagged. He further improved them to yield -2.92 or so soon after merging. I will let him describe these models if he wants to. </p>
<p>The first thing we did after teaming was to run my lgb postprocessing on his validation prediction. He was using a single train/val split with only 3% or so of validation data. Yet, even with this limited data lgb could extract about 0.025 more. Blending the result with what we had added another 0.03 or so, which moved us better than -2.95. </p>
<p>The last progress we made was based on the NN approach used. A lot of people used graph NNs where one molecule is a sample, and all couplings for that molecule can be predicted together. Outrunner and Ahmet used another way, with one sample per train and test row. They have features that capture the relative position of every atom compared to the two for which the coupling is computed: for instance, distances to these two atoms, an angle, and more. But the distance to the two atoms and an angle don't uniquely identify atom positions; there is a circular symmetry. A way to remove this symmetry is to consider other atoms. Ahmet was doing it but not Outrunner. I suggested using the middle atoms. For 2JHx, a0 and a1 are both bonded to a middle atom am. For 3JHx, a0 is bonded to am0, which is bonded to am1, which is bonded to a1. Outrunner added the distance to am, or to am0/am1, depending on the type. This boosted performance to where we finally ended. Unfortunately, given we teamed late, he could only retrain 3 of his models with the new feature. Bagging these leads to -2.99 LB. Running lgb on top of it and blending with the Ahmet-CPMP blend yields -3.032. I guess we could have landed higher with more models retrained, but we lacked time and/or GPUs.</p>
<p>Bottom line: I am happy this very challenging competition ended. I probably should have entered earlier; my first sub is from July 22, only 5 weeks ago. But teaming with two talented people more than made up for the lack of time.</p>
<p>Edit: you'll find Ahmet's NN description <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106421#611963">here</a>.</p>
Instant Gratification | Public LB 37 and private LB 10 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My model is very simple: just GMM and QDA.
1. Removed all the irrelevant features having variance less than 2.
2. Further removed all those feature whose Pearson correlation with target is less than 0.010
3. Trained a GMM model for 6 clusters(3 for each class)
4. Used 6 predicted probabilities as feature for next classifier mentioned in step 6.
5. Used a QDA classifier for classifying 6 label obtained in GMM and used its predicted probabilities as 6 another feature.
6. Used another QDA classifier with 12 above feature to predict target class.
7. Trained this model for 3 iterations; in iterations 2 and 3, inverted the labels of 90% misclassified targets using the oof of the previous iteration. Pseudo labeling was decreasing my score so I didn't use it.</p>
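<p>A small sketch of steps 1-3 (an illustrative helper, not the kernel code; X and y are assumed to be numpy arrays):</p>
<p><code>
import numpy as np
from sklearn.mixture import GaussianMixture

def preprocess_and_cluster(X, y, var_thresh=2.0, corr_thresh=0.010):
    # Step 1: drop low-variance columns; step 2: drop columns weakly correlated with the target.
    keep = np.where(X.var(axis=0) > var_thresh)[0]
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in keep])
    cols = keep[corr >= corr_thresh]
    # Step 3: GMM with 6 clusters (3 per class).
    gmm = GaussianMixture(n_components=6, n_init=10, random_state=0).fit(X[:, cols])
    return cols, gmm.predict_proba(X[:, cols])
</code></p>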
<p>My kernel's URL:
<a href="https://www.kaggle.com/raghaw/my-gratification-v2-10th-place-on-public-lb?scriptVersionId=16030728">https://www.kaggle.com/raghaw/my-gratification-v2-10th-place-on-public-lb?scriptVersionId=16030728</a></p> |
Predicting Molecular Properties | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for such a great competition and a great dataset. I just remembered the very recent nightmarish LANL competition, where the organizers were totally inactive and the dataset leaked; by contrast, this Molecule competition is indeed one of the best ever. I love it very much.</p>
<p>And special thanks to all of my teammates, each of whom did a great deal of work from the beginning. Thanks so much.</p>
<p>In summary, main methods are:</p>
<ul>
<li><p>SchNet1 (1st train) with input features: node features (x, y, z, atom_type, mulliken_charges) and edge features (bond binary (for pairs of atoms with distance 1), angle (for pairs of atoms with distance 2), and dihedral (for pairs of atoms with distance 3)). We predict the 4 scc (scalar_coupling_constant) contributions and scc itself. This model gave us around -2.7 LB.</p></li>
<li><p>SchNet2 (2nd train): the model is the same as SchNet1, except we feed 4 out-of-fold scc contributions and scc itself as edge features where applicable, EXCEPT the edge of the learning instance itself (so in other words, we prevent leak from this, and this is not a stacking method). This model gave us -2.85 LB.</p></li>
<li><p>Scheduling: Cyclic LR. </p></li>
<li><p>Train 3 to 4 different seeds of 4-fold for each type. Some small types (xJHN) we also had an extra training of 8-fold.</p></li>
<li><p>Quantile regression and median averaging: since MAE cares more about predicting the median of the target, we used quantile regression (with q=0.5) instead of linear regression for 2 sub-tasks: 1) blending the periodic "valley" model checkpoints of CyclicLR, where we removed the earlier 50% of checkpoints and kept only the last 50%; 2) blending the oof predictions of the SchNet1 and SchNet2 seeds (a short sketch follows this list). This gave us around a 0.01-0.02 boost on LB. For averaging test fold predictions, we also used the median instead of the mean, and found that it worked perfectly on LB.</p></li>
</ul>
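<p>A minimal sketch of the quantile-regression blending idea (illustrative; sklearn's <code>QuantileRegressor</code> is used here as a stand-in for whatever implementation was actually run):</p>
<p><code>
import numpy as np
from sklearn.linear_model import QuantileRegressor

def blend_with_median_regression(oof_preds, y_true, test_preds):
    # oof_preds / test_preds: (n_samples, n_models) stacked predictions.
    # q=0.5 targets the median, which suits an MAE-style metric better
    # than least-squares blending.
    qr = QuantileRegressor(quantile=0.5, alpha=0.0)
    qr.fit(oof_preds, y_true)
    return qr.predict(test_preds)
</code></p>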
<p>Now, I would like to describe the process of methods from start to end, like a journey.</p>
<p>First we began with tree-based models, without much chemistry knowledge.
- Zidmie <a href="/zidmie">@zidmie</a> in our team wrote custom code to extract bond information from only the 3D coords, which was really accurate and impressive (and later proved unnecessary since there are chemistry toolboxes to do that).
- We even got the aromatic indicator of each atom from his method! His early work laid a good foundation that got our team to the top 15, at which point we were strong enough to add in <a href="/joshxsarah">@joshxsarah</a> and Kyle <a href="/petersk20">@petersk20</a>, who had great mindsets and diverse ideas.</p>
<p>Permutation feature importance proved to be a spectacularly good technique for eliminating useless features. Josh and Kyle, with their strong features together with permutation importance, once got us to 6th place. But not long after that we gradually slipped down the LB, as we saw that tree-based models can't compete with GCNs in this competition.</p>
<p>We then developed custom GCN code based on Heng's shared code in the forum, which got us -2.5 to -2.6 LB, but could never reach -2.7. </p>
<p>Then we merged with Akira <a href="/akirasosa">@akirasosa</a> , who has extremely great skills in NN!</p>
<p>For the rest of the competition, with 5 people, we exploited all available local GPUs (6) as well as 6 Google Cloud Platform accounts (each with $300 of free credits) to boost the score until the end.</p>
<p>My teammates, especially Akira, will add in necessary information regarding SchNet if I missed something. You can view his separate post about SchNet architecture here: <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106293#latest-610869">https://www.kaggle.com/c/champs-scalar-coupling/discussion/106293#latest-610869</a></p>
<p>Our kernels are here:
[GCN from Heng's code] <a href="https://www.kaggle.com/joshxsarah/custom-gcn-10th-place-solution">https://www.kaggle.com/joshxsarah/custom-gcn-10th-place-solution</a>
[SchNet light Kaggle version] <a href="https://www.kaggle.com/petersk20/schnet-10th-place-solution">https://www.kaggle.com/petersk20/schnet-10th-place-solution</a></p>
<p>Thanks for reading.</p> |
Predicting Molecular Properties | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First off, I would like to thank Kaggle and the hosts for a well organised competition and giving us such an interesting scientific problem to work on. I learned a great deal about training and developing neural nets (and a little bit about chemistry ;)). I hope that the final solutions are useful to the organizers as well.</p>
<p>That being said, I’m excited to share my final model. In short: I used a custom GNN architecture that combines the message passing elements from the MPNN model (<a href="https://arxiv.org/pdf/1704.01212.pdf">https://arxiv.org/pdf/1704.01212.pdf</a>) with the heavy use of multi-head attention layers as in the Transformer Encoder (<a href="https://arxiv.org/pdf/1706.03762.pdf">https://arxiv.org/pdf/1706.03762.pdf</a>). On its own an 8-fold submission of this model scored -3.039 on private LB (single folds scored between -2.83 and -2.88 on their validation sets).</p>
<p><em>Journey to the final model:</em>
Quite quickly upon entering this competition I found several discussion topics on Graph CNNs, which led me to the MPNN paper. I started implementing this model from scratch in PyTorch, and given the initial promising results, and later also finding Heng's MPNN starter kit, I was convinced I was on the right track.</p>
<p>As many others have noted, the MPNN model doesn’t necessarily work well as a plug-and-play solution for this problem. But with some customisation, I was able to get an 8-fold MPNN model to produce a -2.873 score on the private LB.</p>
<p>The most important things I changed or added to the MPNN architecture to get there:
• Add separate message passing function for scalar-coupling atom pairs.
• Incorporate additional edge, atom and molecule level features. Most significantly, angle based features for scalar coupling edges (i.e. dihedrals for 3J, cosine angles for 2J and nearest neighbour cosine angles for all types).
• For the Message Passing Function applied to bond connections, I added an attention mechanism over incoming messages based on cosine angles.
• Replace set2set with a Gaussian attention layer based on Euclidean distance.
• Introduced scalar coupling edge state that is updated alongside atom states.
• The head of the model used dense skip connection to the final node states of the update process and raw features.
• The write head also included a residual block that was specialised per scalar coupling type.
• Before the final prediction I first predicted each of the four scalar coupling contributions separately. These four terms plus a learned residual are finally added to predict the scalar coupling constant. The four contribution predictions where added to the loss function.
• Increased hidden state dimension (up to 300).</p>
<p>As the long list suggests, the model grew increasingly complex and the additions started to feel increasingly hacky. Also, I wasn’t happy with how the model scaled with hidden state dimension. Anything above 300 no longer fit on the Kaggle kernel GPU memory. This mostly stems from the edge network whose final layer has a weight parameter whose size grows at a cubic rate (i.e. 300^3 = 27M params). I wasn’t able to make any compromise on the flexibility of this edge network without drastically reducing performance.</p>
<p><em>Final Model:</em>
With this in mind I started thinking of a different architecture where message passing layers and attention layers (which seemed to be the most promising elements from my MPNN) could be neatly stacked. The general architecture of the Transformer Encoder provided the basic building blocks for this: stacked encoder blocks and sublayers connected through residual connections with layer norm. In this case however the encoder blocks are build up of two message passing layers, followed by three different types of multi-head attention layers with a final point-wise feed-forward network.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2862251%2Fbfb9329eff94b50b6d393bed498f6188%2FMolTransformer.png?generation=1567079164494797&alt=media" alt=""></p>
<p>The message passing layers (one for bond connections and one for virtual scalar coupling edges) are largely as described above for the MPNN. Note that unlike the attention layers, the message passing layers' parameters are tied across encoder blocks. To allow for much larger node dimension I changed the full matrix multiplication for message passing into a convolution with a fixed kernel size of 128. In the MPNN model this significantly decreased performance but for the Transformer model the gain from increased hidden state size easily offset the loss due to less flexible message passing.</p>
<p>The three multi-head attention layers are:
1. Euclidean distance based Gaussian attention. Similar to the Gaussian attention module in my MPNN.
2. Graph distance based attention. Computes attention through an embedding of the graph distance matrix.
3. Scaled dot product self-attention. Exactly the same as in the Transformer Encoder.</p>
<p>Although the final layers in the block resemble the encoder blocks of the Transformer model, there are several additional layers designed specifically to capture the structure and relationships among atoms in a molecule. The final write head is the same as described for the MPNN.</p>
<p><em>Training:</em>
Training was facilitated by the fastai library. The final model was trained for 100 epochs on all folds using the one_cycle learning rate policy, a max learning rate of 5e-4 and weight decay of 1e-2. I also experimented with snapshot ensembling over an additional 40 epochs using a warm restart learning rate schedule, but the gains from this technique are marginal (~ -0.01). Up to the final submission I had done nearly all training and experimenting on Kaggle kernels. But, for the final submission I used a model with a hidden node dimension of 650, 10 encoder blocks and 10 attention heads. This model was pretty big (at least by my standards) and implied I had to do the final training on the GCP platform, using distributed training on 2 V100 GPUs. Total training time on this setup was roughly 1.5 days per fold. My top submission also included the 8-fold submission of the MPNN model. This gave me about -0.05 of a boost.</p>
<p>I want to specially thank <a href="/hengck23">@hengck23</a> for sharing the MPNN starter kit, which really helped me figure out how to set up training and load in data for these GNN models, and the people who submitted public kernels. Nearly all my features either came from or were heavily inspired by these kernels.</p>
<p>I’ll try to get a kernel published in the next few days that trains a smaller version of the transformer model.</p>
<p>Thanks for reading and I’ll happily answer any questions</p>
<p><em>Update:</em>
- github repo: <a href="https://github.com/robinniesert/kaggle-champs">https://github.com/robinniesert/kaggle-champs</a>
- kaggle kernel of smaller model w 15 epochs of training: <a href="https://www.kaggle.com/robin111/molecule-transformer-w-message-passing">https://www.kaggle.com/robin111/molecule-transformer-w-message-passing</a></p> |
Predicting Molecular Properties | chimps solution (#37) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, </p>
<p>37ish is not too glamorous, but we worked very hard and I wanted to share our approach :)
Our solution was based mainly on a GNN which scored -2.317, provided by our team mate <a href="https://www.kaggle.com/jessky">https://www.kaggle.com/jessky</a>, and an lgb ensemble that scored a bit less than -2.1. Then our best nnet (bagged) scored almost -1.9. For the lgb and nnet models we benefited a lot from oof predictions of FC. We kept adding different predictions and the score improved by ~ -0.4 just from adding those. Besides some ad-hoc FE, we obviously made use of all the amazing distances and calculations available in kernels, mainly: acsf descriptors and the Sergii distances ( <a href="https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481">https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481</a>) </p>
<p>Our final ensemble can be seen in here: <a href="https://www.kaggle.com/chechir/champs-ensemble">https://www.kaggle.com/chechir/champs-ensemble</a>
Thanks to the organisers, and all the awesome kernel authors in this competition !</p> |
Predicting Molecular Properties | 43 place solution of 2 Experts and the farmer | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we would like to thank everyone who participated in this competition, host members and Kaggle Team! I’m happy to finish this competition with really stable LB, no shake down :D</p>
<h2>Modeling part</h2>
<p>Mainly our team used LightGBM with various features (around 500), and adopted an NN model (MPNN), customized based on <a href="/hengck23">@hengck23</a>’s MPNN model, to bring model diversity and enhance the blending. We also understood that seed averaging is very useful in this competition, so we utilized a large number of seed averages.</p>
<p>In detail, please see the following slide.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F291723%2F70b9d209795661e831c5216aea880947%2Fsolution.png?generation=1567041478975554&alt=media" alt="solution"></p>
<h1>Features part</h1>
<p>We struggled to find features describing the global/local environment of atoms. We made a lot of features by using chemical libraries (Openbabel, Dscribe, RDkit). These libraries are very helpful because we had little domain knowledge at the starting point.</p>
<p>Some effective features are below.</p>
<h3>distance</h3>
<p>Some distances are essential features. We experimented with many distances between
- index0 and index1 atoms
- index0/1 and atom in α,β,γ-substituents
- index0/1 and atom of each type(‘H’,‘C’,‘N’,‘O’)
- nearest neighbors of index0/1</p>
<p>We also use features from this brilliant notebook(<a href="https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481">https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481</a>) by <a href="/criskiev">@criskiev</a></p>
<h3>Angle</h3>
<p>Angle features also have very strong effects. We tried making a lot of hand-crafted features (a sketch of how such angles can be computed from the xyz coordinates follows below).
- bond angle
- plain angle
- dihedral angle with index0/1
- stats of dihedral angle with index0, 3-atoms-away from index0</p>
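<p>For reference, this is a generic numpy sketch of how bond and dihedral angles can be computed from the raw xyz coordinates (not the exact feature code used here):</p>
<pre><code>import numpy as np

def bond_angle(p0, p1, p2):
    # Angle at p1 formed by the bonds p1-p0 and p1-p2, in radians.
    v0, v2 = p0 - p1, p2 - p1
    cos = np.dot(v0, v2) / (np.linalg.norm(v0) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def dihedral_angle(p0, p1, p2, p3):
    # Dihedral angle along the chain p0-p1-p2-p3, in radians.
    b0, b1, b2 = p0 - p1, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    return np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w))
</code></pre>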
<h3>atom type</h3>
<p>We use atom type(‘H’,‘C’,‘N’,‘O’) as features instead of their electronegativity. Using atom type not only of index0/1 but of some substituents and neibor atoms, we tried to describe the structures around the atoms involved in the coupling.</p>
<h3>partial charge</h3>
<p>Partial charge features played an important role. We mainly made them using <code>GetPartialCharges</code> in Openbabel. Some important features are below.
- the partial charge value itself of index0/1
- the diff/ratio of the partial charges of index0/1</p>
<h3>characteristics of substituents</h3>
<p>According to this document (<a href="https://www.ucl.ac.uk/nmr/NMR_lecture_notes/L3_3_97_web.pdf">https://www.ucl.ac.uk/nmr/NMR_lecture_notes/L3_3_97_web.pdf</a>), it seems effective to make features describing α,β,γ-substituents.</p>
<h3>the hybridization of atoms</h3>
<p>This feature also comes from Openbabel.</p>
<h3>ACSF features</h3>
<p>This local descriptor works well. These features derive from the chemical library Dscribe (<a href="https://singroup.github.io/dscribe/tutorials/acsf.html">https://singroup.github.io/dscribe/tutorials/acsf.html</a>). We didn’t tune g2,4_params because we didn’t have enough time, so these features may have more room for improvement.</p>
<h3>Fingerprint</h3>
<p>To describe the characteristics of the molecules, chemists seem to use molecular fingerprints. We made some features using the Morgan fingerprint (a very large number of bits) and the MACCS fingerprint (167 bits). They are global descriptors, and they help a little.</p>
<h3>topological data analysis</h3>
<p>Maximum and minimum radius of persistent homology with <a href="https://ripser.scikit-tda.org/">ripser</a>.</p>
<h3>Bond ring feature</h3>
<p>Circle size of bond ring created with networkx.</p>
<h3>PCA feature</h3>
<p>Explained variance PCA feature calculated based on the xyz positions, aiming to represent the global shape of a molecule.</p>
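<p>A minimal sketch of this kind of feature (not the exact code used here), assuming every molecule has at least three atoms:</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA

def molecule_shape_features(xyz):
    # xyz: (n_atoms, 3) coordinates of one molecule. The explained-variance ratios
    # of a 3-component PCA roughly describe whether the molecule is elongated,
    # planar or globular.
    pca = PCA(n_components=3).fit(xyz)
    return pca.explained_variance_ratio_
</code></pre>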
<h1>Team member</h1>
<p><a href="/yoichi7yamakawa">@yoichi7yamakawa</a> & <a href="/kenmatsu4">@kenmatsu4</a> </p> |
Freesound Audio Tagging 2019 | 2nd place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the competition and congratulations to all participants! Here is a summary of our solution.
code: <a href="https://github.com/qrfaction/1st-Freesound-Audio-Tagging-2019">https://github.com/qrfaction/1st-Freesound-Audio-Tagging-2019</a></p>
<h3>Solution</h3>
<p>single model CV: 0.89763
ensemble CV: 0.9108</p>
<h3>feature engineering</h3>
<ul>
<li>log mel (441,64) (time,mels)</li>
<li>global feature (128,12) (Split the clip evenly, and create 12 features for each frame. local cv +0.005)</li>
<li>length</li>
</ul>
<p><code>
import numpy as np

def get_global_feat(x, num_steps):
    stride = len(x) / num_steps
    ts = []
    for s in range(num_steps):
        i = s * stride
        wl = max(0, int(i - stride / 2))
        wr = int(i + 1.5 * stride)
        local_x = x[wl:wr]
        percent_feat = np.percentile(local_x, [0, 1, 25, 30, 50, 60, 75, 99, 100]).tolist()
        range_feat = local_x.max() - local_x.min()
        ts.append([np.mean(local_x), np.std(local_x), range_feat] + percent_feat)
    ts = np.array(ts)
    assert ts.shape == (128, 12), (len(x), ts.shape)
    return ts
</code></p>
<h3>preprocess</h3>
<ul>
<li>audio clips are first trimmed of leading and trailing silence</li>
<li>random select a 5s clip from audio clip</li>
</ul>
<h3>model</h3>
<p>For details, please refer to code/models.py</p>
<ul>
<li>Melspectrogram layer (code from kapre; we use it to search the log mel hyperparameters end to end)</li>
<li>Our main model is a 9-layer CNN. In this competition, we consider that the two axes of the log mel feature have different physical meanings, so the max pooling and average pooling in the model are replaced by max pooling along one axis and average pooling along the other. (Our local CV gained a lot from it, but the exact number is forgotten.)</li>
<li>global pooling: pixelshuffle + max pooling on the time axis + average pooling on the mel axis.</li>
<li>se block (several of our models use se block)</li>
<li>highway + 1*1 conv (several of our models use se block)</li>
<li>label smoothing</li>
</ul>
<p><code>
# log mel layer
x_mel = Melspectrogram(n_dft=1024, n_hop=cfg.stride, input_shape=(1, K.int_shape(x_in)[1]),
                       # n_hop -> stride n_dft kernel_size
                       padding='same', sr=44100, n_mels=64,
                       power_melgram=2, return_decibel_melgram=True,
                       trainable_fb=False, trainable_kernel=False,
                       image_data_format='channels_last', trainable=False)(x)

# pooling mode
x = AveragePooling2D(pool_size=(pool_size1, 1), padding='same', strides=(stride, 1))(x)
x = MaxPool2D(pool_size=(1, pool_size2), padding='same', strides=(1, stride))(x)

# model head
def pixelShuffle(x):
    _, h, w, c = K.int_shape(x)
    bs = K.shape(x)[0]
    assert w % 2 == 0
    x = K.reshape(x, (bs, h, w // 2, c * 2))
    # assert h % 2 == 0
    # x = K.permute_dimensions(x, (0, 2, 1, 3))
    # x = K.reshape(x, (bs, w // 2, h // 2, c * 4))
    # x = K.permute_dimensions(x, (0, 2, 1, 3))
    return x

x = Lambda(pixelShuffle)(x)
x = Lambda(lambda x: K.max(x, axis=1))(x)
x = Lambda(lambda x: K.mean(x, axis=1))(x)
</code></p>
<h3>data augmentation</h3>
<ul>
<li>mixup (local cv +0.002, lb +0.008; see the sketch below)</li>
<li>random select 5s clip + random padding</li>
<li>3TTA</li>
</ul>
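<p>A minimal mixup sketch for reference (the exact alpha/distribution used is not restated here, so treat the numbers as placeholders):</p>
<pre><code>import numpy as np

def mixup_batch(x, y, alpha=1.0):
    # Standard mixup on a batch of log-mel inputs x and multi-hot labels y.
    lam = np.random.beta(alpha, alpha)
    idx = np.random.permutation(len(x))
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]
</code></pre>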
<h3>pretrain</h3>
<ul>
<li>train a model only on train_noisy as pretrained model</li>
</ul>
<h3>ensemble</h3>
<p>For details, please refer to code/ensemble.py</p>
<ul>
<li>We use a NN for stacking, which uses LocallyConnected1D to learn the ensemble weights of each class, then a fully connected layer to learn label correlations, with some initialization and weight constraint tricks.</li>
</ul>
<p><code>
def stacker(cfg, n):
    def kinit(shape, name=None):
        value = np.zeros(shape)
        value[:, -1] = 1
        return K.variable(value, name=name)

    x_in = Input((80, n))
    x = x_in
    # x = Lambda(lambda x: 1.5*x)(x)
    x = LocallyConnected1D(1, 1, kernel_initializer=kinit, kernel_constraint=normNorm(1), use_bias=False)(x)
    x = Flatten()(x)
    x = Dense(80, use_bias=False, kernel_initializer=Identity(1))(x)
    x = Lambda(lambda x: (x - 1.6))(x)
    x = Activation('tanh')(x)
    x = Lambda(lambda x: (x + 1) * 0.5)(x)
    model = Model(inputs=x_in, outputs=x)
    model.compile(
        loss='binary_crossentropy',
        optimizer=Nadam(lr=cfg.lr),
    )
    return model
</code></p>
Instant Gratification | How I dropped from first place to 63 place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The kernel is here: <a href="https://www.kaggle.com/kongliangyu/public-first-place-and-private-60-place">https://www.kaggle.com/kongliangyu/public-first-place-and-private-60-place</a>
fun time anyway :)</p> |
Instant Gratification | 20th position kernel -- LB 0.97572 (private) 0.97438 (public) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to all the people who shared kernels/ideas during this competition and after it ended. I started after QDA and pseudo-labeling had been shared (i.e. the public LB 0.969 kernel). Once I came across the post where <code>make_classification</code> was mentioned as the data generator, I thought of GMM and that was my first kernel. My main idea was to use GMM to cluster the data (train and test data) and then use the probabilities from GMM as additional features to pass to QDA. I played briefly with different <code>n_components</code> for GMM and optimized that for each of the 512 models using information criteria. Then my best CV came from brute force: simply concatenating features from multiple GMMs with <code>n_components</code> from 2 to 9 (these should have been even numbers!!). I also played around with different regularization parameters for QDA. I tried pseudo-labeling but obviously it was reinforcing the bad labels, so I used bagging to try to eliminate the effects of the flipped labels on QDA.</p>
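<p>A minimal sketch of that core idea for a single wheezy-copper-turtle-magic subset (variable names and the exact <code>n_components</code>/<code>reg_param</code> values are illustrative, not the kernel's exact code):</p>
<pre><code>import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# X_train / X_test hold the informative columns of one subset, y_train its labels (assumed).
X_all = np.vstack([X_train, X_test])
probs = GaussianMixture(n_components=6, random_state=0).fit(X_all).predict_proba(X_all)

X_tr = np.hstack([X_train, probs[:len(X_train)]])
X_te = np.hstack([X_test, probs[len(X_train):]])

clf = QuadraticDiscriminantAnalysis(reg_param=0.5).fit(X_tr, y_train)
test_preds = clf.predict_proba(X_te)[:, 1]
</code></pre>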
<p><a href="https://www.kaggle.com/ggrizzly/model-gmm-v4-even-clusters?scriptVersionId=16002313">Here is the link to the kernel (version 5)</a></p>
<p>One question is still on my mind: what would be a correct CV scheme for this kind of problem, where the test data is always available for clustering and/or dimension reduction? A pure approach where each fold's data is removed before GMM does not correspond to the true deployment phase, as the test data is available. A similar dilemma probably exists in all semi-supervised learning approaches.</p>
<p>Instead of mentioning my best private LB, I am proud to say that all my submissions scored > 0.97524 in the private LB except the first submission !!</p> |
Predicting Molecular Properties | #4 Solution [Hyperspatial Engineers] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone,</p>
<p>First I'd like to thank the organizers for the well organized competition, but I'd also like to thank other competitors for making things fun, your great scores pushed us to do much more and get much better scores than we have originally though we could achieve.</p>
<p>Here we'd like to share our solution which is also based on a Graph Transformer network, though with a few twists.</p>
<h3>Input data</h3>
<p>We have used OpenBabel to infer bonds from atom coordinates, and have used some custom code to fix what we identified as mistakes made by OpenBabel.</p>
<p>We construct a graph where each node is an atom, each bond becomes an edge, then we add artificial edges between all nodes at distance 2 and 3 in the graph (2J and 3J edges respectively).
All edges are directional, so we have one edge for each direction to make the graph fully bidirectional.</p>
<p><strong>Attributes</strong>:
- Nodes have atom type embedding, electronegativity, first ionization energy, electron affinity, Mulliken charge taken from the QM9 data (originally predicted using the same model architecture trained on the provided Mulliken charges)
- All edges have distance and edge type embedding (single bond, double bond, triple bond, 2J, 3J)
- 2J edges have bond angle on the atom they are skipping over.
- 3J edges have dihedral angle between the atoms they are connecting</p>
<p>No explicit XYZ data is used, as we wanted to make the model rotation and translation invariant.</p>
<p>All input data was normalized to zero mean unit variance.</p>
<h3>Model</h3>
<p>Core of the architecture is the graph attention network with multiple heads, with a few twists:
- Attention heads do not attend to all data from previous layer, but only the output of the same head from previous layer
- All edge embeddings are first updated from triplets (src, edge, dst), the attention then updates atom embeddings by aggregating over edge embeddings, not neighboring node embeddings
- We use gated residual connections between attention blocks similar to: <a href="https://arxiv.org/abs/1805.10988">https://arxiv.org/abs/1805.10988</a>
- We output scalar coupling constants directly on the edges, this makes it two predictions, one for each edge direction and each is treated independently in the loss function. These two predictions are averaged to get a final prediction making this a kind of micro-ensemble.</p>
<p>Loss function is a mean of MAEs per coupling type. We have tried mean log MAE but it was giving us worse results. We have used both scaled targets to zero mean unit variance and zero mean targets with no variance scaling, as different types benefited from different setups.</p>
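<p>For clarity, the "mean of MAEs per coupling type" objective can be written as the following PyTorch sketch (illustrative only, not the exact training code):</p>
<pre><code>import torch

def mean_mae_per_type(pred, target, type_idx, n_types=8):
    # type_idx holds an integer coupling-type id for every predicted edge.
    losses = []
    for t in range(n_types):
        mask = type_idx == t
        if mask.any():
            losses.append((pred[mask] - target[mask]).abs().mean())
    return torch.stack(losses).mean()
</code></pre>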
<p>Quickest way to outline the network would be using code:</p>
<p>```python
emb = 48
heads = 24
bias = False

def AttnBlock(in_emb, out_emb):
    return nn.Sequential(
        EdgeLinear(in_emb, out_emb),
        NodeLinear(in_emb, out_emb),
        GraphLambda(lambda x: x.view(x.shape[0], heads, -1)),
        TripletCat(out='triplet'),
        MagicAttn(emb, 3 * emb, heads, attn_key='triplet'),
        TripletMultiLinear(emb, emb, emb, heads, bias=bias),
        GraphLambda(torch.nn.LayerNorm(heads * emb))
    )

net = nn.Sequential(
    Embed(emb, emb),
    AttnBlock(emb, emb * heads), GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads), GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads), GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads), GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads), GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads), GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads), GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads), GraphLambda(nn.PReLU()),
    EdgeLinear(emb * heads, 512, bias=True), GraphLambda(nn.PReLU(), node_key=None),
    EdgeLinear(512, 8, bias=True)
)
```</p>
<h3>Optimizer</h3>
<p>We have used the LAMB optimizer (<a href="https://arxiv.org/abs/1904.00962">https://arxiv.org/abs/1904.00962</a>), again with a small twist: we noticed that the weight decay is included in the step norm on which the trust ratio is calculated. This didn't make sense to us, as weight decay should be independent of the update based on the batch gradient, so we moved the application of weight decay to after the LAMB update, and this gave us better results. We call this LAMBW.</p>
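<p>Conceptually, the change amounts to applying decoupled weight decay after the optimizer step, along these lines (a sketch only; <code>lamb_update</code> is a hypothetical helper standing for the trust-ratio-scaled step computed from the gradient alone):</p>
<pre><code>import torch

# For every parameter: take the LAMB step first, then decay the weights, so the
# decay term never enters the step norm used for the trust ratio.
with torch.no_grad():
    for p in model.parameters():
        if p.grad is None:
            continue
        p.add_(lamb_update(p), alpha=-lr)   # LAMB step from the gradient only (hypothetical helper)
        p.mul_(1.0 - lr * weight_decay)     # weight decay applied afterwards
</code></pre>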
<h3>Training regime</h3>
<p>We have split the data into 90/10 train/eval, two times with two different splits. We have used a one-cycle learning rate for 30 epochs with high weight decay, then dropped weight decay and continued training until eval saturation (~70 more epochs). Then we fine-tuned 100 epochs for each type to get further improvement (except for *JHN types, as they didn't improve with fine tuning).</p>
<p>We have used Stochastic Weight Averaging (<a href="https://arxiv.org/abs/1803.05407">https://arxiv.org/abs/1803.05407</a>) of last 25 epoch to get to a better minimum.</p>
<p>We then average the predictions of the two training runs for the two splits we had.</p>
<h3>What didn't work</h3>
<p>Many, many things:
- RAdam
- combining RAdam and LAMBW
- many different forms of attention mechanisms
- mean log MAE loss, mean MSE loss
- GradNorm for multitask learning for different coupling types <a href="https://arxiv.org/abs/1711.02257">https://arxiv.org/abs/1711.02257</a>
- Multi-Task Learning as Multi-Objective Optimization <a href="https://arxiv.org/abs/1810.04650">https://arxiv.org/abs/1810.04650</a>
And many many other things</p>
<p>The full package with transformed data, additional data (and all code, which is also included in champs_code.tgz) is 7GB, so we are providing a link <a href="https://zenodo.org/record/3406154#.XXpX_ygzabg">https://zenodo.org/record/3406154#.XXpX_ygzabg</a> instead of uploading directly to Kaggle forum.</p> |
Predicting Molecular Properties | #2 solution 🤖 Quantum Uncertainty 🤖 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We want to thank Kaggle and CHAMPs for organizing such an awesome competition: </p>
<ul>
<li>No leakage.</li>
<li>Same distribution in train, private and public test (very stable CV vs. LB, public LB vs. private LB). This is very relevant: in other competitions, with so many participants, when the 3 distributions differ some winners (not all, e.g. <a href="https://www.kaggle.com/c/microsoft-malware-prediction/discussion/84069#latest-499864">CPMP's approach to manually align test ~ train in the Microsoft malware comp</a>) are just lucky to fit the private test distribution by chance; not the case here. </li>
<li>Inspiring and useful science problem hopefully used for good purposes as described in the context (new drugs, etc.).</li>
</ul>
<p>Neither my teammate <a href="/pavelgonchar">@pavelgonchar</a> nor I had any previous domain expertise and we made the decision early on that we would tackle this problem using a very pure deep learning way: letting the model build the features for us, not the other way around (b/c obviously we were at a disadvantage if we tried to become quantum experts in 1 month… hence our team name 🤖Quantum Uncertainty🤖 we didn't know if our yet-to-be-developed approach was going to work).</p>
<p>Our solution had two major parts: 1) the input representation and 2) deep learning architecture.</p>
<p><strong>Input representation</strong></p>
<p>This is in our opinion the key part: we take a molecule and a source atom and move it so the source atom is @ (0,0,0). For each molecule we create N molecule siblings (N being as many source atoms are defined for that molecule), and each molecule sibling is translated so its source is at (0,0,0). </p>
<p>The <code>x</code> (input) are three arrays of dimension 29 (maximum number of atoms): </p>
<p>1) <code>x,y,z</code> position of each atom,
2) <code>atom type</code> index (C=0, H=1, etc…)
3) <code>j-coupling type</code> index (1JHC=0, 2JHH=1, etc.)</p>
<p>Padding is done by placing -1 in <code>atom type</code> index and <code>j-coupling type</code> for molecules which have less than 29 atoms.</p>
<p>The <code>y</code> (ground truth) is just an array of dimension 29 containing j-couplings located at target atom indices.</p>
<p>Note that there is no graph information nor any other manually engineered features.</p>
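<p>A rough sketch of how such a padded, source-centred sample can be built (an illustration only; the atom-type mapping and helper names are assumptions, not the actual code):</p>
<pre><code>import numpy as np

MAX_ATOMS = 29
ATOM_TYPES = {'C': 0, 'H': 1, 'N': 2, 'O': 3, 'F': 4}   # assumed mapping

def encode_molecule(xyz, symbols, pairs):
    # xyz: (n_atoms, 3); symbols: element symbol per atom;
    # pairs: list of (source_idx, target_idx, type_idx, coupling) sharing one source atom.
    pos = np.zeros((MAX_ATOMS, 3), dtype=np.float32)
    atom_type = np.full(MAX_ATOMS, -1, dtype=np.int64)     # -1 marks padding
    pair_type = np.full(MAX_ATOMS, -1, dtype=np.int64)
    y = np.zeros(MAX_ATOMS, dtype=np.float32)

    src = pairs[0][0]
    pos[:len(xyz)] = xyz - xyz[src]                        # move the source atom to (0, 0, 0)
    atom_type[:len(xyz)] = [ATOM_TYPES[s] for s in symbols]
    for _, tgt, t_idx, scc in pairs:
        pair_type[tgt] = t_idx
        y[tgt] = scc
    return pos, atom_type, pair_type, y
</code></pre>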
<p><strong>Data augmentation</strong></p>
<p>We did two types of data augmentation:
- Rotations: which worked and were useful in our first attempt model: pointnet-based, but proved worthless in the final models (atomic transformer).
- J-coupling symmetry: as described <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/94706#latest-563148">in this discussion</a></p>
<p><strong>First attempt: Pointnet-inspired architecture (got up to -2.28200 LB)</strong></p>
<p>Our input representation is basically a point cloud: an unordered set of elements with absolute positions <code>x,y,z</code> and two attributes <code>atom type</code> and <code>j-coupling type</code>. We modified the <a href="https://arxiv.org/abs/1612.00593">Pointnet</a> architecture to regress j-couplings. Training was a bit unstable and we tried many variations of the architecture, swapping FC layers by linear (fixed) projections (Hadamard), adding coulomb matrix as input, etc. </p>
<p>While this worked OK and got us to -2.28200 LB (ensembling a few models) we felt that this architecture was limited by the extreme pooling/bottleneck operation so we decided to explore other architectures: meet the Atomic Transformer.</p>
<p><strong>Final architecture: meet the Atomic Transformer</strong></p>
<p>You may know that the recent NLP revolution is mostly due to the transformer architecture described in the <a href="https://arxiv.org/abs/1706.03762">Attention is all you need paper</a>. The vanilla transformer architecture uses a very clever technique to add positional encodings that are needed for position-dependent input, such as language. </p>
<p>Our input representation is a set, which means we can (and should) remove positional encoding. Prior to this competition we had no experience with transformers either but there's a section in <a href="https://www.youtube.com/watch?v=Kedt2or9xlo">Lex Fridman MIT podcast interviewing Orion Vinyals</a> where he mentions the inherent position invariance of a barebone transformer encoder layer. This immediately triggered the idea of using transformer layers (encoders) stacked taking as an input <code>x,y,z</code> (normalized but otherwise as-is), and <code>atom type</code> and <code>j-coupling type</code> embeddings; just concatenated… nothing fancy. </p>
<p>The dimension of the embeddings was such that the total dimension of the input vectors was <code>d_model</code> (as normally reference in transformer literature). We started with 256 and got immediately great results surpassing our pointnet-inspired architecture so we followed this path.</p>
<p>We trained a total of 14 models, with varying dimensions from 512 to 2048 and layers from 6 to 24. Each model parameter size ranged from ~12M to ~100M (biggest model).</p>
<p>We trained some models from scratch, others we fine-tuned. We also fine-tuned a few models on the troublesome j-couplings: reaching -2.12 CV on 1JHC and -2.19 CV on 1JHN.</p>
<p>Our best score is an ensemble of 14 models achieving private LB of -3.22349, and our best single model achieved private LB of -3.16234, again just with <code>x,y,z</code>, <code>atom type</code> and <code>j-coupling type</code> inputs (no QM9, etc.).</p>
<p><strong>What didn't work</strong></p>
<p>Many things! We tried:</p>
<ul>
<li>Multi-task learning using contributions and other organization provided values.</li>
<li>Dropout: We tried multiple attempts to add dropout at various stages (embeddings, encoder layers, pre-decoder, etc.). None of them worked.</li>
<li>Knock-out: We added a variation in which as input we deleted 10% of the input atoms, the idea being that the model would build an internal representation of the missing atoms. Surprisingly this worked in that the model still converged nicely but failed to reduce train ~ val gap.</li>
<li>Rotations and TTA in Atomic Transformer: it didn't reduce train ~ val gap and didn't produce meaningful TTA gains.</li>
<li>Deep decoder: Our decoder is just a projection of ~ the model dimension to 1 (scalar coupling). We tried adding more expressive power to the decoder but this didn't help.</li>
<li>Fp16 training. This worked for models of dimensions 256 but as training evolved gave <code>NaN</code>s despite numerous attempts to fix it.</li>
</ul>
<p><strong>Source code</strong></p>
<p>We will make source code available once we do clean up. It's a single jupyter notebook using FastAI. Be patient.</p>
<p><strong>Computational resources</strong></p>
<p>We had more ideas than computational resources, even if our computational resources were not tiny:
- 3 x 2080 Ti + 128 Gb RAM + 16c32t processor
- 2 x 1080 Ti + 64 Gb RAM + 8c16t processor
- Rented 8+ 2080 Ti + 64 Gb RAM + 16c32t processor (multiple machines rented as needed)</p>
<p><strong>Final thoughts</strong></p>
<p>This was our most fun and hardest competition so far:
- Challenging problem
- Most teams in top 5 had domain experts (although we went <em>domainless</em> as part of our strategy)
- Hungry computational resources.</p>
<p>Even if we lost the #1 position just a few hours before the competition end, we feel very excited we were able to achieve such <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/98375#latest-569312">useful results for the organizers</a>. In retrospect we believe a single model of the Atomic Transformer may achieve even better results with further training.</p>
<p>Best - Pavel & Andres</p>
<p>p.s. No graph NNs. We though graphs as manually engineered features that the model can infer by itself.</p> |
Predicting Molecular Properties | 20th place solution, MPNN part | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>So the main inspiration for our best solution came mostly from two papers: the <a href="https://arxiv.org/abs/1704.01212">Gilmer/Brain </a> paper, as implemented in my <a href="https://www.kaggle.com/fnands/1-mpnn">public kernel</a>, and then <a href="https://dl4physicalsciences.github.io/files/nips_dlps_2017_29.pdf">Neural Message Passing for Jet Physics.</a>. </p>
<p>From the latter paper I took the idea of adding an extra step of not only updating the node values at each step, but also updating the adjacency matrix at each step, as in this algorithm from the above mentioned paper: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F314368%2Fcabe0b91e3333064b4d81ed5984ef351%2FNMPAlgo1.png?generation=1567064230681357&alt=media" alt=""></p>
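<p>In code, that extra step amounts to recomputing the edge (adjacency) features from the incident node pair before aggregating messages. A simplified TF2 sketch of one such layer (an illustration only, not the competition code):</p>
<pre><code>import tensorflow as tf

class EdgeNodeUpdate(tf.keras.layers.Layer):
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = tf.keras.layers.Dense(dim, activation="relu")
        self.node_mlp = tf.keras.layers.Dense(dim, activation="relu")

    def call(self, nodes, edges, adj):
        # nodes: (B, N, F), edges: (B, N, N, F), adj: (B, N, N, 1) binary mask.
        n = tf.shape(nodes)[1]
        hi = tf.tile(nodes[:, :, None, :], [1, 1, n, 1])     # sender features
        hj = tf.tile(nodes[:, None, :, :], [1, n, 1, 1])     # receiver features
        edges = self.edge_mlp(tf.concat([hi, hj, edges], axis=-1))    # edge update
        messages = tf.reduce_sum(edges * adj, axis=2)                 # aggregate over neighbours
        nodes = self.node_mlp(tf.concat([nodes, messages], axis=-1))  # node update
        return nodes, edges
</code></pre>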
<p>Also, from <a href="https://arxiv.org/abs/1805.10988">this paper</a>, of which <a href="/jaechang">@jaechang</a> is a co-author, I added skip connections between layers, which allowed us to go deeper (although we saw no improvement above 7 layers). </p>
<p>The message passing functions and update functions are different from what was done in my public kernel, and for those interested in the details, please have a look at <a href="https://www.kaggle.com/fnands/mpnn-skip-2">this kernel</a> (and <a href="https://www.kaggle.com/fnands/internalmakegraphinput/output">pre-processing kernel</a> )
Of course our solution used 4/5 fold CV, and a bit of stacking, but the main workhorse of out model is described in the kernel. If you have any questions about the model please ask and I will clarify. I did it in TF2.0 as one of my goals for this Kaggle was to get familiar with the new version. </p>
<p>My gut feeling is that this model has too many parameters and there is a simpler model out there that will do a better job. Any comments/suggestions/ideas are greatly appreciated. </p>
<p>On a personal note, this has been the first Kaggle in which I have really competed and I had a great time! Thanks to everyone who made kernels, participated in discussions and who were just cool people in general.
I learned a lot in this competition and hope to apply what I have learned in the future. </p>
<p>Specifically, thanks to my teammates <a href="/kazanova">@kazanova</a> , <a href="/rohanrao">@rohanrao</a> , <a href="/supreethmanyam">@supreethmanyam</a> and <a href="/ogrellier">@ogrellier</a> who were great and really helped to squeeze the most out of our solution. They were all great teammates and I can highly recommend working with them!</p>
<p>Now, I'm going to go read up on what the winners did... 🤓 </p> |
Open Images 2019 - Object Detection | Code to ensemble boxes for object detection | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I made a release of Python module for ensembling boxes for Object Detection models. It includes very promicing Weighted Boxes Fusion (WBF) method, which we used instead of NMS in 2019 and 2018 competitions.</p>
<p><a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion">https://github.com/ZFTurbo/Weighted-Boxes-Fusion</a></p>
<p>Repository contains the following methods:
- Non-maximum Suppression (NMS)
- Soft-NMS
- Non-maximum weighted (NMW)
- Weighted boxes fusion (WBF)</p>
<p>Installation available using pip: <code>pip install ensemble-boxes</code></p>
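<p>A small usage example of the WBF function (boxes in normalized [0, 1] x1, y1, x2, y2 format, one list per model; the exact signature may differ slightly between versions, so check the repository README):</p>
<pre><code>from ensemble_boxes import weighted_boxes_fusion

# Dummy predictions from two models for one image.
boxes_list  = [[[0.10, 0.10, 0.50, 0.50]], [[0.12, 0.08, 0.52, 0.48]]]
scores_list = [[0.9], [0.8]]
labels_list = [[1], [1]]

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list, weights=[1, 1], iou_thr=0.55, skip_box_thr=0.0
)
</code></pre>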
<p>It will be interesting to hear if WBF works better for other teams too. )</p> |
Open Images 2019 - Object Detection | 6th place solution [0.6023 private LB] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the competition organizers and all the competitors!
This was my first Kaggle competition and I really had a great time here 😀 </p>
<p>Here's my brief solution writeup:</p>
<h2>1. Dataset</h2>
<ul>
<li>No external dataset.
I only use FAIR's ImageNet pretrained weights for initialization, as I have described in the Official External Data Thread.</li>
<li>Class balancing.
For each class, images are sampled so that the probability of having at least one instance of the class is equal across the 500 classes. For example, a model encounters very rare 'pressure cooker' images with a probability of 1/500. For non-rare classes, the number of images is limited.</li>
</ul>
<h2>2. Models</h2>
<p>The baseline model is Feature Pyramid Network with ResNeXt152 backbone.
Modulated deformable convolution layers are introduced in the backbone network.
The model and training pipeline are developed based on the maskrcnn-benchmark repo.</p>
<h2>3. Training</h2>
<ul>
<li>Single GPU training.
The training conditions are optimized for single GPU (V100) training.
The baseline model has been trained for 3 million iterations and cosine decay is scheduled for the last 1.2 million iterations. Batch size is 1 (!) and loss is accumulated for 4 batches.</li>
<li>Parent class expansion.
The models are trained with the ground truth boxes without parent class expansion. Parent boxes are added after inference, which achieves empirically better AP than multi-class training.</li>
<li>Mini-validation.
A subset of validation dataset consisting of 5,700 images is used. Validation is performed every 0.2 million iterations using an instance with K80 GPU.</li>
</ul>
<h2>4. Ensembling</h2>
<ul>
<li>Ensembling eight models.
Eight models with different image sampling seeds and different model conditions (ResNeXt 152 / 101, with and without DCN) are chosen and ensembled (after NMS).</li>
<li>Final NMS.
NMS is performed again on the ensembled bounding boxes, class by class. The IoU threshold of NMS has been chosen carefully so that the resulting AP is maximized. Scores of box pairs with higher overlap than the threshold are added together (see the sketch after this list).</li>
<li>Results.
Model Ensembling improved private LB score from 0.56369 (single model) to 0.60231.</li>
</ul> |
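<p>The score-summing NMS step can be sketched as follows (illustrative numpy applied per class, not the code used here):</p>
<pre><code>import numpy as np

def nms_score_sum(boxes, scores, iou_thr=0.5):
    # Greedy NMS where the scores of suppressed overlapping boxes are added to the
    # kept box. boxes: (N, 4) array of x1, y1, x2, y2; scores: (N,).
    order = np.argsort(scores)[::-1]
    boxes, scores = boxes[order].astype(float), scores[order].astype(float)
    keep_b, keep_s = [], []
    while len(boxes):
        b = boxes[0]
        x1 = np.maximum(b[0], boxes[1:, 0]); y1 = np.maximum(b[1], boxes[1:, 1])
        x2 = np.minimum(b[2], boxes[1:, 2]); y2 = np.minimum(b[3], boxes[1:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (areas[0] + areas[1:] - inter)
        merged = iou > iou_thr
        keep_b.append(b)
        keep_s.append(scores[0] + scores[1:][merged].sum())
        boxes, scores = boxes[1:][~merged], scores[1:][~merged]
    return np.array(keep_b), np.array(keep_s)
</code></pre>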
Open Images 2019 - Visual Relationship | brief summary of 2nd place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Visual Relationship
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners, and thanks to competition organizers for this interesting competition again.</p>
<p>I used almost same architecture as <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64651">last year</a>.
So I'll summarize only difference.</p>
<h1>Model 1: object detection</h1>
<p>I made cascade-rcnn model using <a href="https://github.com/open-mmlab/mmdetection">mmdetection</a>.</p>
<p>mAP for the 57 classes improved a lot, by more than 0.1 compared to last year's YOLO model.</p>
<h1>Model 2: visual relationship</h1>
<h2>2-1: relation 'is'</h2>
<p>I made 3 models for this part, and then made an ensemble of them.</p>
<h3>2-1-1: relation 'is' (2 stage model)</h3>
<p>This is the model I used for relation 'is' last year.</p>
<h3>2-1-2: relation 'is' (1 stage model)</h3>
<p>I made a cascade-rcnn model which detects 42 'is-relation' classes.
This model is almost the same as <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64642">'toshif' explained last year</a>.</p>
<h3>2-1-3: relation 'is' (1 stage model with material head)</h3>
<p>I added a 'material' detection head to cascade-rcnn.
This model predicts bounding box, class and material at the same time.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F548996%2Fbb42658ab15675df44a4ab36aff46630%2Fmaterial_head.png?generation=1570245068374646&alt=media" alt=""></p>
<p>Results:</p>
<p>|model |public |private |
|---|---|---|
|2-1-1 |0.07523 |0.07264 |
|2-1-2 |0.08332 |0.08075 |
|2-1-3 |0.08191 |0.07948 |
|ensemble |0.08514 |0.08232 |</p>
<p>I expected 2-1-3 to have better score...</p>
<h2>2-2: Triplet Relationships</h2>
<p>The base model is almost the same as I shared <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64651#380288">here</a> for this part.</p>
<p>I made expert models which are only in charge of small-sample classes, and made an ensemble of them with a weighted average of their probabilities.</p>
<p>This is the result AP for validation data:</p>
<p>|class |ground truth BB |predicted BB |
|---|---|---|
|at |93% |31% |
|on |92% |32% |
|holds |89% |54% |
|plays |94% |58% |
|interacts with |82% |45% |
|inside of |72% |37% |
|wears |94% |55% |
|hits |55% |57% |
|under |50% |20% |
|mAP without hits/under |88% |45% |
|mAP |80% |43% |</p>
<p>For ground truth BB pairs, this relationship prediction model has very high accuracy.
mAP without hits/under, which have very few samples, is 88%!</p>
<h1>Model 3: Final Score Prediction</h1>
<p>I did not use LightGBM for this part.
I just used a simple formula.</p>
<p><code>Final Score = Object1Score x Object2Score x RelationshipScore</code></p>
<p>This year, my LB score improved to 0.38818 from last year's score of 0.23709.
Most of this improvement comes from the improved object detection.</p>
<p>It seems that good object detection is the most important part of this competition.</p>
<p>BTW, I became GM as of this competition.
I'd like to thank my previous teammates.
I learned lots of things from them and could not have become GM without them.
Thank you, Carl, Little Boat, KazAnova, Ahmet, Kohei-san, Akiyama-san, owruby!</p> |
Instant Gratification | 0.975+ Private Solution (72nd) with a dumb way to incorporate GMM cluster membership | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank Kaggle for organizing this fun competition to beta test the new Kernels-only format. IMO Kaggle did a great job making this "synthetic" and "trial" competition still fun and exciting.</p>
<p>I joined this competition late, after the community had already reverse engineered how the dataset was generated (using <code>sklearn.datasets.make_classification</code>).
I created my own synthetic dataset locally using <code>make_classification</code> with <code>flip_y=0.05</code> and the same number of <code>n_informative</code> per magic split as with the original. I noticed that my local CV scores were too high compared to those of the original dataset, so I started tweaking the parameters of <code>make_classification</code> and figured out that <code>n_clusters_per_class=3</code> causes my local CV scores to closely agree with those of the original.
Based on that, I used <code>sklearn.mixture.GaussianMixture</code> with <code>n_components=6</code> and <code>max_iter=10000</code> to figure out the cluster membership for the training set. Like other competitors, I initially tried to use this info to map the membership back to the training set target, but my initial experiments did not give great results. Instead, I simply concatenated the cluster membership to the training data (a mindless approach - basically it was just a single, extra int input column ranging from 0 to 5 along with the original data with unmodified target) and trained QDA. Still, this simple modification gave me a boost from 0.97002 Public LB to 0.97327. To improve my Public LB to 0.97426, I did several rounds of outlier removal based on model predictions and also averaging predictions based on models trained via different random seeds. My best Public LB 0.97426 gave me 0.97518 Private LB. It turns out that my best Private LB was 0.97529 (0.97424 Public LB) but I did not end up submitting it; this would have only given me one extra position in the Private LB, so I don't feel too unlucky. And I feel good about making the 0.975+ club, though I was at pretty much the bottom of this club :)</p>
<p>All in all, I had way more fun and learned more than I initially thought I would!
Hats off to the community who collectively did a lot of great detective work and openly shared their findings... especially @cdeotte who was clearly the MVP of this competition in my mind!</p>
Instant Gratification | 1st Place Solution Sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It is really a surprise to me that we finished first, a bit of luck lol</p>
<p>I looked into the problem in the last few days and the sharing really helps a lot, especially Chris's <a href="https://www.kaggle.com/cdeotte/pseudo-labeling-qda-0-969">post</a> explaining QDA and Pseudo Labeling, and Infinitewing's <a href="https://www.kaggle.com/infinitewing/create-validation-datas-by-make-classification">post</a> explaining how many clusters are in each class. Many thanks!</p>
<p>Our solution is kind of simple and direct,</p>
<h3>Step 1</h3>
<p>use QDA to get cv predictions</p>
<h3>Step 2</h3>
<p>add pseudo labels to the testing data, and flip y for training samples whose labels do not match the predictions, to generate new training data</p>
<h3>Step 3</h3>
<p>use GMM to divide each batch to 6 clusters, with a simple method to evaluate the clustering performance, and transform clustering labels to class labels</p>
<h3>Step 4</h3>
<p>repeat step 2 and step 3 once.</p>
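<p>Steps 2-3 boil down to something like the following sketch for one magic subset (thresholds and the cluster-quality check are omitted; variable names are placeholders, not the full code linked above):</p>
<pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a 6-component GMM on train + test of one subset, then map each cluster to the
# class that dominates it among the (pseudo-)labelled rows.
gmm = GaussianMixture(n_components=6, random_state=0).fit(np.vstack([X_train, X_test]))
tr_clusters = gmm.predict(X_train)
cluster_to_class = {
    c: int(y_train[tr_clusters == c].mean() > 0.5) if (tr_clusters == c).any() else 0
    for c in range(6)
}
test_preds = np.array([cluster_to_class[c] for c in gmm.predict(X_test)])
</code></pre>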
<p>We didn't do much tuning except for some threshold selections and got 0.9754 for CV, 0.9744 for public LB and 0.97598 for private LB.</p>
<p>The full code is <a href="https://www.kaggle.com/infinite/v2-all-gmm">here</a>.</p>
<p>So just keep it simple and happy kaggling. :)</p> |
Predicting Molecular Properties | 14th place solution (GCN from scratch in Kaggle kernel) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>For this competition, I implemented a GCN using keras.</p>
<p><strong>Input features</strong>: </p>
<p>1/ Atom types (one-hot encoding)
2/ Bond adjacency
3/ Bond types (one-hot encoding, generated using Open Babel)
4/ Bond vectors, normalized
5/ Inverse distance between all atom pairs, together with powers up to 4
6/ Cosine of bond angles for pairs of atoms 2 apart (for which we have to predict the coupling)
7/ Cosine of dihedral angles for pairs of atoms 3 apart (for which we have to predict the coupling)
8/ A flag on atoms whose dihedral contribution come from more than one path because they are part of a loop (in which case the max dihedral angle is retained, which is probably suboptimal)</p>
<p>Most features were created "offline", and pickled in a compact form to avoid exhausting the Kaggle instance memory. The data is "decompressed" per-batch by a keras generator object. All the information regarding pairs of atoms is input in a 29x29 tensor, where 29 is the maximum number of atoms per molecule. Entries for non-existing atoms are just zero'd, and masks are used where necessary. </p>
<p><strong>Data augmentation</strong>:</p>
<p>Random rotations to the bond vectors, to enforce independence from the reference frame. </p>
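<p>One common way to draw such rotations, shown here only as an illustrative sketch (not necessarily the script's exact implementation):</p>
<pre><code>import numpy as np

def random_rotation_matrix():
    # Haar-uniform random 3D rotation via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(np.random.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))      # fix column signs to make the decomposition unique
    if np.linalg.det(q) < 0:      # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q

# bond_vectors: (n_pairs, 3) array (assumed); rotate them each time a batch is generated.
rotated = bond_vectors @ random_rotation_matrix().T
</code></pre>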
<p><strong>Network structure</strong>:</p>
<p>The network consists of 4 sections.</p>
<p>1/ The vertex (atoms) and edge (atoms x atoms) information is processed by 4 layers of width 1 convolutions (i.e., the same fully connected network is applied to each vertex independently, and similarly for the edges).
2/ 6 graph convolution layers are applied (transforming the vertex data, but leaving the edge data constant).
3/ A special layer passes the information from the vertices to the edges.
4/ 4 layers of width 1 convolutions are applied to the edges, to produce the magnetic coupling predictions.</p>
<p>Originally I only had 2 (body), 3 (head) and 4 (tentacles) :)</p>
<p><strong>Extra data</strong>:</p>
<p>The extra training data not present on the test data was used as follows.</p>
<p>1/ Mulliken charges and shielding tensor eigenvalues were predicted by two width 1 convolutions from the output of the 3rd GC layer, and the predictions input into the 4th GC layer.
2/ The magnetic contributions are predicted all together at the output of the network and summed to provide the final predictions.</p>
<p>Unfortunately, only the eigenvalues of the shielding tensor can be used because the tensors are not provided in the correct reference frame.</p>
<p><strong>Regularization</strong>:</p>
<p>All layers are followed by ReLU nonlinearities, and most preceded by a Batch Normalization layer.</p>
<p><strong>Result</strong>:</p>
<p>This produces a score of -2.6 after about 27 hours runtime of the script (kernel) (3 times 9 hours continuous runs, with weights saved in-between). Averaging the predictions from about 10 runs yield the -2.9 final score.</p>
<p><strong>Potential improvement</strong>:</p>
<p>I was working on implementing a GCN with operations equivariant to rotations (as in <a href="https://arxiv.org/abs/1802.08219">https://arxiv.org/abs/1802.08219</a> and <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106275#">https://www.kaggle.com/c/champs-scalar-coupling/discussion/106275#</a>), but didn't have enough time to finish.</p>
<p><strong>Script</strong>:
<a href="https://www.kaggle.com/cedben/squid?scriptVersionId=19764703">https://www.kaggle.com/cedben/squid?scriptVersionId=19764703</a></p>
<p>[edited: network description]</p> |
Open Images 2019 - Visual Relationship | 13th place solution summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Visual Relationship
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all. Congratulations to the winners, and thank you Google AI for hosting this interesting competition again this year.</p>
<p>Like most of previous years solutions, I split the problem into two parts, <code>non-is</code> and <code>is</code> relationships as they have quite different characteristics.</p>
<h3>1. non-is relationship</h3>
<p>For this task, I focused on the relationship between two objects such as <code>Man on Chair</code>, <code>Cat under Table</code>.
My approach has two steps: detect objects, and then find relationships for every possible triplet.</p>
<p><strong>Object Detection</strong></p>
<p>There are 57 objects such as <code>Man</code>, <code>Oven</code> to be a part of triplets. I used cascade-rcnn from <a href="https://github.com/open-mmlab/mmdetection/tree/master/mmdet">mmdet</a> for these 57 classes with a bit of modification such as adding test time augumentations.</p>
<p><a href="https://github.com/appian42/kaggle/blob/master/openimages/cascade-rcnn.conf">This</a> is the .conf file I feeded to mmdet. Noticeble changes from default parametersz are </p>
<ul>
<li>2 more anchor boxes in RPN to capture high aspect ratio objects.</li>
<li>CosineAnnealing instead of StepLR.</li>
<li>score threshold 0.0001 for RCNN instead of 0.05.</li>
<li>NMS treshold 0.4 instead of 0.5.</li>
<li>max_per_image 400 instead of 100.</li>
</ul>
<p>I under-sampled frequent classes such as Man, Woman, Chair and used only 150,000 images to shorten the train time at the cost of accuracy.</p>
<p><strong>triplet relationships</strong></p>
<p>There are 287 triplet relationships.
I took a similar approach to <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64630">anokas's solution</a> last year as it's extremely simple and easy to implement.</p>
<p>Some changes I made were:</p>
<ul>
<li>Merged similar triplets into the same class based on some engineered geometric features such as IoU and IoF (see the sketch after this list), so that rare triplets can be trained thanks to more frequent triplets.</li>
<li>After merging, there are 90 classes out of 287 triplet relationships, and 4-fold LightGBM models were separately trained for each of these classes.</li>
</ul>
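<p>For reference, the kind of box-pair features meant above can be computed like this (an illustrative sketch, not the actual feature code):</p>
<pre><code>import numpy as np

def pair_features(a, b):
    # a, b: boxes as (x1, y1, x2, y2). Returns IoU and IoF (intersection over the
    # area of each individual box) for a candidate (subject, object) pair.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area_a + area_b - inter)
    return iou, inter / area_a, inter / area_b
</code></pre>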
<p>Averaged AUC is 0.9641. After separating merged classes into original classes, averaged AUC is 0.9623.</p>
<p><strong>Submission</strong></p>
<ul>
<li>0.31692 (public)</li>
<li>0.24060 (private)</li>
</ul>
<h3>2. is relationship</h3>
<p><strong>Object detection</strong></p>
<p>I used cascade-rcnn to directly detect attributed objects such as <code>Table Wooden</code>, <code>Bench Plastic</code>. There are 42 classes for this task. This approach is similar to <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64642">toshif's solution</a> last year but I failed to make the model as good as he did. </p>
<p><strong>Submission</strong></p>
<ul>
<li>0.07346 (public)</li>
<li>0.07130 (private)</li>
</ul>
<h3>3. submissions combined</h3>
<p>I just put them together.</p>
<ul>
<li>0.38469 (public)</li>
<li>0.30781 (private)</li>
</ul>
<h3>4. Possible improvements</h3>
<ul>
<li>Use full image size.</li>
<li>Use all dataset.</li>
<li>Use external dataset (such as COCO, Objects365 if it improves any).</li>
<li>Train CNN model for triplet relationships just like <a href="https://www.kaggle.com/c/google-ai-open-images-visual-relationship-track/discussion/64651">tito's solution</a> last year. Some triplet relationships such as <code>Man holds Violin</code>, <code>Man plays Violin</code> are hard to differentiate because simple features such as IoU, IoF could not really capture the difference but CNN could.</li>
</ul>
<p>Thanks for reading!</p> |
Predicting Molecular Properties | NN and LGB solution for top 5% (and some cool graphs) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi guys, I created two kernels explaining how we used LGB and NN models to get a silver medal in this competition. I know it is not as cool as all the graph NNs, but might be nice for someone :)</p>
<p>1) <a href="https://www.kaggle.com/felipemello/features-for-top-5-lb-with-nn-or-lgb">Features for top 5% LB with NN or LGB</a> shows how we calculated the features we used. The main one that wasn't shared publicly is calculating force resultants along the axis from atom0 to atom1. Features based on this principle were the most important to my model. Here is one:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2689207%2F7af99ef7b86015faae42eca6866673ac%2Fforce_resultant.png?generation=1567102497246402&alt=media" alt=""></p>
<p>2) <a href="https://www.kaggle.com/felipemello/nn-and-lgb-tricks-and-pipeline-for-top-5-lb">NN and LGB tricks and pipeline for top 5% LB </a> describes the pipeline used. There were a few tricks that greatly boosted our score, like using <a href="https://www.kaggle.com/speedwagon/permutation-importance">permutation importance</a> to select features and transforming the distances for LGB into Yukawa distances (exp(-dist)/dist).</p>
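<p>The distance transform itself is a one-liner; a hedged sketch with illustrative column names and an assumed <code>train</code> dataframe:</p>
<pre><code>import numpy as np

# Replace raw distance columns by a Yukawa-potential-like transform before LightGBM.
for col in [c for c in train.columns if c.startswith("dist")]:
    train[col] = np.exp(-train[col]) / train[col]
</code></pre>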
<p>We also added some features extracted from a NN layer as features to the LGB, and they look awesome. It always impress me how beautiful neural networks are. Here is an example:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2689207%2F75927fece7206a6c97c0279d8e938dcb%2Ffeat_nn_6.png?generation=1567102931116829&alt=media" alt=""></p>
<p>It took a while to upload everything and detail the code, so, if you enjoy reading it somehow, consider upvoting as a reward for my time :)</p>
<p>Hope you guys enjoy it. Happy Kaggling.</p> |
Instant Gratification | Instant solution summary, 3th place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In the spirit of the competition I prepared this summary before the competition ended, so it is an instant (well, almost) solution summary. I hope my final placing will not be too embarrassing.</p>
<p>My final model <a href="https://www.kaggle.com/zaharch/instant-success-gmm-3th-place">kernel is here</a>. It is a Gaussian Mixture model with several tweaks to accommodate the make_classification() special structure. In general, the GMM here works completely unsupervised on both train and test data, and the train set labels are used to assign each cluster to a 0/1 class.</p>
<p>I started from <a href="https://www.kaggle.com/christofhenkel/graphicallasso-gaussianmixture">this awesome kernel</a> by <a href="https://www.kaggle.com/christofhenkel">@christofhenkel</a>. </p>
<ol>
<li>First I modified the M step in GMM to round the mean estimates to -1 or 1 values (just after calculating them), to align them with the hypercube vertices of make_classification, assuming class_sep equals the default value of 1 (a minimal sketch of this tweak follows below). This change alone with 3 clusters per class gave me 0.974+.</li>
</ol>
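<p>A minimal sketch of that M-step tweak, assuming sklearn's GaussianMixture (note that <code>_m_step</code> is a private method, so this relies on implementation details, and rounding only after the full M step is a simplification of what is described above):</p>
<pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

class HypercubeGMM(GaussianMixture):
    def _m_step(self, X, log_resp):
        super()._m_step(X, log_resp)
        # Snap component means to the nearest make_classification hypercube vertex.
        self.means_ = np.where(self.means_ >= 0, 1.0, -1.0)
</code></pre>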
<p>The following 3 additional changes each gave me small improvements</p>
<ol>
<li><p>Running GMM with 2, 3 and 4 clusters per class and choosing the "best performing" one. This is a weak point of my model, I tried many approaches to make this identification better but it didn't work too well. At the end, I am not even completely sure that the number of clusters per class is a random number. My best model gives [46, 367, 99] distribution for 2 to 4 clusters per class respectively. </p></li>
<li><p>Also on M step of GMM I re-weight probabilities of sample points in clusters to make total probabilities per cluster equal. This is motivated by the fact that make_classification() creates clusters with equal number of points.</p></li>
<li><p>Increased covariance regularization in GMM from reg_covar=0.001 to 0.1</p></li>
</ol>
<p>Now, I am really looking forward to see what other people have done. 80% of the work was published online, but this last 20% is still a meaningful challenge. Finally, thanks to Kaggle for giving us this beautiful puzzle :)</p>
<p>P.S. I jumped from 43 public to 3th private. Well, I didn't expect anything like that to happen. I wrote the post above before the results were known, and I publish it as it is. </p> |
Instant Gratification | 5th solution of Instant Gratification | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I have been lucky for the last few weeks: I got 5th in this competition and 40th in the LANL competition. I entered this competition last Thursday and made little progress until yesterday. I would like to give special thanks to CPCP! I wouldn't have come to the right idea without his help!
And also thanks to Chris and Dieter! I used their code template to try my own idea!</p>
<p>My model is a GMM. The trick is to have 2 Gaussian components for each class. Here is my code:</p>
<p><a href="https://www.kaggle.com/waylongo/gaussianmm">5th ranking solution</a></p> |
Open Images 2019 - Object Detection | 32nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>solution in pdf</p> |
Predicting Molecular Properties | 24th (-2.633) - haha nice - pairs of pairs of ... | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Hey everyone,</h1>
<h2>this was our initial approach:</h2>
<p>For each connection we would like to find features that describe the influence other connections have on said connection. To do this we use a neural network that looks at pairs of connections. Its inputs are simply the connection types and all six Euclidean distances. We do this for all connections with shared weights, resulting in a tensor of size (1, channels, size, size-1). We efficiently implement the network using <strong>1x1 2D-convolutions</strong> to process an entire molecule at once. Once we have calculated these features, we sum them up (see picture), concatenate them with the distances and connection types of the single connections, and use another neural network, implemented as <strong>1D-convolutions</strong> with a kernel size of one, to get the scc predictions. It is probably easier to understand with this picture:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3356694%2Fba61a244d36f37fbe8b0ddf5e05f94e5%2Fimplementation.png?generation=1567077987502246&alt=media" alt=""></p>
<p>Our solution is invariant to
- the order of connections fed to the network
- translations and orthogonal transformations</p>
<p>and</p>
<ul>
<li>it does not require any padding</li>
<li>works with differently sized molecules.</li>
</ul>
<p>Summing and weight sharing enforce additivity of pair features. If there are two connections in the vicinity of a connection that have the same physical influence (e.g. symmetric connections), their pair features should be the same! At least we hope so!</p>
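<p>A minimal PyTorch sketch of this pair-feature construction (channel counts and layer depths here are placeholders, and the diagonal self-pairs are not masked out; it only illustrates the shape bookkeeping):</p>
<pre><code>import torch
import torch.nn as nn

class PairFeatureNet(nn.Module):
    # Build features for every ordered pair of connections with a shared
    # 1x1 2D-convolution network, sum over the partner axis, then predict
    # per-connection targets with a 1x1 1D-convolution head.
    def __init__(self, in_ch=14, pair_ch=64, out_ch=1):
        super().__init__()
        self.pair_net = nn.Sequential(
            nn.Conv2d(2 * in_ch, pair_ch, kernel_size=1), nn.ReLU(),
            nn.Conv2d(pair_ch, pair_ch, kernel_size=1), nn.ReLU())
        self.head = nn.Sequential(
            nn.Conv1d(pair_ch + in_ch, pair_ch, kernel_size=1), nn.ReLU(),
            nn.Conv1d(pair_ch, out_ch, kernel_size=1))

    def forward(self, x):                            # x: (1, in_ch, n_connections)
        n = x.shape[-1]
        a = x.unsqueeze(-1).expand(-1, -1, -1, n)    # (1, C, n, n)
        b = x.unsqueeze(-2).expand(-1, -1, n, -1)    # (1, C, n, n)
        pair_feat = self.pair_net(torch.cat([a, b], dim=1)).sum(dim=-1)
        return self.head(torch.cat([pair_feat, x], dim=1))

# model = PairFeatureNet(); y = model(torch.randn(1, 14, 30))
</code></pre>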
<p>Details of training:
- L2 loss
- Adam
- no regularization/no dropout/no batchnorm
- learning rate initially 1e-4, reduced to 1e-6 in multiple steps
- 100 to 200 epochs</p>
<p>Connection types are one-hot encoded, distances are standardized and scc are standardized per type. We used up to 1024 hidden neurons in up to 5 layers and ReLU activations in both networks. Training took ~2 weeks on colab (mostly on a single K80) and gave us a score of <strong>-2.126</strong>. </p>
<p>We believe that this approach has a fundamental flaw. We only look at pairs and not at higher order interactions. What can we do? Looking at triplets is too expensive.</p>
<h2>Improved approach:</h2>
<p>Similarly to how we build a matrix of pairs from single connections, we use the pair features (after summing) of the previously explained network to build another matrix! Pairs of pairs. We also concatenate the original features to the matrix (types and distances). We again use <strong>1x1 2D-convolutions</strong> and sum it up. The final NN is given both the summed features of the pairs and pairs of pairs network. Training was done similarly to our initial approach and resulted in a score of <strong>-2.473</strong>.</p>
<p>The cool thing about this architecture is that you can repeat the process of building matrices over and over again, increasing the effective field of view. We did it one more time. Sadly we ran out of time in the end. 2 weeks of training on kaggle kernels were insufficient.</p>
<p>Our best submission is a blend of submissions (<strong>-2.633</strong>).</p>
<p>We did very little hyper parameter and architecture search and are quite surprised that it worked quite well. There are many improvements to be made.</p> |
Predicting Molecular Properties | 62nd Place Solution. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Code Specification</h1>
<p>I'll try to briefly summarize different components of my solution.</p>
<h2>Model Structure</h2>
<p>I created 3 models. ( 4th one was a <a href="https://www.kaggle.com/ioohooi/chemistry-of-best-models-1-839">public kernel </a> )
1. Catboost (LB: -1.95)
2. LightGBM (LB: -1.87)
3. GNN (LB: -2.05)</p>
<h3>Features for Catboost and LightGBM</h3>
<p><code>Bond detection</code>
Initially I inferred bonds(<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/find_edges.py">Code</a>). However, I later used <a href="http://openbabel.org/wiki/Main_Page">openbabel</a> for bond detection. </p>
<p><code>Neighbor detection</code>
I defined neighbors in 5 ways:
a. Using the <strong>edge data</strong>, I was able to define 1st neighbor atoms, 2nd neighbor atoms and so on. I was also able to figure out intermediate atoms in path from atom_index_0 to atom_index_1. (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/compute_intermediate_atoms.py#L121">Code</a>)
b. Using distance between atoms and standard bond length, neighbors were inferred which were close enough to be considered to be bonded (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/neighbor_features_distance.py#L104">Code</a>)
c. Based on atom_index. For example, neighbors of atom_index 5 will be 3,4,6,7. (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/neighbor_features_atom_index.py">Code</a>)
d. k-nearest atoms were considered as neighbors. <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/edge_features.py#L182">Code</a>
e. I divided the region around center of the bond by creating right circular cones with different base angles. Cone's axis was the bond. It gave me neighbors which were in <strong>different angular neighborhood</strong> ([0°-60°], (60°-120°) and so on). <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/conical_segmented_features.py">Code</a></p>
<p><code>Neighbor based features</code>
I created aggregates of electronegativity, valency, lone pairs, bond angles, mass and distance for the neighbors generated from the above 5 ways.</p>
<p><code>Angle based features</code>
a. Dihedral angle and some other angles were also computed for 2,3 bond apart atom pairs. (3JHH,3JHN,3JHC,2JHH,2JHN,2JHC)
b. Aggregate of angles for neighbors. </p>
<p><code>Estimation of partial charge</code>
Using electronegativity information and co-ordinates of atoms, I computed electronegativity vectors normalized by distance between the atoms. Doing a vector sum of these electronegativity vectors over bond neighbors yielded me an estimate of partial charge on each atom. <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/edge_features.py#L95">Code</a>. Later in the competition, I started to use sum of electronegativity difference normalized by distance as a measure of partial charge (no vector addition).</p>
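<p>A minimal numpy sketch of this kind of partial-charge proxy (illustrative only; the electronegativity table, coordinates and neighbor lists below are toy placeholders):</p>
<pre><code>import numpy as np

ELECTRONEGATIVITY = {'H': 2.2, 'C': 2.55, 'N': 3.04, 'O': 3.44, 'F': 3.98}

def partial_charge_proxy(symbols, xyz, bond_neighbors):
    # For each atom, sum (EN(neighbor) - EN(atom)) / distance over its bonded
    # neighbors: a crude, unitless stand-in for partial charge.
    charges = np.zeros(len(symbols))
    for i, nbrs in enumerate(bond_neighbors):
        for j in nbrs:
            d = np.linalg.norm(xyz[i] - xyz[j])
            charges[i] += (ELECTRONEGATIVITY[symbols[j]]
                           - ELECTRONEGATIVITY[symbols[i]]) / d
    return charges

# Toy C-H fragment about 1.09 Angstrom apart.
xyz = np.array([[0.0, 0.0, 0.0], [1.09, 0.0, 0.0]])
print(partial_charge_proxy(['C', 'H'], xyz, [[1], [0]]))
</code></pre>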
<p><code>Potential based features</code>
Used Yukawa and Coulomb potentials computed with the estimated charge. <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/atom_potentials.py#L14">Code</a>.</p>
<p><code>Other features</code>
I used some of the atom features generate by openbabel. Using edge data, I was able to also infer cycle length and other related features (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/cycle_features.py#L23">Code</a>). I also used bond energies as feature between bonds of the molecule(<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/bond_features.py#L31">Code</a>). Carbon hybridization was used as a feature. For 3 bond distant atom pairs which had 2 sp2 hybridized carbon atoms as intermediates, Cis/Trans configuration was used as a feature (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/intermediate_atom_features.py#L144">Code</a>). I also tried to estimate how much pi bonds have electron 'donor' tendency. (<a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/pi_donor.py">Code</a>)</p>
<h3>Features for GNN</h3>
<p>I had to recompute the edge features as they were to be computed between each possible atom pairs in a molecule. I ended up coding up a numpy heavy implementation of the top features again which I had computed for tree based models. Earlier features were computed mostly using pandas.</p>
<h1>Journey</h1>
<p>I started this competition after completing a <a href="https://www.kaggle.com/c/competitive-data-science-predict-future-sales">playground competition</a>. In the beginning I started working on it without looking at discussions and public kernels. Such arrogance :D. With time, however, improvements started saturating. Then I skimmed through the discussions ;).
<img src="https://i.chzbgr.com/full/6493143296/hA923C6DD/" alt="">
Again, after some time, progress plateaued and with all humility I started understanding different ideas present in the public domain. <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/96436#latest-565548">https://www.kaggle.com/c/champs-scalar-coupling/discussion/96436#latest-565548</a> and <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/93928#latest-588773">https://www.kaggle.com/c/champs-scalar-coupling/discussion/93928#latest-588773</a> were among the most helpful to me.
<code>When you know that something will most probably work, it is much easier to code. ;)</code>
At some point, I saw a comment stating that one should use mse as the loss instead of mae. Initially, I had tried both of them but had not seen much difference. Now, however, I got a 0.2 jump!! At this point I was around -1.5 LB.
Then I got wind of GNN from <a href="https://www.kaggle.com/fnands/1-mpnn">https://www.kaggle.com/fnands/1-mpnn</a>. It was a nice experience reading up on the GNN paper and understanding the code of the kernel. This collaborative nature of kaggle is just amazing. One can learn so much if only one is willing to look around and work. In my case, memory was the main issue with GNN. My features were not fitting into memory. I changed all features to float16 and selected the top few features. Then I started getting somewhere. 5-fold CV helped a lot to remove the jitter from the prediction. I did about 40-50 epochs in each fold with hand-curated learning rate scheduling. On average, starting from 1e-3, I reduced the LR every 8 iterations by a factor of 0.5. </p>
<p>Looking back, I loved this journey and will definitely participate in more kaggle competitions.</p>
<h1>Technical Takeaways from the competition</h1>
<ol>
<li>I learnt to use <strong>numpy</strong> much better and learnt to avoid pandas <strong>groupby, stack, unstack</strong> operations as much as possible. I was able to reduce my feature generation time about 10-50 times in different cases. When you think about it, you will be surprised how much can be done using matrix multiplications (a small sketch of the broadcasting trick follows after this list). <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/gnn_common_utils_molecule_properties.py#L75">Distances</a>, <a href="https://github.com/ashesh-0/kaggle_competitions/blob/master/predicting_molecular_properties/numpy_angle_computation.py">angles</a>, aggregated features for neighbors and much more. It started when I looked at this amazing kernel <a href="https://www.kaggle.com/cpmpml/ultra-fast-distance-matrix-computation">https://www.kaggle.com/cpmpml/ultra-fast-distance-matrix-computation</a> </li>
<li>I had a few redundant features on which unnecessary computation time was spent; the codebase got quite big and I was running into memory issues. In future, I'll try to <strong>add one feature at a time</strong>.</li>
<li>By looking at the brief summary of top solutions, I see that there was <strong>much less feature engineering</strong> for neural network based solutions. In future, I'll start on NN based models quite early into the competition.</li>
<li>It is worth the time to do KFold CV. Results improve significantly.</li>
</ol>
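<p>A small sketch of the broadcasting trick from item 1 (the same idea as the linked "ultra fast distance matrix" kernel, written here from scratch for illustration):</p>
<pre><code>import numpy as np

def distance_matrix(xyz):
    # xyz: (n_atoms, 3). Returns the (n_atoms, n_atoms) Euclidean distance
    # matrix using broadcasting instead of Python-level loops or groupby.
    diff = xyz[:, None, :] - xyz[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

xyz = np.random.rand(5, 3)
print(distance_matrix(xyz).round(3))
</code></pre>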
<p>It was a good competition. The experience taught me so many things and I'm grateful to kaggle and the competition organizers for this. I'll update it with notebooks soon.</p>
Predicting Molecular Properties | 15th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First off, I would like to thank Kaggle, the host, and my team <a href="/senkin13">@senkin13</a> , <a href="/alexpartisan">@alexpartisan</a> , <a href="/chenlongwang">@chenlongwang</a> . It has been an interesting journey for me and certainly very rewarding to get my first gold. And congrats to all who have worked hard for this competition!</p>
<p>I was mainly working on GNN while my teammates have worked on NN/lgb/feature engineering/stacking. So I will be focusing on GNN and leave the rest to them. </p>
<p>I mostly used <a href="https://github.com/rusty1s/pytorch_geometric">pytorch_geometric</a> for GNN, which is an amazing library. Network structures that I tried but that failed to work include NNConv (<a href="https://arxiv.org/abs/1704.02901">https://arxiv.org/abs/1704.02901</a>) and SchNet (<a href="https://arxiv.org/abs/1706.08566">https://arxiv.org/abs/1706.08566</a>). I guess the reason is that those networks do not update edge information, while this competition aims to predict edge attributes. I tried various ways (concat/add/multiply) to combine node information from the last layer into edges to make predictions, but they did not work well either.</p>
<p>Our major breakthrough is based on Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals (<a href="https://arxiv.org/pdf/1812.05055.pdf">https://arxiv.org/pdf/1812.05055.pdf</a>), in which both edge and node information are updated together. On top of the edge prediction, I add an MLP per type to make the final prediction. </p>
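<p>To make the idea concrete, here is a minimal, framework-free PyTorch sketch of one edge-plus-node update block (dimensions and the exact update rules are simplified placeholders, not the model described above):</p>
<pre><code>import torch
import torch.nn as nn

class EdgeNodeBlock(nn.Module):
    # One message-passing block: each edge is updated from (source, target,
    # edge) features, then each node aggregates its incoming edges. The
    # scalar coupling target is read out from the edge representation.
    def __init__(self, node_dim=64, edge_dim=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, edge_dim), nn.ReLU(),
            nn.Linear(edge_dim, edge_dim))
        self.node_mlp = nn.Sequential(
            nn.Linear(node_dim + edge_dim, node_dim), nn.ReLU(),
            nn.Linear(node_dim, node_dim))

    def forward(self, x, edge_index, edge_attr):
        src, dst = edge_index                       # each of shape (n_edges,)
        e = self.edge_mlp(torch.cat([x[src], x[dst], edge_attr], dim=-1))
        agg = torch.zeros(x.size(0), e.size(1), device=x.device)
        agg.index_add_(0, dst, e)                   # sum incoming edge messages
        x = self.node_mlp(torch.cat([x, agg], dim=-1))
        return x, e
</code></pre>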
<p>As for training, we tried both training each type separately and training all types together, and found that training all types together gives more stable results. We also tried fine-tuning each type after training all types together first, but we found that it overfits very fast and leads to worse results.</p>
<p>One question I was never able to answer: there are two types of graph, one based on chemical bonds and one based on the couplings we seek to predict. While they share the same nodes, the edge connections are very different. Not having any background knowledge in this domain, we were not sure which one to train the GNN on and left it to trial and error. Maybe someone with domain knowledge could shed some light on this.</p>
<p>Again, congrats to all winners; I look forward to their solutions.</p>
<p>Best,</p>
<p>Zhenlan,</p> |
Predicting Molecular Properties | 5th place solution : DL guys | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Thanks</h1>
<p>First of all, a big Thank You to the organizers for this outstanding competition. This competition is very challenging and interesting in many points of view.</p>
<p>Secondly, congratulations to all teams which completed this competition, whether you are in the medals or not. We learned a lot from your solutions and your discussions. We didn’t have any expertise in either chemistry or graph neural nets before this competition, so this experience has been very enlightening for the 3 of us.</p>
<p>Also big kudos to the top 4 teams, your usage of Transformers is quite eye-opening (honestly we did think about it but were not audacious/confident enough to test it out). Maybe this competition will open a new paradigm of Deep Learning for molecular properties - Transformer is all you need :D</p>
<p>A last thank you to my teammates and coworkers Lam Dang and Thanh Tu Nguyen :) It was very fun competing with you.</p>
<h1>Solution</h1>
<p>Without further ado, here is a highlight of our solution:
- On a macro level our best submission is a 2-layer stacking:
- The base level consists of different variants of the general Graph Neural Net with edge, node and global representations (cf. Architectures below)
- It was implemented with pytorch and pytorch_geometric.
- The 2nd level is a meta model trained on our validation set of 5000 molecules: 1 linear stacking model and 1 LGBM (cf. Stacking section below)
- The final submission is a blend of the 2 meta models</p>
<h2>Architecture:</h2>
<p>The final architecture is based on the paper <a href="https://arxiv.org/abs/1812.05055">https://arxiv.org/abs/1812.05055</a>.
We tried different variations to improve this architecture, here is a summary of what worked and what didn’t work:
- Normalization: We found that LayerNorm worked better than BatchNorm for this data and helped improve convergence
- Softplus vs ReLU: Softplus did provide a ~ 0.1 boost of logMAE for our models vs a ReLU baseline
- Edge to node message gating: We found that adding some gating mechanism to the edge representation before the scatter_mean (see torch_geometric) for the node update helps
- Edge to edge convolution: Guillaume implemented something that seemed to work very well. He noticed after a feature importance test that the most important feature was by far the angle between an edge and the edge with the closest atom to the first edge. To integrate this angle feature for more than the closest edge, we updated each edge with a convolution of the edge in question and its neighboring edges in the graph (more specifically the neighboring edges that chemically connect two atoms), putting the angle of the edge vectors into this convolution. This architecture tweak made our architecture 5 times slower but gave us a 0.15 improvement compared to the best model without it.
- 1 prediction tail per type: All types share a GNN “body”, but we found that having a different MLP for each type helps (a minimal sketch of this is given right after this list).
- In some variants, before feeding into output MLP layers, we pool all the edges and nodes in the chemical bond path from atom_0 to atom_1. It seems to have helped in the beginning of the competition but our best model did not use it.
- For our architectures, we found that having a representation of the link between atom_0 and atom_1 is important. Also including the global representation as inputs of the top layers is important</p>
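<p>A minimal sketch of the "1 prediction tail per type" idea mentioned above (a shared edge representation followed by one small MLP per coupling type; the dimensions are illustrative):</p>
<pre><code>import torch
import torch.nn as nn

COUPLING_TYPES = ['1JHC', '1JHN', '2JHH', '2JHC', '2JHN', '3JHH', '3JHC', '3JHN']

class PerTypeHeads(nn.Module):
    # One small MLP per coupling type on top of a shared edge representation.
    def __init__(self, edge_dim=128, hidden=64):
        super().__init__()
        self.heads = nn.ModuleDict({
            t: nn.Sequential(nn.Linear(edge_dim, hidden), nn.ReLU(),
                             nn.Linear(hidden, 1))
            for t in COUPLING_TYPES})

    def forward(self, edge_repr, type_names):
        # edge_repr: (n_pairs, edge_dim); type_names: sequence of type strings.
        out = torch.zeros(edge_repr.size(0), 1, device=edge_repr.device)
        for t, head in self.heads.items():
            mask = torch.tensor([name == t for name in type_names],
                                device=edge_repr.device)
            if mask.any():
                out[mask] = head(edge_repr[mask])
        return out
</code></pre>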
<h2>Stacking:</h2>
<ul>
<li>Our single best model is the one with edge to edge convolution which gives us -2.9. But we have various models around -2.7 which are the variants of it. By stacking all (20 models) we got -3.13 on LB. </li>
<li>Another thing we found out on the last day that helped improve our score from -3.13 to -3.15 was adding checkpoints of our models to the stacking pipeline. So finally, we have 50 predictions for stacking.</li>
</ul>
<p>Our final result is a blend of LGBM and HuberRegressor.
- LGBM: 20 GroupKFold on all bond types together
- HuberRegressor: 20 GroupKFold on every bond type separately. </p>
<h2>Computation:</h2>
<p>We have :
- 1 GTX 1080ti x 2 months + 1 RTX 2080ti x 1 month
- 1 RTX 2080ti x 3 month
- 1 V100 x 2 month (rented) + 4 V100 x 2 weeks (rented)</p> |
Predicting Molecular Properties | 42nd Solution, explanations and apologies | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to apologize for what happened last week, the leaked -2.163 submission ... and also to thank my teammates and <a href="/inversion">@inversion</a> for understanding the situation and reacting quickly.</p>
<p>It's surreal, I know. Days later I read the posts, and many people didn't understand why it had happened or what reasons I would have had to do it. It's very surreal; I am sure I'm the first kaggler to leak a top-50 solution while in the bathroom pooping (not a joke). Now when I remember it I laugh, but at that moment I felt really, really bad. I submitted, went to the bathroom, came back and saw my teammates desperately calling me...</p>
<p>I write this post in order to prove our solution is legit, explain it and show our code.</p>
<p>I didn't hide my mistake and communicated it as quickly as I could to the community and administrators, and I pointed out who had used it and forked it.</p>
<p>I don't know what happened that night, I think I edited a kernel that was already public, or because I am used to sharing kernels I didn't read the message when I invited my mates and clicked to accept... I don't know.</p>
<p>A lot of people shared different opinions, and I thought about it... I remember the post <a href="https://www.kaggle.com/product-feedback/83351">Disallow Deleting Topics</a> by <a href="/brandenkmurray">@brandenkmurray</a>: there are a lot of deleted posts in these forums, and using posts you can share almost the same information as kernels (code, pictures, descriptions etc.) ... (it's just an opinion; I think it's also a big problem, and I thought about it after reading all the messages).
I resent the fact that many people think I leaked it on purpose or that it's a case of private sharing, because I hate private sharing and believe in fair competitions, but at the same time I understand those people, because this is surreal.</p>
<p>I just wanted to apologize to the community, the hosts and my teammates... and share my point of view.
I hope you have obtained great results, or at least results in line with the time you dedicated to the competition, and I also hope that you have not been affected by the leak.</p>
<p>best,
Marcos.</p>
<hr>
<h3>Base</h3>
<p>The -2.163 submission was a simple median of 4 files, the names of the files were self-descriptive, like <em>lgbm1945.csv</em>, the models were :</p>
<p>| model | LB |
|---|---|
| lgbm | -1.945 |
| mpnn | -1.884 |
| semGCN | -1.830 |</p>
<p><strong>Why didn't we open-source the solution?</strong>
It was a team decision; I would have liked to, but we had 3 big problems:
1. We couldn't reproduce the results</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2779868%2F2818de1aed80ee79651f428d131e7c53%2FScreenshot%20from%202019-08-29%2000-11-39.png?generation=1567030811963445&alt=media" alt=""></p>
<p>The people who forked the kernel can check that we used those subs for stacking, but we made those subs 1 month ago... (I wasn't even a member of the team then); we couldn't reproduce those results, and semgcn is a python script (no checkpoints, no github, just a local run).
2. I read about that idea on Friday (the next day, in the afternoon) and I was very busy; I couldn't prepare all the code and continue the competition... and then there were all the problems with kaggle kernels, plus we couldn't reproduce the results.
3. If we open-sourced the solution, people would use the submission without running the code, using the excuse: no no, we generated this file using the open-source solution (at least that's what we thought)</p>
<h3>Features</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2779868%2Fc7a27bb40ebfeb48f2f68ddd76e57f97%2FScreenshot_from_2019-08-25_09-42-58.png?generation=1567037089996198&alt=media" alt=""></p>
<p>I read the posts and papers, watched some youtube videos, and used my knowledge of biochemistry and bioinformatics.
- Distances and Angles (bond angles and dihedral angles). The C-C bond distance is very important.
- Bond type
- Number of atoms, number of neighbors of each ...
- Hybridization (aromatization, atom_type, electronegativity, valences); in theory these are equivalent.
> C3 = carbon sp3 = more electronegative than carbon2</p>
<p>Important <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/104241#latest-606224">Is 1JHC really one class</a>
> Giba: The two groups are easily splited setting a threshold in 1J coupling distance to 1.065.</p>
<p>The reason is this:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2779868%2F4cef58f29e1062c9271e1b17e9bc80f9%2FScreenshot_from_2019-08-25_09-43-22.png?generation=1567037089834080&alt=media" alt=""></p>
<p>So we started to count those substituents using this <a href="http://rdkit.org/docs_temp/source/rdkit.Chem.Fragments.html">rdkit.Chem.Fragments</a> (Number of aromatic nitrogens, Number of carboxylic acids etc)</p>
<h3>Things that didn't work</h3>
<ul>
<li><p><strong>SchNet</strong> I improved this amazing baseline and we could get -1.61 with only 25 epochs, but we faced 2 problems:</p>
<ol><li>Time: it needs long epochs, and we only had 1 kaggle gpu (MPNN was running on GCP already)</li>
<li>Initialization, random_state, newbie error... the results vary greatly depending on the initial configuration and we forgot to set the random seed.</li></ol></li>
<li><p><strong>Karplus Equation</strong></p></li>
</ul>
<p>Bothner-By equation:
J(HH) = 7 - cos Θ + 5 cos 2Θ</p>
<p>where Θ is the torsion angle ... the problem wasn't obtaining those angles; the problem was that I checked openbabel and RDKit and the angles were not well calculated! For example, take CH4 (the 1st molecule).
Even the bond angles: I tried with water H2O and instead of 104.5 I obtained random results like 122, 97...
I read about it <a href="https://www.rdkit.org/docs/GettingStartedInPython.html">here</a>; in the case of RDKit, it uses an algorithm based on distance geometry for conforming molecules from 3D (xyz), probably that's the reason :(</p>
<p>> Note that the conformations that result from this procedure tend to be fairly ugly. They should be cleaned up using a force field. This can be done within the RDKit using its implementation of the Universal Force Field (UFF).</p>
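<p>For reference, the torsion angle itself can be computed directly from the xyz coordinates, with no conformer generation at all; a minimal numpy sketch (illustrative only, not taken from the pipeline):</p>
<pre><code>import numpy as np

def dihedral(p0, p1, p2, p3):
    # Torsion angle (in degrees) defined by four atom positions, computed
    # straight from the xyz coordinates -- no conformer generation involved.
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1    # component of b0 orthogonal to b1
    w = b2 - np.dot(b2, b1) * b1    # component of b2 orthogonal to b1
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

# Toy example of a trans (180 degree) arrangement.
print(dihedral(np.array([0., 1., 0.]), np.array([0., 0., 0.]),
               np.array([1., 0., 0.]), np.array([1., -1., 0.])))
</code></pre>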
<h3>Final submission and postprocessing</h3>
<p>| model | LB |
|---|---|
| lgbm | -2.103 |
| mpnn | -2.24 |
| NN | -2.11 |</p>
<p>By the way, we just realized this (private scores):</p>
<p>| single model | LB private |
|---|---|
| lgbm | -2.252 |
| NN | -2.134 |</p>
<p>Median stacking + <strong>Lasso</strong> regression in order to smooth/round the predictions</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2779868%2F642260d3a87b56078ba639a7a6cbf216%2FScreenshot%20from%202019-08-29%2002-11-15.png?generation=1567037508316091&alt=media" alt=""></p>
<h3>Models and Code</h3>
<ul>
<li>We used the thread <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/93972#latest-602628">Which graph CNN is the best (with starter kit at LB -1.469)?</a> as baseline for the GNN, all credits to <a href="/hengck23">@hengck23</a> (<strong>note:</strong> we did several changes).</li>
<li>NN based on <a href="https://www.kaggle.com/xwxw2929/keras-neural-net-and-distance-features">Keras Neural Net and Distance Features</a> by <a href="/xwxw2929">@xwxw2929</a> </li>
<li>LGBM based on <a href="https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481">Distance - is all you need. LB -1.481</a> by <a href="/criskiev">@criskiev</a> </li>
</ul>
<p>other important kernels:
- <a href="https://www.kaggle.com/toshik/schnet-starter-kit">SchNet Starter Kit</a> by <a href="/toshik">@toshik</a>
- <a href="https://www.kaggle.com/sunhwan/using-rdkit-for-atomic-feature-and-visualization">Using RDKit for Atomic Feature and Visualization</a> by <a href="/sunhwan">@sunhwan</a>
- <a href="https://www.kaggle.com/jmtest/molecule-with-openbabel">Molecule with OpenBabel</a> by <a href="/jmtest">@jmtest</a> </p>
<p><strong>Code and datasets</strong></p>
<ul>
<li><p><a href="https://github.com/mv-lab/CHAMPS-Kaggle19">github</a></p></li>
<li><p>Our single model LGBM has private score -2.242
<a href="https://www.kaggle.com/jesucristo/single-lgbm-2-242-top54">Single LGBM -2.242 = Top54 </a></p></li>
<li><p><a href="https://www.kaggle.com/jesucristo/mol-features">dataset with features</a></p></li>
<li><p><a href="https://www.kaggle.com/jesucristo/molsubs">dataset with submissions</a></p></li>
</ul> |
SIIM-ACR Pneumothorax Segmentation | 5th place solution && code updated | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all !
My solution is based on semi-supervision, and I added a 2-class (pneumothorax / non-pneumothorax) classification head to the network.</p>
<p>Network : Unet with Aspp
Backbone : se50 & se101
Image size: (1024, 1024)
Optimizer : Adam
Loss : 1024 * BCE(results, masks) + BCE(cls, cls_target)
Semi-supervision: mean-teacher[1-2] with NIH Dataset (0.874 ---> 0.880)</p>
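<p>A minimal sketch of the mean-teacher ingredient referenced above (the teacher is an exponential moving average of the student, and a consistency penalty is added on unlabeled NIH images; the alpha value and loss form here are illustrative, see [1-2] for the original recipe):</p>
<pre><code>import torch

def update_teacher(student, teacher, alpha=0.99):
    # Teacher weights are an exponential moving average of the student's.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def consistency_loss(student_logits, teacher_logits):
    # Penalize student/teacher disagreement on (differently augmented)
    # unlabeled images; teacher predictions act as fixed soft targets.
    return torch.mean((torch.sigmoid(student_logits)
                       - torch.sigmoid(teacher_logits).detach()) ** 2)
</code></pre>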
<p>Scores:
stage1 0.8821
stage2 0.8643</p>
<p>[1] <a href="https://github.com/CuriousAI/mean-teacher">https://github.com/CuriousAI/mean-teacher</a>
[2] <a href="https://arxiv.org/pdf/1703.01780.pdf">https://arxiv.org/pdf/1703.01780.pdf</a></p>
<p>code : <a href="https://github.com/earhian/SIIM-ACR-Pneumothorax-Segmentation-5th">https://github.com/earhian/SIIM-ACR-Pneumothorax-Segmentation-5th</a></p> |
SIIM-ACR Pneumothorax Segmentation | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all winners. Many thanks to Kaggle and the organizers for holding this amazing competition. And special thanks to my teammates.</p>
<p>Our solution has two parts, classification and segmentation.</p>
<p><strong>Classification:</strong>
This part is used to classify whether an image is related to pneumothorax or not.
Our model is a multi-task model based on UNet with a classification branch.
Data: all data
Cls loss: BCE + focal loss
Seg loss: BCE
Augmentation: hflip, scale, rotate, bright, blur
Backbone: seresnext 50, seresnext101, efficientnet-b3
Ensemble: stacking</p>
<p><strong>Segmentation:</strong>
There are two models used to segment, unet and deeplabv3.
Data: data with pneumothorax
Loss: dice loss
Augmentation: same as classification
Backbone: seresnext50, seresnext101, efficientnet-b3, efficientnet-b5
Ensemble: average</p>
<p>I’m cleaning source code, and will update it soon.</p>
<p>update
code: <a href="https://github.com/yelanlan/Pneumothorax-Segmentation-2nd-place-solution">https://github.com/yelanlan/Pneumothorax-Segmentation-2nd-place-solution</a></p> |
SIIM-ACR Pneumothorax Segmentation | 12th place solution: UNet with mask scoring head | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations everyone!
And thank you to my teammates <a href="/masaakikob">@masaakikob</a> and <a href="/card7077">@card7077</a>!</p>
<p>I found something interesting and would like to share it. <br>
The final submission was made only from my models.</p>
<h1>1. My strategy</h1>
<ul>
<li>Predict Segmentation
<ul><li>img size 896</li></ul></li>
<li>Predict Class
<ul><li>img size 768</li></ul></li>
<li>Integrate Predictions
<ul><li>Remove low probability masks</li></ul></li>
</ul>
<h1>2. Model & Dataset Overview</h1>
<h2>2.1 Data flow</h2>
<p>MS-EUNet means "Mask Scoring UNet with EfifcientNet B4" in this term. <br>
TTA is horizontal flip.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2Fabc6c556d74d9c3ba141bcea3d3afc01%2F20190905_SIIM_13th_solution.png?generation=1567763118431030&alt=media" alt=""></p>
<h2>2.2 About Dataset</h2>
<p>| Dataset | Description |
|:--|:--|
| Stage1 Train | Kaggle stage 1 train set |
| Stage1 Test | Kaggle stage 1 test set |
| Stage2 Train | Kaggle stage 2 train set(= Stage 1 Train + Stage1 Test) |
| CheXpert | External Data. Removed ambigous labels. All image: 210k, pneumothorax images: 17k |
| CheXpert_1 | Sampled from CheXpert, 50k, pneumothorax : not pneumothorax = 1 : 2 |
| CheXpert_2 | Sampled from CheXpert, 68k, pneumothorax : not pneumothorax = 1 : 3 |
| DatasetSeg_1 | Only pneumothorax data in "Stage1 Train", 3k |
| DatasetSeg_2 | Only pneumothorax data in "Stage2 Train", 3k |
| DatasetCls_1 | Stage1 Train + CheXpert_1, 67k |
| DatasetCls_2 | Stage1 Train + CheXpert_2, 67k |
| DatasetCls_all | Stage1 Train + CheXpert, 220k |</p>
<h2>2.3 Which model was trained on which dataset</h2>
<p>| Model | Used Dataset |
|:--|:--|
| SEResNext(Model1) | DatasetCls_1 |
| SEResNext(Model2) | DatasetCls_2 |
| LightGBM | DatasetCls_all |
| MS-EUNet | DatasetSeg_2 |
| MS-EUNet(with ASPP) | DatasetSeg_1 |</p>
<h1>3. Segmentation</h1>
<h2>3.1 Mask Scoring Head(little different from original)</h2>
<ul>
<li>This is based on Mask Scoring R-CNN (MS R-CNN)
<ul><li><a href="http://openaccess.thecvf.com/content_CVPR_2019/papers/Huang_Mask_Scoring_R-CNN_CVPR_2019_paper.pdf">http://openaccess.thecvf.com/content_CVPR_2019/papers/Huang_Mask_Scoring_R-CNN_CVPR_2019_paper.pdf</a></li></ul></li>
<li>I was interested in applying this idea to a UNet architecture</li>
<li>I ran repeated experiments and, as a result, found it to be useful for my model</li>
</ul>
<pre><code>class UEfficientNetB4WithMaskScoreHead(nn.Module):
    def __init__(self, dropout_rate=0.3, input_size=256):
        super(UEfficientNetB4WithMaskScoreHead, self).__init__()
        #############
        # ~~~~~~~~~ #
        #############
        kwargs = {'mask_ch': 1, 'num_classes': 1, 'mask_feature_size': 32}  # 32 is better than 56
        self.mask_iou_head = ROIMaskScoreHead(in_feature_ch=in_feature_ch, **kwargs)

    def forward(self, x):
        #############
        # ~~~~~~~~~ #
        #############
        deconv1 = self.decoder2(upconv2)
        upconv1 = torch.cat([deconv1, conv1], 1)  # size: input_size//2
        upconv1 = self.dropout(upconv1)
        deconv0 = self.decoder0(upconv1)
        upconv0 = self.dropout0(deconv0)
        output_mask = self.output_layer(upconv0)
        # Mask score calc
        output_score = self.mask_iou_head(upconv1, output_mask)
        return output_mask, output_score
</code></pre>
<h2>3.2 Loss function for Mask Scoring UNet</h2>
<pre><code>class BCEDiceLoss(nn.Module):
    """
    Loss defined as alpha * BCELoss + (1 - alpha) * DiceLoss
    """
    def __init__(self, alpha=0.5):
        super(BCEDiceLoss, self).__init__()
        self.bce_loss = nn.BCEWithLogitsLoss()
        self.dice_loss = DiceLoss()
        self.alpha = alpha

    def forward(self, logits, targets):
        bce_loss = self.bce_loss(logits, targets)
        dice_loss = self.dice_loss(logits, targets)
        loss = self.alpha * bce_loss + (1. - self.alpha) * dice_loss
        return loss


class DiceScoreL2Loss(nn.Module):
    """
    Loss for mask scoring UNet

    Caution: this loss is for 1 class only!
    """
    def __init__(self):
        super(DiceScoreL2Loss, self).__init__()
        self.smooth = 1e-8
        self.sum_dim = (2, 3)

    def forward(self, dice_logits, logits, masks_gt):
        dice_preds = torch.sigmoid(dice_logits)
        # BATCH x 1 x H x W
        masks_preds = torch.sigmoid(logits)
        # BATCH x 1
        intersect = (masks_preds * masks_gt).sum(self.sum_dim).float()
        union = (masks_preds.sum(self.sum_dim) + masks_gt.sum(self.sum_dim))
        # Calc dice (BATCH x 1)
        dice_gt = (2. * intersect + self.smooth) / (union + self.smooth)
        # Calc L2 loss
        cond = torch.abs(dice_gt - dice_preds)
        loss = 0.5 * cond ** 2
        return loss.mean()


loss_func1 = BCEDiceLoss(alpha=0.7)
loss_func2 = DiceScoreL2Loss()
loss = loss_func1(logits, masks) + 0.2 * loss_func2(logits_score, logits, masks)
</code></pre>
<h2>3.3 Training params</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2F9718ce185aad18cc6c69f027d1f5e4c6%2F2019-09-05%2015.04.40.png?generation=1567763379014753&alt=media" alt=""></p>
<h2>3.4 Ablation Study</h2>
<ul>
<li>Dataset: Stage2 Train set(Stage1 Train + Stage1 Test)</li>
<li>Only pneumothorax data.</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2Fdd40b74d16c4c6d5b0cbeabcb1765f8d%2F2019-09-06%2018.10.57.png?generation=1567763417111811&alt=media" alt=""></p>
<h1>4. Classification</h1>
<h2>4.1 Train 2 binary classifiers (seresnext101) on DatasetCls_1 and DatasetCls_2</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2Fc84aa628cf79664ac9dd4febc7e41d57%2F2019-09-06%2014.01.34.png?generation=1567763240265633&alt=media" alt=""></p>
<p><code>python
optimizer = optim.Adam(model.parameters(), lr=args.init_lr)
milestones = [int(args.num_epochs * r) for r in [0.6, 0.7, 0.8, 0.9]]
scheduler = lr_scheduler.MultiStepLR(
optimizer, milestones, gamma=0.5, last_epoch=-1)
</code></p>
<h2>4.2 Get Embeddings with TTA</h2>
<p>Concatenating all embeddings gives 8,192-dimensional features.</p>
<h2>4.3 Add dicom info</h2>
<ul>
<li>Sex: 0 or 1</li>
<li>Age: No change</li>
<li>ViewPoint: 0 or 1</li>
</ul>
<p>This gives 8,195-dimensional features for each image in DatasetCls_all.</p>
<h2>4.4 Train binary classifier(LightGBM) with optuna</h2>
<h1>Last</h1>
<p>P.S. We're hiring!: <a href="https://lpixel.net/en/careers/">https://lpixel.net/en/careers/</a></p> |
Open Images 2019 - Object Detection | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Open Images 2019 - Object Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>TL;DR</strong>:
- I'm using mmdetection framework and it's really convenient;
- Best single model (cascade rcnn with imagenet pretrained resnext101) + TTA (horizontal flip, multi-scale testing (600, 900), (800, 1200), (1000, 1500), (1200, 1800), (1400,
2100)) achieves 0.499 public;
- Split datasets into 6 subsets by frequency and then finetune on them for 1-2 epochs. --> appr. 0.05+ increase;
- Parent class expansion gives appr. 0.01+ increase;
- Weighted ensemble (from <a href="https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/64633#latest-590283">ZFTurbo's solution last year</a>) of all my high score submissions --> 0.608 final public score.</p>
<p><strong>1. Single model</strong>
At the very beginning, I was going to attend the "Visual Relationship Detection" track. But then I realized that I didn't have a good object detection model for that one. So I started with faster rcnn+resnext101; it took me about 20 days to train 24 epochs and resulted in 0.446 on the public lb.
Similarly, I trained cascade rcnn+resnext101, cascade rcnn+senet154 for 12 and 8 epochs respectively.
I just left these models training for several weeks, did my daily work, and gave up on "visual relationship detection".
The best single model is cascade rcnn+resnext101, which was accidentally trained for 19 epochs (6 epochs longer than planned). So maybe I should train longer for each model :).
<strong>Conclusion</strong>: my single models are weak. They should be trained longer.</p>
<p><strong>2. Finetune</strong>
Since the classes are very unbalanced, I split the dataset classes into 6 subsets simply according to frequency and finetuned on them using the faster rcnn+resnext101 model:
- Classes 0-50, appr. 1,411,368 images, 2 epochs, lr 0.001
- Classes 51-100, appr. 308352 images, 2 epochs, lr 0.001
- Classes 101-200: appr. 208096 images, 2 epochs, lr 0.001
- Classes 201-300, appr. 93140 images, 2 epochs, lr 0.001
- Classes 301-400, appr. 45840 images, 1 epoch, lr 0.001
- Classes 401-500, appr. 19316 images, 1 epoch, lr 0.001
In the last two weeks before the final deadline, I found one huge bug in my code.
After fixing this bug, ensembling the predictions of the finetuned models gave an appr. 0.05+ increase on the public LB.
<strong>Conclusion</strong>: solving the class imbalance problem is the key to a top silver or gold medal.</p>
<p><strong>3. TTA and Final Ensemble</strong>
Some teams merged, so I thought I might have a chance at a solo gold.
So I did the following:
- TTA for each model: horizontal flip, multi-scale testing with (600, 900), (800, 1200), (1000, 1500), (1200, 1800), (1400, 2100) image size;
- Expand parent classes for each prediction after inference (a small sketch of this step is given below).
- Weighted ensemble (from <a href="https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/64633#latest-590283">ZFTurbo's solution last year</a>) of all models.</p>
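<p>A minimal sketch of the parent-class expansion step (the hierarchy dictionary below is a tiny made-up fragment; the real mapping comes from the Open Images class hierarchy file):</p>
<pre><code># Toy fragment of the label hierarchy: class name to its ancestors.
PARENTS = {'Jaguar': ['Carnivore', 'Mammal', 'Animal'],
           'Mammal': ['Animal']}

def expand_parents(detections):
    # detections: list of (label, score, box). For every detection, also emit
    # the same box and score for each ancestor label, keeping the best score
    # per (label, box) so duplicates do not hurt.
    expanded = {}
    for label, score, box in detections:
        for lab in [label] + PARENTS.get(label, []):
            key = (lab, box)
            expanded[key] = max(expanded.get(key, 0.0), score)
    return [(lab, s, box) for (lab, box), s in expanded.items()]

print(expand_parents([('Jaguar', 0.9, (0.1, 0.2, 0.5, 0.6))]))
</code></pre>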
<p><strong>Other tricks:</strong>
- Increase the limit on the number of boxes per image, even though many of them are of low confidence. I chose 600 as the upper limit in my final submission.
- I wasn't able to finish 12 epochs for my cascade rcnn+senet154 model. Nevertheless, ensembling it gave a slight improvement.</p>
<p><strong>What didn't work for me:</strong>
Soft-NMS: I tried to use it for the ensemble, but it didn't go well. </p>
<p><strong>Planned but not implemented:</strong>
- Use mask annotations from the segmentation track;
- Multi-scale training and mixup augmentation.</p>
<p><strong>Final words:</strong>
Let's play fair.
Peace&love. 👍 </p> |
SIIM-ACR Pneumothorax Segmentation | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone.
Firstly thanks to the host and Kaggle for this interesting competition, and congrats to all the winners. My solution is quite chaotic, so thanks also to everyone who can read it to the end:)</p>
<h2>First attempts(not very successful):</h2>
<p>At the beginning I decided to use a two-stage approach in this competition - train separate classification(pneumothorax/non-pneumothorax) and segmentation models.
Therefore, I trained several common classification models: resnet34, resnet50, seresnext50, seresnext101, dpn98; they all gave approximately the same accuracy of ~0.9, and I could not improve this result in any way. After that I trained several unet models (only on images containing pneumothorax) with different backbones and got a score in the gold zone. But this approach was not very stable and the results on the LB did not correlate with my local CV.
And after several unsuccessful attempts to improve this solution, I was disappointed and stopped participating in this competition for about a month.</p>
<h2>Final solution:</h2>
<p>After returning to the competition, I decided to try a slightly different approach, which eventually became my final solution. It also has two steps:</p>
<h3>Classification:</h3>
<p>For this purpose, at that time, I decided to use a UNet model, and determined whether there is pneumothorax in a picture by thresholding the number of predicted mask pixels. Based on my validation, the best threshold was 2000.
Backbone: seresnext50
Data: For this step I used all images and balanced batches (pneumothorax/non-pneumothorax), which greatly accelerated convergence
Data splits: 5 folds and 10 folds stratified by pneumothorax area.
Input size: 768x768
Loss: BCE
Augmentations: hflips, rotations(up to 10 degree), random brightness, contrast and gamma, blur
Lr scheduling: reduce lr on plateau with patience=3 epochs.</p>
<p>This approach gave a pretty good result, but there remained a rather large number of false negative examples. Therefore, I decided to use the classification models that I trained at the beginning and marked additional pictures as containing pneumothorax if all models classified them as such.</p>
<h3>Segmentation:</h3>
<p>A core element at this stage was also a UNet model :) But this time, model predictions were made only on pictures that were identified as containing pneumothorax in the classification stage.
Backbone: seresnext50
Data: only images containing pneumothoraxes
Data splits: 5 folds and 10 folds stratified by pneumothorax area
Input size: 928x928, 768x768
Loss: BCE + Dice
Augmentations: same as in the classification stage
Lr scheduling: reduce lr on plateau with patience=5 epochs.</p>
<p>Update:
Code <a href="https://github.com/scizzzo/kaggle-siim-pneumothorax">https://github.com/scizzzo/kaggle-siim-pneumothorax</a></p> |
SIIM-ACR Pneumothorax Segmentation | 1st place solution (with code and configs) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the Society for Imaging Informatics in Medicine (SIIM) and Kaggle for a very interesting and challenging competition. It was a pleasure to participate.</p>
<p>This was my first experience with image segmentation, and if someone had told me beforehand how it would all end, I would consider him crazy)</p>
<p>My pipeline: <a href="https://github.com/sneddy/kaggle-pneumathorax">https://github.com/sneddy/kaggle-pneumathorax</a></p>
<h2>Model Zoo</h2>
<ul>
<li>AlbuNet (resnet34) from <a href="https://github.com/ternaus/TernausNet">[ternausnets]</a></li>
<li>Resnet50 from <a href="https://github.com/SpaceNetChallenge/SpaceNet_Off_Nadir_Solutions/tree/master/selim_sef/zoo">[selim_sef SpaceNet 4]</a></li>
<li>SCSEUnet (seresnext50) from [<a href="https://github.com/SpaceNetChallenge/SpaceNet_Off_Nadir_Solutions/tree/master/selim_sef/zoo">selim_sef SpaceNet 4]</a></li>
</ul>
<h2>Main Features</h2>
<h3>Triplet scheme of inference and validation</h3>
<p>Let our segmentation model output some mask with probabilities of pneumothorax pixels. I'm going to call this mask the basic sigmoid mask. I used a triplet of different thresholds: <em>(top_score_threshold, min_contour_area, bottom_score_threshold)</em></p>
<p>The decision rule is based on a doublet <em>(top_score_threshold, min_contour_area)</em>. I used it instead of using the classification of pneumothorax/non-pneumothorax.
- <em>top_score_threshold</em> is a simple binarization threshold that transforms the basic sigmoid mask into a discrete mask of zeros and ones.
- <em>min_contour_area</em> is the maximum allowed number of pixels with a value greater than top_score_threshold</p>
<p>Those images that didn't pass this doublet of thresholds were counted as non-pneumothorax images.</p>
<p>For the remaining pneumothorax images, we binarize the basic sigmoid mask using <em>bottom_score_threshold</em> (another binarization threshold, lower than <em>top_score_threshold</em>). You may notice that most participants used the same scheme under the assumption that <em>bottom_score_threshold = top_score_threshold</em>.</p>
<p>The simplified version of this scheme:
<code>python
classification_mask = predicted &gt; top_score_threshold
mask = predicted.copy()
mask[classification_mask.sum(axis=(1,2,3)) &lt; min_contour_area, :,:,:] = np.zeros_like(predicted[0])
mask = mask &gt; bot_score_threshold
return mask
</code></p>
<h3>Search best triplet thresholds during validation</h3>
<ul>
<li>Best triplet on validation: (0.75, 2000, 0.3).</li>
<li>Best triplet on Public Leaderboard: (0.7, 600, 0.3)</li>
</ul>
<p>For my final submissions I chose something between these triplets.</p>
<h3>Combo loss</h3>
<p>Used [<a href="https://github.com/SpaceNetChallenge/SpaceNet_Off_Nadir_Solutions/blob/master/selim_sef/training/losses.py">combo loss]</a> combinations of BCE, dice and focal. In the best experiments the weights of (BCE, dice, focal), that I used were:
- (3,1,4) for albunet_valid and seunet;
- (1,1,1) for albunet_public;
- (2,1,2) for resnet50.</p>
<h3>Sliding sample rate</h3>
<p>Let's call the portion of pneumothorax images the sample rate.</p>
<p>The main idea is to control this portion using the sampler of the torch dataset. </p>
<p>On each epoch, my sampler takes all images with pneumothorax from the dataset and samples some non-pneumothorax images according to this sample rate. During the training process, we reduce this parameter from 0.8 at the start to 0.4 at the end.</p>
<p>Large sample rate at the beginning provides a quick start of the learning process, whereas a small sample rate at the end provides better convergence of neural network weights to the initial distribution of pneumothorax/non-pneumothorax images.</p>
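<p>A minimal sketch of such a sampler (an illustration of the idea only, not the exact implementation from the repo):</p>
<pre><code>import numpy as np
from torch.utils.data import Sampler

class SlidingRateSampler(Sampler):
    # Each epoch: take all pneumothorax indices and draw enough
    # non-pneumothorax indices so the positive share equals sample_rate.
    def __init__(self, pos_idx, neg_idx, sample_rate=0.8):
        self.pos_idx, self.neg_idx = list(pos_idx), list(neg_idx)
        self.sample_rate = sample_rate  # lower it between epochs, 0.8 down to 0.4

    def _n_neg(self):
        n = int(len(self.pos_idx) * (1.0 - self.sample_rate) / self.sample_rate)
        return min(n, len(self.neg_idx))

    def __iter__(self):
        neg = list(np.random.choice(self.neg_idx, size=self._n_neg(), replace=False))
        epoch = self.pos_idx + neg
        np.random.shuffle(epoch)
        return iter(epoch)

    def __len__(self):
        return len(self.pos_idx) + self._n_neg()
</code></pre>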
<h3>Learning Process recipes</h3>
<p>I can't provide a fully reproducible solution because during the learning process I was uptraining my models <strong>A LOT</strong>. But looking back, to formalize my experiments I can highlight 4 different parts:
- <strong>part 0</strong> - train for 10-12 epochs from a pretrained model with a large learning rate (about 1e-3 or 1e-4), large sample rate (0.8) and ReduceLROnPlateau scheduler. The model can be pretrained on imagenet or on our dataset with lower resolution (512x512). The goal of this part: quickly get a good enough model with validation score about 0.835.
- <strong>part 1</strong> - uptrain the best model from the previous step with normal learning rate (~1e-5), large sample rate (0.6) and CosineAnnealingLR or CosineAnnealingWarmRestarts scheduler. Repeat until best convergence.
- <strong>part 2</strong> - uptrain the best model from the previous step with normal learning rate (~1e-5), small sample rate (0.4) and CosineAnnealingLR or CosineAnnealingWarmRestarts scheduler. Repeat until best convergence.
- <strong>second stage</strong> - simple uptrain with relatively small learning rate(1e-5 or 1e-6), small sample rate (0.5) and CosineAnnealingLR or CosineAnnealingWarmRestarts scheduler.</p>
<p>All these parts are presented in the corresponding experiment folder</p>
<h3>Augmentations</h3>
<p>Used following transforms from [<a href="https://github.com/albu/albumentations">albumentations]</a>
<code>python
albu.Compose([
albu.HorizontalFlip(),
albu.OneOf([
albu.RandomContrast(),
albu.RandomGamma(),
albu.RandomBrightness(),
], p=0.3),
albu.OneOf([
albu.ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
albu.GridDistortion(),
albu.OpticalDistortion(distort_limit=2, shift_limit=0.5),
], p=0.3),
albu.ShiftScaleRotate(),
albu.Resize(img_size,img_size,always_apply=True),
])
</code></p>
<h3>Uptrain from lower resolution</h3>
<p>All experiments (except resnet50) were uptrained at size 1024x1024 after 512x512, with a frozen encoder in the early epochs. </p>
<h3>Second stage uptrain</h3>
<p>All chosen experiments were uptrained on second stage data</p>
<h3>Small batch size without accumulation</h3>
<p>A batch size of 2-4 pictures is enough and all my experiments were run on one (sometimes two) 1080-Ti.</p>
<h3>Checkpoints averaging</h3>
<p>The top-3 checkpoints from each fold of each pipeline were averaged for inference</p>
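<p>Checkpoint averaging can be done directly on the state dicts; a minimal sketch (assuming each checkpoint is a plain <code>state_dict</code> file):</p>
<pre><code>import torch

def average_checkpoints(paths):
    # Average the top-N checkpoints of one fold into a single set of weights.
    avg = None
    for p in paths:
        state = torch.load(p, map_location='cpu')
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

# model.load_state_dict(average_checkpoints(['top1.pth', 'top2.pth', 'top3.pth']))
</code></pre>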
<h3>Horizontal flip TTA</h3>
<h2>Best experiments:</h2>
<ul>
<li>albunet_public - best model for Public Leaderboard</li>
<li>albunet_valid - best resnet34 model on validation</li>
<li>seunet - best seresnext50 model on validation</li>
<li>resnet50 - best resnet50 model on validation</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F536790%2F583cf98d504a9726cca2da9472a5d718%2Fdashboard.png?generation=1567837634005064&alt=media" alt="Experiments dashboard"></p>
<h2>Final Submission</h2>
<p>My best model for the Public Leaderboard was albunet_public (PL: 0.8871), and the score of every ensemble of models was worse.
But I suspected overfitting for this model; therefore, both final submissions were ensembles.</p>
<ul>
<li>The first ensemble believed in Public Leaderboard scores more and used more "weak" triplet thresholds.</li>
<li>The second ensemble believed in the validation scores more but used more "strict" triplet thresholds.</li>
</ul>
<h3>Private Leaderboard:</h3>
<ul>
<li>0.8679</li>
<li>0.8641</li>
</ul>
<p>I suspect that the best solution would have been an ensemble that believed in the validation scores more but used the "weaker" triplet thresholds.</p>
SIIM-ACR Pneumothorax Segmentation | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we would like to thank SIIM and ACR for this interesting competition.</p>
<p>Our final solution is based on EncodingNet (ResNets, 512 and 1024 sizes) and UNet (EfficientNet4, se-resnext50, SENet154 with 512, 640 and 1024 sizes).
Our models were trained and tuned independently before the team merge. This gave us a lot of diversity in solutions.</p>
<p>The best augmentations were related to crops and rotations. We didn't use contrast and brightness transformations.
Loss: BCE+Dice (tried Focal, it didn't work).
Lower image sizes reduced the score a lot, so a full-sized model should be better.
Tricks:
- classification on top of EncodingNet with heavy TTA (11 methods)
- small segments of predicted masks were deleted</p>
<p>Also, I noticed that Horizontal Flip augmentation reduced my local CV, but I didn't check it on the LB.</p>
<p>Link to github:
<a href="https://github.com/yura03101995/siim_pneumo">https://github.com/yura03101995/siim_pneumo</a></p>
<p>UPDATE:
For validation we used 4 folds. Folds were generated using <code>StratifiedKFold(n_splits=8)</code> stratified by <code>isPneumothorax</code>. For each <code>solo model</code> we built a
4-fold blend. The augmentations we used:</p>
<p><code>
AUG = Compose(
[
HorizontalFlip(p=0.5),
OneOf(
[
ElasticTransform(
alpha=300,
sigma=300 * 0.05,
alpha_affine=300 * 0.03
),
GridDistortion(),
OpticalDistortion(distort_limit=2, shift_limit=0.5),
],
p=0.3
),
RandomSizedCrop(min_max_height=(900, 1024), height=1024, width=1024, p=0.5),
ShiftScaleRotate(rotate_limit=20, p=0.5)
],
p=1
)
</code></p>
<p><strong>Model1</strong>:
<strong>EncNet</strong> (from pytorch-encoding) with <strong>Resnet50</strong> pretrained on Pascal dataset.
First, we trained a 4-fold-blend model at 512 x 512 resolution with AUG.
Second, we trained a 4-fold-blend model at 1024 x 1024 resolution with AUG.
Finally, we averaged the two 4-fold-blend models. </p>
<p><strong>Loss</strong>: Dice
<strong>Augmentation</strong>: AUG
<strong>TTA</strong>: Flip, CLAHE and rotations by <code>np.arange(-20, 21, 5)</code> angles (10 TTAs in total)</p>
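<p>A minimal sketch of this kind of flip/rotation TTA with inverse transforms on the predicted mask. The <code>predict</code> function stands in for the real model, the CLAHE branch is omitted, and scipy is used purely for illustration:</p>
<p><code>
import numpy as np
from scipy.ndimage import rotate

def predict(image):
    # Placeholder for the real model forward pass; returns a probability mask (H, W).
    return np.zeros(image.shape[:2], dtype=np.float32)

def tta_predict(image, angles=range(-20, 21, 5)):
    preds = []
    # horizontal flip TTA: predict on the flipped image, flip the mask back
    preds.append(predict(image))
    preds.append(predict(image[:, ::-1])[:, ::-1])
    # rotation TTA: rotate, predict, rotate the mask back by the opposite angle
    for angle in angles:
        if angle == 0:
            continue
        rotated = rotate(image, angle, reshape=False, order=1)
        mask = predict(rotated)
        preds.append(rotate(mask, -angle, reshape=False, order=1))
    return np.mean(preds, axis=0)
</code></p>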
<p><strong>Model2-3</strong>:
<strong>Two Unet</strong>-like models with <strong>SEResnext50</strong> and <strong>SEResnet152</strong> backbones were trained on
1024 x 1024 resolution. Each model also was averaged along 4 folds.</p>
<p><strong>Loss</strong>: Dice <br>
<strong>Augmentation</strong>: AUG
<strong>TTA</strong>: Flip</p>
<p><strong>Model4</strong>:
<strong>Unet</strong>-like model with <strong>EfficientNetB4</strong> backbone trained on <strong>640 x 640</strong> resolution.
It was also blended across 4 folds.</p>
<p><strong>Loss</strong>: weighted Dice + BCE
<strong>Augmentation</strong>: AUG
<strong>TTA</strong>: no TTA</p>
<p><strong>Final model</strong>:
Average of <strong>Model1</strong>, <strong>Model2</strong>, <strong>Model3</strong> and <strong>Model4</strong></p>
<p><strong>Classification threshold</strong>: ~0.32
<strong>Segmentation threshold</strong>: ~0.375</p> |
SIIM-ACR Pneumothorax Segmentation | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, kagglers.</p>
<p>My solution is based on UNet with a deep supervision branch for empty mask classification.</p>
<p><img src="https://raw.githubusercontent.com/amirassov/kaggle-pneumothorax/master/figures/model.png" alt="model"></p>
<ul>
<li>Model: UNet.</li>
<li>Backbone: ResNet34 backbone with frozen batch-normalization.</li>
<li>Preprocessing: training on random crops with <code>(512, 512)</code> size, inference on <code>(768, 768)</code> size.</li>
<li>Augmentations: ShiftScaleRotate, RandomBrightnessContrast, ElasticTransform, HorizontalFlip from albumentations.</li>
<li>Optimizer: Adam, batch_size=8</li>
<li>Scheduler: CosineAnnealingLR</li>
<li>Additional feature: the proportion of non-empty samples linearly decreased from 0.8 to 0.22 (as in train dataset) depending on the epoch. It helped to converge faster.</li>
<li>Loss: <code>2.7 * BCE(pred_mask, gt_mask) + 0.9 * DICE(pred_mask, gt_mask) + 0.1 * BCE(pred_empty, gt_empty)</code>. Here <code>pred_mask</code> is the prediction of the UNet and <code>pred_empty</code> is the prediction of the branch for empty mask classification (a minimal sketch of this loss follows this list).</li>
<li>Postprocessing:
<code>if pred_empty &gt; 0.4 or area(pred_mask) &lt; 800: pred_mask = empty.</code>
Parameters are selected on the validation set.</li>
<li>Ensemble: averaging the 4 best checkpoints over 8 folds, horizontal flip TTA.</li>
<li>Hardware: 4 x RTX 2080</li>
</ul>
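<p>As a sketch of the loss above: the 2.7/0.9/0.1 weights and the BCE/Dice/empty-BCE terms come from the write-up, while the soft-dice implementation details (smoothing, reduction over a 4D <code>(N, 1, H, W)</code> tensor) are assumptions:</p>
<p><code>
import torch
import torch.nn.functional as F

def soft_dice_loss(pred_mask, gt_mask, eps=1.0):
    # pred_mask: logits (N, 1, H, W); gt_mask: float binary target of the same shape
    probs = torch.sigmoid(pred_mask)
    dims = (1, 2, 3)
    intersection = (probs * gt_mask).sum(dims)
    union = probs.sum(dims) + gt_mask.sum(dims)
    return (1.0 - (2.0 * intersection + eps) / (union + eps)).mean()

def total_loss(pred_mask, pred_empty, gt_mask, gt_empty):
    bce_mask = F.binary_cross_entropy_with_logits(pred_mask, gt_mask)
    dice = soft_dice_loss(pred_mask, gt_mask)
    bce_empty = F.binary_cross_entropy_with_logits(pred_empty, gt_empty)
    return 2.7 * bce_mask + 0.9 * dice + 0.1 * bce_empty
</code></p>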
<p>Code: <a href="https://github.com/amirassov/kaggle-pneumothorax">https://github.com/amirassov/kaggle-pneumothorax</a></p>
SIIM-ACR Pneumothorax Segmentation | [8th place] Solution Overview + Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all of the winners and participants. </p>
<p><a href="/felipekitamura">@felipekitamura</a> and I are happy to share our solutions. </p>
<p>Full documentation: <a href="https://docs.google.com/document/d/108xK_J4WVTtuuwMfxBINR7FLXYRNxldnAQV5cLfO-1g/edit?usp=sharing">https://docs.google.com/document/d/108xK_J4WVTtuuwMfxBINR7FLXYRNxldnAQV5cLfO-1g/edit?usp=sharing</a></p>
<p>Code:
<a href="https://github.com/i-pan/kaggle-siim-ptx">https://github.com/i-pan/kaggle-siim-ptx</a></p>
<p>Data split: 10% holdout for ensemble, 10-fold CV on remaining 90%, stratified by pneumothorax size</p>
<p>Architecture: DeepLabV3+</p>
<p>Backbone: ResNet50/101 and ResNeXt50/101 with group normalization</p>
<p>Loss: weighted BCE (trained on all images) or soft Dice (trained on positives only)</p>
<p>Optimizer: Vanilla SGD, momentum 0.9</p>
<p>Training: batch size 4, 1024 x 1024 and batch size 1, 1280 x 1280 for pure segmentation (did not retrain on stage 2)</p>
<p>Schedule: cosine annealing, 100 epochs, 5 snapshots, initial LR 0.01 to 0.0001</p>
<p>Ensemble:
- 12 models total (x3 snapshots/model)
- 4 trained on positives only with soft Dice
- 8 trained on all images with weighted BCE
- I used 4 of the models trained on all images as "classifiers"
- Max pixel value was taken as classification score, averaged across 4 models
- Multiplied pixel-level scores from 4 models trained on positives only by this classification score, then averaged
- Final ensemble: multiplied score as above averaged with pixel-level scores based on other 4/8 models trained on all images
- Hflip TTA</p>
<p>Post-processing:
- Remove images with total mask size &lt; 2048 or &lt; 4096 pixels</p>
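<p>A minimal sketch of this small-mask removal; the 2048/4096 thresholds come from the write-up, everything else is illustrative:</p>
<p><code>
import numpy as np

def remove_small_masks(pred_mask, min_pixels=2048):
    # pred_mask: binary mask (H, W); empty the prediction if the total area is too small
    if pred_mask.sum() &lt; min_pixels:
        return np.zeros_like(pred_mask)
    return pred_mask
</code></p>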
<p>What did not work/insights:
- Ensembling model with Felipe: he got to 0.8750 with Unet, EfficientNetB4 kernel using 512 x 512 trained on soft Dice loss, all images. We weren't able to properly ensemble our models and ended up dropping it in favor of my slightly better ensemble.
- I personally tried lower resolutions and Unet, LinkNet, PSPNet, EncNet, HRNet, etc. All did worse than DeepLab, 1024 x 1024
- Whole-image classifiers: did not work as well as segmentation
- SGD worked better than Adam, Adabound optimizers
- Hflip TTA + small mask removal helped
- Weighted BCE loss was most stable for me, Lovasz, soft Dice were unstable or did not converge at all</p>
<p>Local: 0.8737
Stage 1 LB: 0.8780 (remove 4096 pixels)
Stage 2 Private: 0.8627 (remove 2048 pixels)</p> |
SIIM-ACR Pneumothorax Segmentation | 33 place solo model solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to everyone on the end of the competition, and congratulations to the winners.</p>
<p>We tried different architectures, starting with resnet18, resnet34, vgg16, seresnext50, dpn92 and densenet169.</p>
<p>The best results were shown by dpn92 and densenet169. As the final solution, we chose densenet169 because of its better LB score. dpn92 lagged just a little bit behind, but the model weights were very large (approximately 500MB per fold), so we did not risk uploading them.</p>
<p><strong>We did not retrain the model in the second stage on the new train.</strong></p>
<p><strong>Details of the final decision:</strong></p>
<p>Trained on 5 folds for 50 epochs, 3 times.</p>
<ol>
<li>IMAGE_SIZE - 1024</li>
<li>Used a sampler: 60% of samples with masks (a minimal sampler sketch follows this list).</li>
<li>Loss: BCE + DICE + Focal.</li>
<li>OPTIMIZER: Adam, LR = 0.0001</li>
<li>SCHEDULER: CosineAnnealingLR</li>
<li>BATCH_SIZE: 4</li>
<li>GRADIENT CLIPPING: 1</li>
</ol>
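<p>A minimal sketch of such a sampler using <code>WeightedRandomSampler</code>; only the 60% positive rate comes from the write-up, the dataset and labels are placeholders:</p>
<p><code>
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

# has_mask[i] == 1 if sample i contains a pneumothorax mask (placeholder labels)
has_mask = np.random.randint(0, 2, size=1000)

pos_frac = 0.6
n_pos = max(has_mask.sum(), 1)
n_neg = max(len(has_mask) - n_pos, 1)
# per-sample weights so that the expected share of positives in a batch is pos_frac
weights = np.where(has_mask == 1, pos_frac / n_pos, (1 - pos_frac) / n_neg)

sampler = WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                num_samples=len(has_mask), replacement=True)
# loader = DataLoader(dataset, batch_size=4, sampler=sampler)
</code></p>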
<p>Transforms (we used albumentations):
1. HorizontalFlip
1. RandomContrast
1. RandomGamma
1. RandomBrightness
1. ElasticTransform
1. GridDistortion
1. OpticalDistortion
1. ShiftScaleRotate</p>
SIIM-ACR Pneumothorax Segmentation | The 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners, thanks Imaging Informatics in Medicine (SIIM),American College of Radiology (ACR), @RadiologyACR, Society of Thoracic Radiology (STR), @thoracicrad and Kaggle
for hosting such a competition which may aid in the early recognition of pneumothoraces and save lives.</p>
<p><strong>Challenges:</strong>
1. Relatively large image size.
2. The quality of the external datasets is not so good, but making full use of them may help.</p>
<p><strong>Solution:</strong></p>
<p><img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/siim_acr/slides/pipeline.png" alt="pipeline"></p>
<p><strong>Data:</strong>
As the image sizes are a little big, to save memory and computation I first trained a UNET model to predict lungs from the 1024x1024 images; all my models were based on the cropped lungs, and 576x576 cropped images were good enough for my models.
<img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/siim_acr/slides/lung_seg.png" alt="lung segmentation unet"></p>
<p>In deep learning tasks more data is often important, so I tried to make full use of the CheXpert and NIH datasets. We could not use them directly because, after reading the related papers, I knew their labels are not accurate enough, so I guessed that I could use pseudo labels. I used a resnet34 UNET model trained on the competition data (0.858 on public LB) to predict the CheXpert positive samples, and selected the positive samples that were also predicted positive by my model when training the pseudo-label models. As time was limited after the postponed DOGs-GAN competition, I did not predict the negative samples of CheXpert, just treated them as negative. As for the NIH dataset, I did not use the labels they provided, just believed my models' predictions.
<img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/siim_acr/slides/data_strategy.png" alt="data strategy">
I kept the ratio of positive and negative samples equal; when training the pseudo-label models, I kept the ratio of pseudo samples at 0.5 of the normal samples.</p>
<p><strong>Models</strong></p>
<p>Experiments were done mainly on UNETs with resnet34 and SE-resnext50 backbones. I selected them because resnet34 is light enough to experiment with and SE-resnext50 is deep enough for this competition; I did not have enough time and resources to train larger and deeper models.
<img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/siim_acr/slides/seresnext.png" alt="seresnext">
My final models were 3 SE-Resnext50 models: <br>
m1:704x704 images with no pseudo
m2:576x576 images with CheXpert Pseudo
m3:576x576 images with CheXpert and NIH Pseudo </p>
<p>Although my final submissions were ensembles of these three models, just 3 of the 10 folds of m2 could reach 0.8809 on the public LB and 0.8642 on the private LB. The ensemble did not bring much improvement in this competition.</p>
<p>Attention: CBAM
Loss: Lovasz loss. I also tried Active Contour Loss, but it did not bring a local CV increase.
No classification model, no classification loss: I thought pixel-level labels were enough, and if we introduced a classification model it would be hard to select thresholds on the private dataset.
No threshold search, just used 0.5.
Optimizer: Adam with a 0.0001 learning rate, no learning rate change during training.
Epochs: 15
EMA of model parameters after epoch 6.
Batch size: 3 per GPU when training on 576x576 images, so the batch size was 9 when using three 1080Ti; no batch-size accumulation, no Sync-BN was used. 2 per GPU on 704x704 images.</p>
<p><a href="https://github.com/bestfitting/kaggle/tree/master/siim_acr">https://github.com/bestfitting/kaggle/tree/master/siim_acr</a></p> |
SIIM-ACR Pneumothorax Segmentation | 19th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to the winners.
This 2-month-long journey felt like an eternity
because toward the end I had no improvement for quite a long time,
but fortunately my wonderful team member <a href="/ildoonet">@ildoonet</a> joined
at the end and shed some light with his hypercolumns UNet with an EfficientNet-B7 backbone.</p>
<p>Architecture:</p>
<p>We used 5 hypercolumns UNet models, with ResNet34 and EfficientNet-B7 (by <a href="/ildoonet">@ildoonet</a>) backbones.</p>
<p>Model 1: </p>
<p>4 folds of hypercolumns UNet ResNet34 with BCE+Dice loss</p>
<p>I found that BCE+Dice loss surpasses BCE loss in terms of convergence speed and dice score.
And although it may seem extreme, we used 5 phases of CyclicLR-scheduled training,
reduced the last layer's max learning rate at each phase (lr/2, lr/2, lr/4, lr/8, lr/16),
and took the best-dice-score model as an ensemble candidate to minimize the uncertainty coming from the updated stage-2 dataset.</p>
<p>The best dice score cv results for stage2 retraining were (0.8560/0.84519/0.8367/0.8536)</p>
<p>model 2:</p>
<p>4 folds of same configuration with self attention</p>
<p>The self-attention model got lower dice scores (0.842/0.8535/0.8504/0.8412), but
there was an improvement when ensembling the self-attention model in stage 1.</p>
<p>model 3, 4, 5:</p>
<p>Models 3, 4 and 5 are 4-fold models from <a href="/ildoonet">@ildoonet</a>: hypercolumns UNet models with an EfficientNet-B7 backbone.
They reached their full performance faster compared with models 1 and 2, so we used 3 phases of CyclicLR-scheduled training.
Models 3 and 4 have different learning rate configurations, 8e-3 and 1.6e-2.
The best dice-score CVs were (0.8452, 0.8481, 0.8445, 0.8550) for model 3 and (-, 0.8482, -, 0.8417) for model 4 (some logs were missing).</p>

<p>Model 5 was trained with <a href="/ildoonet">@ildoonet</a>'s configuration; I believe he will explain it in his own write-up,
but this single model scored a private LB of 0.8536.</p>
<p>TTA :</p>
<p>We used only Horizontal Flip.</p>
<p>Postprocessing:</p>
<p>We removed noise using the mean predicted probability of each connected component.</p>

<p>First, with a fixed probability threshold (0.21), we define a component ("cell") by its connectivity.
Then a component whose mean probability is over 0.35 remains as pneumothorax, and one below this probability is discarded.</p>
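<p>A minimal sketch of this component-wise filtering; the 0.21 and 0.35 thresholds are from the write-up, the implementation details are assumptions:</p>
<p><code>
import numpy as np
from scipy.ndimage import label

def filter_components(prob_map, pixel_thr=0.21, mean_thr=0.35):
    # prob_map: per-pixel pneumothorax probabilities (H, W)
    binary = prob_map &gt; pixel_thr
    components, n = label(binary)
    keep = np.zeros_like(binary)
    for idx in range(1, n + 1):
        component = components == idx
        if prob_map[component].mean() &gt;= mean_thr:
            keep |= component
    return keep
</code></p>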
<p>In stage 1, this boosted my model 1's 4-fold ensemble (mean dice score 0.84515, Phase 4) to a stage-1 LB of 0.8729,
compared to a stage-1 LB of 0.8658 for a similar dice score (mean 0.84485, Phase 3) with plain thresholded noise removal.
(Unfortunately, I lost the direct comparison when the LB was reset at stage 2.)</p>
<p>Scores:
0.8568 (stage 2), 0.8763 (stage 1)</p>
Predicting Molecular Properties | 19th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Everyone,</p>
<p>With the final validation finished, I am sharing my solution.
My solution is what I call <em>Triplet Update</em>.
My model has hidden layers for combinations of three atoms (triplets).
This model is a generalization of <em>Edge Update</em> and, in a sense, similar to higher-order Graph Neural Networks.</p>
<p>Code: <a href="https://github.com/toshi-k/kaggle-champs-scalar-coupling">https://github.com/toshi-k/kaggle-champs-scalar-coupling</a></p>
<p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-champs-scalar-coupling/master/img/concept.png" alt="conceptual diagram"></p> |
The 3rd YouTube-8M Video Understanding Challenge | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: The 3rd YouTube-8M Video Understanding Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Overview</h1>
<p>I will first give an overview of our approach and explain each component in greater details. To begin with, we divide our models into <strong>video models</strong> and <strong>segment models</strong>. The job of video models is to propose plausible videos that might contain correctly labelled segments. The segment models are used to re-rank the proposed segments by the video models. For both models, we decided to measure mAP (mean average precision), mean positive recall and mean negative recall. Mean positive recall is defined to be the number of positive segments retrieved within the top 100k divided by the total amount of positive segments. Same for mean negative recall. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1221384%2Fdb5caaa8ff057f736c4ed7c3757ba9dc%2FEnsemble%20(1" alt="">.jpg?generation=1571178611116954&alt=media)</p>
<p>As shown in the diagram above, blue rectangles indicate regular models. Red rectangles indicate meta models that are trained using other models. Models in the purple area are trained with the YouTube-8M v2 dataset only. Models in the yellow area may be trained with both v2 and v3, or v3 only. Green squares show the public leaderboard mAP. The orange square encloses candidates generated by the video model. The final submission is an average blend of all v3 models after a filtering process. </p>
<h1>Video Models and candidates generation</h1>
<p>Since video models are used to propose potential solutions, the only metric we measure for video models is positive recall. We decided to use the last year's 1st place model (Skalic et al. 2018) as our video model due to its high positive recall (99.7%). In order to generate candidates, we predict the top 100k most plausible videos that might contain positive segments, then break down the videos into segments to form the candidates pool. </p>
<h1>Segment Models</h1>
<p>Segment models are used to re-rank the segments proposed by the video models. Therefore, we measure their mAP within the 100k segments proposed by the video model.
Here are the segment models that we used: </p>
<p>| Model | Public LB |
| ------------- |:-------------:|
| segment LSTM | 81.3 |
| group LSTM | 80.8 |
| NetVLAD_LF | 81.2 |
| netFV_LF | 81.0 |
| NetVLAD_RI_LF | 80.6 |
| transformer | 79.1 |
| transformer_AG | 80.9 |
| ConvS1_RI | 81.3 |
| Conv_LF_RI | 80.8 |
| AttConv_RI | 81.2 |
| ConvS1 | 81.4 |
| CCRL XGB | 81.3 |
| CCRL XGB2 | 80.8 |
| CCRL XGB3 | 80.9 |</p>
<p>In the table above, RI means the model is randomly initialized instead of being trained on YouTube-8M v2 data first. LF indicates late fusion. AG indicates that the model is an average of many randomly initialized runs. More information about these models will be shown in the workshop paper. </p>
<h1>Post-processing filter</h1>
<p>For our final submission, we first average all the score predictions of the above models. After score averaging, we perform a filtering process to smooth out the predictions. The idea of filtering is to use the neighbouring segments' predictions for the current segment prediction. The filter we used is a size 3 kernel with value 1 in every dimension. We apply the filter over the time dimension and use a non-decreasing function on top. </p>
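<p>A minimal sketch of that neighbour-smoothing filter over the time dimension; the exact non-decreasing function on top is not specified in the write-up, so it is left as a pluggable identity here, and the per-video segment ordering is assumed to be available:</p>
<p><code>
import numpy as np

def smooth_segment_scores(scores, monotone_fn=lambda x: x):
    # scores: per-class scores of consecutive segments of one video, shape (n_segments,)
    kernel = np.ones(3)                     # size-3 filter with value 1 everywhere
    smoothed = np.convolve(scores, kernel, mode="same")
    return monotone_fn(smoothed)            # any non-decreasing function preserves the ranking idea

print(smooth_segment_scores(np.array([0.1, 0.9, 0.2, 0.05])))
</code></p>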
<p>| Method | Public LB |
| ------------- |:-------------:|
| average predictions | 84.3 |
| average predictions + filtering | 84.4 |</p>
<h1>Discussion</h1>
<p>We propose many high performing segment models and a novel learning framework with class as input, dubbed CCRL. More details of the models will be presented in the workshop paper and the code in Tensorflow 2.0 will be released soon! </p> |
Recursion Cellular Image Classification | Our solution: 9th place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Dear all, thank you for the excellent spirit of competition and the astonishing results! Without such favorable pressure we wouldn't have been able to achieve such scores.</p>

<p>I will split the description into two main parts: the pipeline, and prediction with post-processing.</p>
<p><strong>Pipeline:</strong>
- <strong>6-channel images.</strong>
- <strong>Super Resolution.</strong> During exploratory data analysis we suddenly realized that the size of individual cells is rather small and convolutional layers might have a hard time learning all patterns. So we used a simple bicubic super-resolution technique. That really boosted our scores. We checked sizes up to 960 * 960. We tried to use GANs (ESRGAN in particular) for this task, but their performance was worse compared to vanilla bicubic interpolation.
- <strong>3Fold by experiments.</strong>
- <strong>Add all controls to train.</strong> We tried to add controls only from train experiments, but it didn’t work.
- <strong>Mixed Precision:</strong> Due to the large size of the images, we decided to use only mixed-precision training because of the otherwise inevitable GPU memory shortage.
- <strong>Gradual layer unfreezing:</strong> We found out that gradual layer unfreezing was a crucial condition for our models not to diverge. During the first 5 epochs we unfroze 20% of the layers (from the head) each epoch. Presumably this was because of mixed precision - some sort of gradient explosion or something similar.
- <strong>Metric learning - CosFace loss:</strong> We took my favorite metric learning loss. We also tried ArcFace and AdaCos, but they failed to converge or achieved worse results in mixed-precision mode.
- <strong>Cyclic Linear Lr Annealing.</strong>
- <strong>Cyclic Linear Scale annealing:</strong> This idea is inspired by AdaCos (<a href="https://arxiv.org/abs/1905.00292">https://arxiv.org/abs/1905.00292</a>), where the authors decrease the scale as the model reaches higher scores. S_max = 64, S_min = 24.
- <strong>Classes for distinct cell types are treated as distinct classes:</strong> So we predict a vector of 1108 * 4 + control classes. It improved convergence drastically.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F362676%2F318492d44eeea2d6e21b5ec53a34ffac%2FScreenshot%20from%202019-09-24%2020-27-00.png?generation=1569525109617780&alt=media" alt=""></p>
<ul>
<li><p><strong>Unsupervised Domain Adaptation by Backpropagation:</strong> <a href="https://arxiv.org/abs/1409.7495">https://arxiv.org/abs/1409.7495</a>. We took plate and cell type as the domain label and scheduled the gradient reversal layer coefficient in the same way as the LR and the CosFace scale (a minimal gradient-reversal sketch follows this list).
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F362676%2F81d59df47d3bf1faf271f11d74245223%2FScreenshot%20from%202019-09-24%2020-55-48.png?generation=1569525156026046&alt=media" alt=""></p></li>
<li><p><strong>Backbones:</strong> seresnet50 (img_size up to 960), senet154 (img_size up to 672), xception (img_size up to 860), inceptionresnetv2 (img_size up to 512), densenet161 (img_size up to 840), densenet201 (img_size up to 720). </p></li>
<li>SGD optimizer with momentum. We tried Adam, Lookahead and Ranger but they didn’t improve validation score.</li>
<li>Gradient normalization.</li>
<li>Batch size = 16 with accumulation up to 32.</li>
<li>Focal loss: gamma = 32.</li>
</ul>
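<p>A minimal sketch of a gradient reversal layer, the key piece of that domain-adaptation setup; the domain head and the coefficient value are placeholders, not the authors' exact code:</p>
<p><code>
import torch
from torch.autograd import Function

class GradReverse(Function):
    # Identity in the forward pass, negated (scaled) gradient in the backward pass.
    @staticmethod
    def forward(ctx, x, coeff):
        ctx.coeff = coeff
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.coeff * grad_output, None

def grad_reverse(x, coeff=1.0):
    return GradReverse.apply(x, coeff)

# usage: features go to the class head as usual, and through the reversal to the domain head
features = torch.randn(8, 128, requires_grad=True)
domain_logits = torch.nn.Linear(128, 4)(grad_reverse(features, coeff=0.3))
</code></p>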
<p><strong>Prediction:</strong>
- <strong>Prediction balancing.</strong> This technique can be used when classes are equally balanced. Excellent implementation: <a href="https://www.kaggle.com/c/quickdraw-doodle-recognition/discussion/73803#latest-438270">https://www.kaggle.com/c/quickdraw-doodle-recognition/discussion/73803#latest-438270</a>.
- <strong>Progressive prediction.</strong> The idea is simple: within every experiment each class can appear only once. So if we predict some class with high probability, there is little chance that this class should be predicted again with lower probability. We take the 50% most probable classes, remove them from consideration, predict the next samples, and so on (a minimal sketch follows this list).
- <strong>Leak.</strong></p>
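<p>A minimal sketch of the progressive prediction idea; the 50% step and the one-class-per-experiment constraint come from the write-up, while the greedy assignment details are illustrative assumptions (it also assumes at least as many classes as samples in the group):</p>
<p><code>
import numpy as np

def progressive_predict(probs, step_frac=0.5):
    # probs: (n_samples, n_classes) probabilities for one experiment / plate group
    probs = probs.copy()
    assigned = {}
    remaining = probs.shape[0]
    while remaining:
        n_assign = max(int(np.ceil(remaining * step_frac)), 1)
        # flatten, take the most confident (sample, class) pairs first
        order = np.dstack(np.unravel_index(np.argsort(-probs, axis=None), probs.shape))[0]
        taken = 0
        for sample, cls in order:
            if sample in assigned or cls in assigned.values():
                continue
            assigned[sample] = cls
            probs[:, cls] = -1.0          # this class cannot be used again
            probs[sample, :] = -1.0       # this sample is done
            taken += 1
            if taken == n_assign:
                break
        remaining = probs.shape[0] - len(assigned)
    return assigned

print(progressive_predict(np.random.rand(6, 6)))
</code></p>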
<p><strong>What didn’t work:</strong>
- Mix-up
- ArcFace
- AdaCos
- Deformable ConvNet - <a href="https://arxiv.org/pdf/1811.11168.pdf">https://arxiv.org/pdf/1811.11168.pdf</a>
- Deep Supervision
- Weight Standardisation
- Concatenation of two sites together to make 12-channel image</p> |