Title of Competition
Title of Writeup
User
Writeup
Freesound Audio Tagging 2019
6th place solution fastai
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here are the write-up and code for my solution!</p> <p><strong>Blog post:</strong> <a href="https://link.medium.com/Kv5kyHjcIX">https://link.medium.com/Kv5kyHjcIX</a> <strong>Code:</strong> <a href="https://github.com/mnpinto/audiotagging2019">https://github.com/mnpinto/audiotagging2019</a></p> <p><strong>Summary:</strong> * Models: xresnets * Image size: 256x256 * Mixup sampling from a uniform distribution * Horizontal and Vertical Flip as new labels (total 320 labels) * Compute the loss only for samples with an F2 score (at a threshold of 0.2) less than 1. * Noisy data: ~3500 "good noisy samples" used the same way as curated data * TTA: Slice clips every 128 px along the time axis (no overlap), generate predictions for each slice and compute the <code>max</code> for each class. * Final submissions: 1) Average of 2 models: public LB 0.742, private LB 0.74620; 2) Average of 6 models: public LB 0.742, private LB 0.75421.</p> <p><strong>Additional observations:</strong> * I found better results when using random crops of 128x128 rescaled to 256x256, compared to random crops of 128x256 rescaled to 256x256; I was expecting the opposite. * I wonder why max_zoom=1.5 works; I would not expect so.</p> <p><strong>Acknowledgements:</strong> <a href="/daisukelab">@daisukelab</a> thanks for the code to generate the mel spectrograms! Thanks to everyone who contributed to the discussions or with kernels. And finally, thanks to the organizers for this great competition!</p>
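<p>As a small illustration of the TTA scheme above (slice every 128 px along time, predict each slice, take the per-class max), here is a minimal PyTorch sketch; the model interface, tensor shapes, and sigmoid output are assumptions, not the author's exact code.</p> <pre><code>import torch

def tta_max_over_slices(model, spec, slice_width=128):
    """Slice a (n_mels, T) spectrogram every `slice_width` frames (no overlap),
    predict each slice, and keep the per-class maximum."""
    starts = range(0, spec.shape[1] - slice_width + 1, slice_width)
    slices = [spec[:, s:s + slice_width] for s in starts] or [spec]
    batch = torch.stack([s.unsqueeze(0) for s in slices])   # (n_slices, 1, n_mels, slice_width)
    with torch.no_grad():
        probs = torch.sigmoid(model(batch))                  # (n_slices, n_classes)
    return probs.max(dim=0).values                           # (n_classes,)
</code></pre>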
Freesound Audio Tagging 2019
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone! </p> <p>At first, congratulations to all participants and to all who got any medals or any new ranks!</p> <p>Below I want to outline my solution. My approach is quite similar to many approaches that were previously shared on the forum, so I will focus only on that tricks and methods that I find important or which weren't mentioned in other's solutions.</p> <p><strong>Models</strong></p> <p>I used two types of models, both are based on convolutions. The first type uses 2d convolutions and works on top of mel-scale spectrograms, while the second uses 1d-convolutions on top of raw STFT representations with relatively small window size like 256, so it's only 5 ms per frame or so. Both types of models are relatively shallow and consist of 10-12 convolutional layers (or 5-6 resnet blocks) with a small number of filters. I use a form of deep supervision by applying global max pooling after each block (typically starting from the first or second block) and then concatenating maxpool outputs from each layer to form the final feature vector which then goes to a 2-layer fully-connected classifier. I also tried using RNN's instead of a max pooling for some models. It made results a bit worse, but RNN seemed to make different mistakes, so it turned out to be a good member of the final ensemble.</p> <p><strong>Frequency encoding</strong></p> <p>2d convolutions are position-invariant, so the output of a convolution would be the same regardless of where the feature is located. Spectrograms are not images, Y-axis corresponds to signal frequency, so it would be nice to assist a model by providing this sort of information. For this purpose, I used a linear frequency map going from -1 to 1 and concatenated it to input spectrogram as a second channel. It's hard to estimate now without retraining all the models how much gain I got from this little modification, but I can say It was no less than 0.005 in terms of local CV score.</p> <p><strong>This is not really a classification task</strong></p> <p>Most teams treated the problem as a multilabel classification and used a form of a binary loss such as binary cross entropy or focal loss. This approach is definitely valid, but in my experiments, it appeared to be a little suboptimal. The reason is the metric (lwlrap) is not a pure classification metric. Contrary to accuracy or f-score, it is based on <em>ranks</em>. So it wasn't really a surprise for me when I used a loss function based on ranks rather than on binary outputs, I got a huge improvement. Namely, I used something called LSEP (<a href="https://arxiv.org/abs/1704.03135">https://arxiv.org/abs/1704.03135</a>) which is just a soft version of pairwise rank loss. It makes your model to score positive classes higher than negative ones, while a binary loss increases positive scores and decreases negative scores independently. When I switched to LSEP from BCE, I immediately got approximately 0.015 of improvement, and, as a nice bonus, my models started to converge much faster.</p> <p><strong>Data augmentation</strong></p> <p>I used two augmentation strategies. The first one is a modified MixUp. 
In contrast to the original approach, I used an OR rule for mixing labels. I did so because a mix of two sounds still allows you to hear both. I tried the original approach with weighted targets at some point and my results got worse.</p> <p>The second strategy is augmentations based on audio effects such as reverb, pitch, tempo and overdrive. I chose the parameters of these augmentations by carefully listening to augmented samples.</p> <p>I have found augmentations to be very important for getting good results. I guess the total improvement I got from these two strategies is about 0.05 or so. I also tried several other approaches such as splitting the audio into several chunks and then shuffling them, replacing some parts of the original signals with silence, and some others, but they didn't make my models better.</p> <p><strong>Training</strong></p> <p>I used quite large audio segments for training. For most of my models, I used segments from 8 to 12 seconds. I didn't use TTA for inference and used full-length audio instead.</p> <p><strong>Noisy data</strong></p> <p>I tried several unsupervised approaches such as <a href="https://arxiv.org/abs/1807.03748">Contrastive Predictive Coding</a>, but never managed to get good results from them.</p> <p>I ended up applying a form of iterative pseudolabeling. I predicted new labels for the noisy subset using a model trained on curated data only, chose the best 1k in terms of the agreement between the predicted labels and the actual labels, and added these samples to the curated subset with their original labels. I repeated the procedure using the top 2k labels this time. I applied this approach several times until I reached the 5k best noisy samples. At that point, predictions generated by a model started to diverge significantly from the actual noisy labels. I decided to discard the labels of the remaining noisy samples and simply used model predictions as labels. In total, I trained approximately 20 models using different subsets of the noisy train set with different pseudolabeling strategies.</p> <p><strong>Inference</strong></p> <p>I got a great speed-up by computing both STFT spectrograms and mel spectrograms on a GPU. I also grouped samples with similar lengths together to avoid excessive padding. These two methods combined with relatively small models allowed me to predict the first stage test set in only 1 minute with any of my models (5 folds). </p> <p><strong>Final ensemble</strong></p> <p>For the final solution, I used 11 models trained with slightly different architectures (1d/2d cnn, rnn/no-rnn), slightly different subsets of the noisy set (see "noisy data" section) and slightly different hyperparameters. </p> <p>Source code is available: <a href="https://github.com/ex4sperans/freesound-classification/tree/master">https://github.com/ex4sperans/freesound-classification/tree/master</a></p> <p>I'll add a detailed README soon.</p>
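<p>The rank-based LSEP loss described above can be written compactly. Here is a minimal PyTorch sketch (raw logits and multi-hot targets assumed; this is not the author's exact implementation and omits any negative-pair sampling).</p> <pre><code>import torch

def lsep_loss(logits, targets):
    """LSEP: log(1 + sum over positive p and negative n of exp(s_n - s_p)) per sample.
    logits: (B, C) raw scores; targets: (B, C) multi-hot labels in {0, 1}."""
    diff = logits.unsqueeze(1) - logits.unsqueeze(2)             # diff[b, p, n] = s_n - s_p
    mask = targets.unsqueeze(2) * (1.0 - targets.unsqueeze(1))   # 1 where p is positive, n negative
    return torch.log1p((diff.exp() * mask).sum(dim=(1, 2))).mean()
</code></pre>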
Freesound Audio Tagging 2019
A Naive Solution (28th in Private LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I'd like to thank the organizers and all participants, and congratulate the winners on their excellent results! For me, this is my first image (sound) competition on Kaggle and I have learned a lot.</p> <p>Many top teams unveiled their nice solutions (and scores). So I'm ashamed to publish my solution here, but I will do so as a memento. </p> <hr> <h1>Local Validation</h1> <p>I created an 80-class-balanced 5-fold validation and evaluated model performance with BCE. At an early stage of this competition, I checked the correlation between BCE and LWLRAP. The figure below shows a reasonably good correlation. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F479538%2F01d7ab43832e5eb5eec8a139fdb69c19%2Ffig1.png?generation=1561761823509063&amp;alt=media" alt=""></p> <h1>Features</h1> <ol> <li>Remove silent audios</li> <li>Trim silent parts</li> <li>Log Mel Spectrogram ( SR 44.1kHz, FFT window size 80ms, Hop 10ms, Mel Bands 64 )</li> <li>Clustered frequency-wise statistical features</li> </ol> <p>I did not do anything special, but I will explain <strong>4</strong>. <br> I thought a CNN can capture audio properties in frequency space, like a Fourier analysis, but is less suited to statistical values (max, min, ...). Also, the labels of train-noisy are unreliable, so I created frequency-wise statistical features from the spectrograms and computed cluster distances without using any label information (unsupervised). The procedure is as follows: </p> <p>a. Compute 25 statistical values (max, min, primary difference, secondary difference, etc.) and flatten them into 64 x 25 (= 1600) features <br> b. Compute 200 cluster distances with MiniBatchKMeans (dimensionality reduction from 1600 to 200)</p> <p>These features pushed my score up by about 0.5~1% for each model. </p> <h1>Models, Train and Prediction</h1> <p>In this competition, we do not have a lot of time to make inferences (less than 1 GPU hour). So I selected 3 relatively light-weight models. </p> <ul> <li>Mobile Net V2 with/without clustered features</li> <li>ResNet50 with/without clustered features</li> <li>DenseNet121 with/without clustered features</li> </ul> <p>My final submission is the ensemble of the above 6 models (3 models x 2 types of features). Here is the model pipeline; the training and TTA prediction settings are shown in this figure. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F479538%2Fd21b6e837c1be18cc4a66bbbc82c3174%2Ffs2019_model-pipeline_01.png?generation=1561760818137707&amp;alt=media" alt=""></p> <h1>Performance</h1> <p>I used weighted geometric averaging to blend the 6 model predictions. The table below shows each model's performance and the blending coefficients. The coefficients were computed by optimization based on out-of-fold predictions. 
LWLRAP values are calculated with 5-fold OOF predictions on the train-curated data.</p>

| Model | LWLRAP on train-curated | Blending Coefficient |
| --- | --- | --- |
| MobileNetV2 with clustered features (cf) | 0.84519 | 0.246 |
| MobileNetV2 without cf | 0.82940 | 0.217 |
| ResNet50 with cf | 0.84490 | 0.149 |
| ResNet50 without cf | 0.83006 | 0.072 |
| DenseNet121 with cf | 0.84353 | 0.115 |
| DenseNet121 without cf | 0.83501 | 0.201 |
| <strong>Blended</strong> | 0.87611 (Private 0.72820) | --- |

<p>Thank you very much for reading to the end. <br> See you at the next competition!</p>
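<p>A minimal sketch of the clustered frequency-wise statistical features (point 4 above): per-band statistics are computed from the 64-band log-mel spectrogram, flattened, and turned into cluster distances with MiniBatchKMeans. Only a few of the 25 statistics are shown, and all names are illustrative.</p> <pre><code>import numpy as np
from sklearn.cluster import MiniBatchKMeans

def frequency_stats(spec):
    """Per-mel-band statistics for a (64, T) log-mel spectrogram."""
    d1, d2 = np.diff(spec, axis=1), np.diff(spec, n=2, axis=1)
    feats = [spec.max(1), spec.min(1), spec.mean(1), spec.std(1),
             d1.mean(1), d1.std(1), d2.mean(1), d2.std(1)]
    return np.concatenate(feats)                       # (64 * n_stats,)

def clustered_features(mel_specs, n_clusters=200):
    """Unsupervised reduction: distances to 200 k-means clusters."""
    X = np.stack([frequency_stats(s) for s in mel_specs])
    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=0).fit(X)
    return km.transform(X)                             # (n_clips, 200) cluster distances
</code></pre>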
Freesound Audio Tagging 2019
[155th place] My very first competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks a lot to the organizers - it was a fun task, especially the noisy dataset, which is full of hilarious gems. Kaggle rolled out a new UI during the early days of the competition, resulting in completely missing kernel code and invisible cells, and a new version of PyTorch was released on top of that.</p> <p>These are just my takeaways, moving forward to new competitions:
- <strong>Build a team as early as possible</strong>. Sometimes it was very hard to motivate myself to stay up the whole night after a full working day at the day job. There are plenty of tasks of various scales and everyone can find their place.
- <strong>Good data == Good model / Bad data == bad model</strong>. I've heard this dozens of times, but it's very easy to get tunnel vision on network structure or training schedule and completely forget about input data transformations.
- <strong>Cross-validation. It was actually the first time I've used KFold</strong>. The problem was that I stacked all predictions by simply computing the mean value. There are plenty of more advanced tactics that would've given slightly better results.
- <strong>Regularization is magic (sometimes)</strong>. Training a network is some kind of art :) I was using vanilla PyTorch and it requires a bit more in-depth knowledge compared to Fast.ai. That's just my impression.
- <strong>Octave</strong> convolution doesn't seem to work well on spectrograms (at least I didn't see any improvements). I haven't tried <strong>CoordConv</strong>, but it would be interesting to compare results.</p>
Freesound Audio Tagging 2019
My write up.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's finally the weekend, so I have time to write a quick post. It might be disappointing if you came in here to see whether there was any magic or tricks.</p> <p>As many of you know, there is quite some difference even between the curated training set and the public test set. For example, those almost duplicated 'Mr. Peanut' clips were very bad and would confuse the model on the test set. For the noisy set, the differences are much larger. The slams seem to be from a basketball court (slam dunk?), the taps are mostly tap dance, and the surf and wave clips are much windier. It also has notably more human speech in clips labeled with non-human sounds. I also tried many methods to use the noisy set, but with limited success, reaching only .72.</p> <p>My .76 model is fine-tuned to overfit the test set. I spent my Memorial Day holiday weekend hand-making a tuning set with cherry-picked clips from the curated and noisy sets that sound most similar to the public test set, and used this dataset to fine-tune my models. It is kind of cheating, in that it is equivalent to sneaking a peek at the exam paper and giving my model a targeted list of past quiz questions to practice on right before the exam. It will do well on this particular exam, but might not do well on a new one. </p> <p>I did not use this method as my final submission, because I don't think it solves the problem the competition is about. I did a similar thing in my previous <a href="https://www.kaggle.com/c/inclusive-images-challenge/">competition</a>, where I missed a top spot by not submitting a fine-tuned model.</p> <p>So, quickly, about my solution without the fine-tuning: it is conv nets trained with mixup and random frequency/time masks on the curated set and pseudo-labeled datasets. </p> <p>I have two pseudo-label sets. Here is how I generated them. For each audio clip, I run my model over a sequence of windows on it. For each window, I calculate the ratio of the top k activations to the sum of all activations. I call this ratio the signal-noise ratio. My first pseudo-label set consists of the windows with the highest ratios and their associated most confident labels. My second pseudo-label set consists of full clips; I average the activations of the windows with the highest ratios to form the labels. During training, I zip the curated training set and the two pseudo-label sets, calculate the losses separately, and linearly combine them for the backward pass. </p> <p>At inference time, I do the same scanning through windows, calculating ratios, etc., to generate predictions as well. I found it better than averaging random crops, but this may simply be due to the fact that my model learned to focus on those strong, short signals from the first pseudo-label set.</p>
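<p>A rough sketch of the "signal-noise ratio" window selection described above: run the model over sliding windows and score each window by the ratio of its top-k activations to the sum of all activations. The model interface, sigmoid outputs, and value of k are assumptions.</p> <pre><code>import torch

def window_snr(model, clip_windows, topk=3):
    """clip_windows: (n_windows, 1, n_mels, T) tensor of spectrogram windows.
    Returns per-window ratios and the activations of the most confident window."""
    with torch.no_grad():
        act = torch.sigmoid(model(clip_windows))              # (n_windows, n_classes)
    ratio = act.topk(topk, dim=1).values.sum(1) / act.sum(1)  # "signal-noise ratio"
    best = ratio.argmax()
    return ratio, act[best]
</code></pre>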
Freesound Audio Tagging 2019
9th place solution: smaller and faster
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I would like to congratulate all winers and participants of this competition and thank kaggle and the organizers for posting this interesting challenge and providing resources for participating in it. Also, I want to express my highest gratitude and appreciation to my teammates: <a href="/suicaokhoailang">@suicaokhoailang</a>, <a href="/theoviel">@theoviel</a>, and <a href="/yanglehan">@yanglehan</a> for working with me on this challenge. It was the first time when I worked with audio related task, and it was a month of quite intense learning. I would like to give special thanks to <a href="/theoviel">@theoviel</a> for working quite hard with me on optimization of our models and prepare the final submissions.</p> <p>The code used for training our final models is available now at <a href="https://www.kaggle.com/theoviel/9th-place-modeling-kernel">this link</a> (private LB 0.72171 score, top 42, with a single model (5 CV folds) trained for 7 hours). Further details of our solution can be found in our <a href="https://storage.googleapis.com/kaggle-forum-message-attachments/563780/13697/DCASE2019_Challenge.pdf">DCASE2019 technical report</a>.</p> <p>The <strong>key points</strong> of our solutions are the following (see details below): (1) Use of <strong>small and fast models with optimized architecture</strong>, (2) Use of <strong>64 mels</strong> instead of 128 (sometimes less gives more) with <strong>4s duration</strong>, (3) Use <strong>data augmentation and noisy data</strong> for pretraining.</p> <p><strong>Stage 1</strong>: we started the competition, like many participants, with experimenting with common computer vision models. The input size of spectrograms was 256x512 pixels, with upscaling the input image along first dimension by a factor of 2. With this setup the best performance was demonstrated by DenseNet models: they outperformed the baseline model published in <a href="https://www.kaggle.com/mhiro2/simple-2d-cnn-classifier-with-pytorch">this kernel</a> and successfully used in the competition a year ago, also Dnet121 was faster. With using pretraining on full noisy set, spectral augmentation, and MixUp, CV could reach ~0.85+, and public score for a single fold, 4 folds, and ensemble of 4 models are 0.67-0.68, ~0.70, 0.717, respectively. Despite these models are not used for our submission, these experiments have provided important insights for the next stage.</p> <p><strong>Stage 2</strong>: <strong>Use of noisy data</strong>: It is the main point of this competition that organizers wanted us to focus on (no external data, no pretrained models, no test data use policies). We have used 2 strategies: (1) pretraining on full noisy data and (2) pretraining on a mixture of the curated data with most confidently labeled noisy data. In both cases the pretraining is followed by fine tuning on curated only data. The most confident labels are identified based on a model trained on curated data, and further details can be provided by <a href="/theoviel">@theoviel</a>. 
For our best setup we have the following values of CV (in stage 2 we use a 5-fold scheme): training on curated data only - 0.858, pretraining on full noisy data - 0.866, curated data + 15k best noisy data examples - 0.865, curated data + 5k best noisy data examples - 0.872. We utilized all 3 strategies of noisy data use to create variety in our ensemble.</p> <p><strong>Preprocessing</strong>: According to our experiments, big models, like Dnet121, work better on 128 mels and even higher image resolution, while the default model reaches the best performance for 64 mels. This setup also decreases training time and improves convergence speed. 32 mels could also be considered, but the performance drops to 0.856 for our best setup. Use of 4s intervals instead of the traditional 2s also gave a considerable boost. The input image size for the model is 64x256x1. We tried normalization of the data based on both per-image and global train set statistics, and the results were similar, though our best CV was reached with global normalization. In the final models we used both strategies to create diversity. We also tried to experiment with the fft window size but did not see a significant difference and stayed with 1920. One thing we didn't have time to try is using mels with different window sizes as channels of the produced image. In particular, <a href="https://arxiv.org/abs/1706.07156">this paper</a> shows that some classes prefer a longer window size while others prefer a shorter one. The preprocessing pipeline is similar to the one described in <a href="https://www.kaggle.com/daisukelab/creating-fat2019-preprocessed-data">this kernel</a>.</p> <p><strong>Model architecture</strong>: At stage 2 we used the model from <a href="https://www.kaggle.com/mhiro2/simple-2d-cnn-classifier-with-pytorch">this kernel</a> as a starting point. The performance of this base model for our best setup is 0.855 CV. Based on our prior positive experience with DenseNet, we added dense connections inside convolution blocks and concatenated pooling, which boosted the performance to 0.868 in our best experiments (model M1):</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pool=True):
        super().__init__()
        padding = kernel_size // 2
        self.pool = pool
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(out_channels + in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        )

    def forward(self, x):  # x.shape = [batch_size, in_channels, a, b]
        x1 = self.conv1(x)
        # dense connection: the block input is concatenated with the first conv's output
        x = self.conv2(torch.cat([x, x1], 1))
        if self.pool:
            x = F.avg_pool2d(x, 2)
        return x  # x.shape = [batch_size, out_channels, a//2, b//2]
</code></pre> <p>The increase of the number of convolution blocks from 4 to 5 gave only 0.865 CV. Use of pyramidal pooling for the 2nd, 3rd, and 4th conv blocks (M2) gave a slightly worse result than M1. Finally, our ultimate setup (M3), consisting of 5 conv blocks with pyramidal pooling, reached 0.872 CV. DenseNet121 in the same pipeline reached only 0.836 (DenseNet121 requires higher image resolution, 256x512, to reach 0.85+ CV). From the experiments, it looks like for audio it is important to have nonlinear operations before size reduction by pooling, though we did not check this in detail. We used M1, M2, and M3 to create variability in our ensemble. 
Because of the submission limit we checked performance of only a few models in public LB, with the best single model score 0.715.</p> <p>This plot illustrates architecture of M3 model: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2F4c15c1ec24d235c7834cc1ae94a3ca38%2FM3.png?generation=1561588228369529&amp;alt=media" alt=""></p> <p>Here, all tested models are summarized: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2Ffaca75bb8fb9dc910a648d1c141313be%2Fmodels.png?generation=1561746099636890&amp;alt=media" alt=""></p> <p>And here the model performance for our best setup: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2Ff7014dbf8de75685fff13c683384f068%2Fp.png?generation=1561588425442446&amp;alt=media" alt=""></p> <p><strong>Data augmentation</strong>: The main thing for working with audio data is MixUp. In contrast to images of real objects, <a href="https://towardsdatascience.com/whats-wrong-with-spectrograms-and-cnns-for-audio-processing-311377d7ccd">sounds are transparent</a>: they do not overshadow each other. Therefore, MixUp is so efficient for audio and gives 0.01-0.015 CV boost. At the stage 1 the best results were achieved for alpha MixUp parameter equal to 1.0, while at the stage 2 we used 0.4. <a href="https://arxiv.org/abs/1904.08779">Spectral augmentation</a> (with 2 masks for frequency and time with the coverage range between zero and 0.15 and 0.3, respectively) gave about 0.005 CV boost. We did not use stretching the spectrograms in time domain because it gave lower model performance. In several model we also used <a href="https://arxiv.org/abs/1905.09788">Multisample Dropout</a> (other models were trained without dropout); though, it decreased CV by ~0.002. We did not apply horizontal flip since it decreased CV and also is not natural: I do not think that people would be able to recognize sounds played from the back. It is the same as training ImageNet with use of vertical flip.</p> <p><strong>Training</strong>: At the pretraining stage we used one cycle of cosine annealing with warm up. The maximum lr is 0.001, and the number of epochs is ranged between 50 and 70 for different setups. At the stage of fine tuning we applied ReduceLROnPlateau several times to alternate high and low lr. The code is implemented with Pytorch. The total time of training for one fold is 1-2 hours, so the training of entire model is 6-9 hours. Almost all our models were trained at kaggle, and the kernel is at <a href="https://www.kaggle.com/theoviel/9th-place-modeling-kernel">this link</a>. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2F51ade3b68a7ab201b15fb9736d618a7b%2Flr.png?generation=1561588570346398&amp;alt=media" alt=""></p> <p><strong>Submission</strong>: Use of 64 mels has allowed us also to shorten the inference time. In particular, each 5-fold model takes only 1-1.5 min for generation of a prediction with 8TTA. Therefore, we were able to use an ensemble of 15 models to generate the final submissions. The final remark about the possible reason of the gap between CV and public LB: <a href="https://arxiv.org/pdf/1904.05635v1.pdf">this paper</a> (Task 1B) reports 0.15 lwlrap drop for test performed on data recorded with different device vs the same device as used for training data. So, the public LB data may be selected for a different device, but we never know if it is true or not.</p>
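<p>For reference, the spectral augmentation mentioned above (random frequency and time masks with bounded coverage) can be sketched as follows; the exact mask counts and fractions here are taken loosely from the description, and the array layout is an assumption.</p> <pre><code>import numpy as np

def spec_augment(spec, max_freq_frac=0.15, max_time_frac=0.3, n_masks=2, rng=np.random):
    """Zero out random frequency and time bands on a (n_mels, T) spectrogram."""
    spec = spec.copy()
    n_mels, n_frames = spec.shape
    for _ in range(n_masks):
        f = rng.randint(0, int(max_freq_frac * n_mels) + 1)
        f0 = rng.randint(0, n_mels - f + 1)
        spec[f0:f0 + f, :] = 0.0
        t = rng.randint(0, int(max_time_frac * n_frames) + 1)
        t0 = rng.randint(0, n_frames - t + 1)
        spec[:, t0:t0 + t] = 0.0
    return spec
</code></pre>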
Freesound Audio Tagging 2019
Main features of final solution [22 on public]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>1) Using power(0.125) instead of log for spectrograms; 2) Selecting a balanced (or unbalanced) sample of noisy files that are well predicted by the current model (1500, 2500, 5000, 10000), with priority for files rarely predicted on the validation set; 3) Different parameters of the spectrogram extraction (hop, n_mels). We used: - rate: 44100, n_mels: {128, 256}, n_fft: 2560, hop_length: {512, 694, 1024}, fmin: 20; - dynamic hop preprocessing: hop chosen depending on the length of the sound (min_hop: 16, max_hop: 1024);</p> <p>4) Mixup with a beta distribution (0.2, 0.2); 5) Class weights in BCE calculated from the ratio of the number of times each class is predicted as top-1 on validation; 6) Training with different seeds and choosing the best model fold by fold ^); 7) An ensemble of 7 relatively uncorrelated models.</p>
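<p>A minimal sketch of point 1 (power compression instead of log), using spectrogram parameters from point 3; the file path and epsilon are placeholders.</p> <pre><code>import numpy as np
import librosa

y, sr = librosa.load("clip.wav", sr=44100)            # hypothetical input clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2560,
                                     hop_length=512, n_mels=128, fmin=20)
log_spec = np.log(mel + 1e-6)   # the usual log-mel
pow_spec = mel ** 0.125         # power(0.125) compression used here instead of log
</code></pre>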
Freesound Audio Tagging 2019
13th solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi,</p> <p>As I have made slides about my solution, I would like to post them here. Thanks to the organizers and staff for this great competition.</p> <p>Here is a brief summary.</p> <p>Inference:</p> <ol> <li>Raw audio to log mel.</li> <li>Apply a sliding window to the log mel and predict each window with the CNN.</li> <li>Feed the temporal predictions from step 2 to an RNN.</li> <li>Feed the temporal predictions from step 2 to RakelD + ExtraTrees.</li> <li>Average steps 3 and 4.</li> </ol> <p>Training:</p> <ol> <li>Train a CNN using only curated data.</li> <li>Apply a sliding window to the log mel of the noisy dataset and predict each window with the CNN.</li> <li>Feed the temporal predictions from step 2 to an RNN. This gives pseudo labels for the noisy dataset.</li> <li>Average the pseudo labels with the original noisy labels at a 0.75:0.25 ratio.</li> <li>Train the CNN using all data.</li> <li>Train some models following the inference steps.</li> </ol> <p>Details of CNN training:</p> <ul> <li>DenseNet 121</li> <li>SpecAugment</li> <li>Mixup</li> <li>CLR + SWA + Snapshot ensemble.</li> </ul> <p>I intensively tried SSL approaches such as ICT, but they did not work.</p>
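<p>A sketch of the shared step 2 above (sliding-window CNN predictions that form the temporal sequence fed to the RNN and RakelD + ExtraTrees), plus the label blend from training step 4. Window size, hop, and sigmoid outputs are assumptions.</p> <pre><code>import torch

def window_predictions(model, logmel, win=128, hop=64):
    """Slide a window over a (n_mels, T) log-mel clip and collect per-window probabilities."""
    starts = range(0, max(logmel.shape[1] - win, 0) + 1, hop)
    batch = torch.stack([logmel[:, s:s + win].unsqueeze(0) for s in starts])
    with torch.no_grad():
        return torch.sigmoid(model(batch))     # (n_windows, n_classes) temporal predictions

# Training step 4: soften the noisy labels with the RNN pseudo labels (0.75 : 0.25)
# blended_labels = 0.75 * pseudo_labels + 0.25 * original_noisy_labels
</code></pre>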
Freesound Audio Tagging 2019
Solution ready on GitHub
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is for you while waiting for the 2nd stage results :)</p> <p><a href="https://github.com/daisukelab/freesound-audio-tagging-2019">https://github.com/daisukelab/freesound-audio-tagging-2019</a></p> <p><img src="https://github.com/daisukelab/freesound-audio-tagging-2019/raw/master/images/data_example.png" alt="title_picture"></p> <p>Thank you to everybody who organized this competition, and congratulations to the winners.</p> <p>It was a wonderful opportunity once again to dig deeper into audio data for scene understanding applications. I could gain intuition and understanding for real-world sound-scene-aware applications, which I'm hoping to see in the electric appliances around us in the future.</p> <p>I'd like to share what I did during this competition, beyond the public kernels, and how it turned out.</p> <p>Topics Summary:
- Audio preprocessing to create mel-spectrogram based features; 3 formats were tried.
- Audio preprocessing also tried background subtraction, focusing on sound events... not sure whether it worked or not.
- Tried to mitigate curated/noisy audio differences; inter-domain transfer regarding <em>frequency envelope</em>.
- Soft relabeling, using the co-occurrence probability of labels.
- Ensembling for the multi-label problem; per-class, per-model 2D weighting for better averaging.</p>
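<p>As an illustration of the last topic (per-class, per-model 2D weighting for ensemble averaging), here is a small sketch; the weight matrix is assumed to come from a separate optimization, e.g. on out-of-fold lwlrap.</p> <pre><code>import numpy as np

def weighted_ensemble(preds, weights):
    """preds: (n_models, n_samples, n_classes); weights: (n_models, n_classes)."""
    w = weights / weights.sum(axis=0, keepdims=True)   # normalize weights per class
    return np.einsum('mnc,mc->nc', preds, w)           # per-class weighted average
</code></pre>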
Freesound Audio Tagging 2019
7th place solution with commentary kernel
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Freesound 7th place solution</h1> <p>Thank you to all the competitors, the Kaggle team, and the host of this competition! We enjoyed the competition a lot and learned many things. </p> <p>We especially want to thank <a href="/daisukelab">@daisukelab</a> for his clear instructions, great kernels and datasets, <a href="/mhiro2">@mhiro2</a> for sharing an excellent training framework, and <a href="/sailorwei">@sailorwei</a> for showing his Inception v3 model in his public kernel.</p> <p>The points where our team's solution seems to differ from other teams' are as follows.</p> <p>Key points: Data Augmentation, Strength Adaptive Crop, Custom CNN, RandomResizedCrop</p> <p>The detailed explanation is in the following kernel, so please read it. </p> <p><a href="https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution">https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution</a> </p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/564097/13702/Shirogane_solution.png" alt="pipeline"> </p> <h2>Data Augmentation</h2> <p>We created 7 augmented training datasets with <a href="http://sox.sourceforge.net/">sox</a>.</p> <ul> <li>fade</li> <li>pitch * 2</li> <li>reverb</li> <li>treble &amp; bass * 2</li> <li>equalize </li> </ul> <p>We trained on a total of 4970 * 8 samples without leaks.</p> <h2>Crop policy</h2> <p>We use random crops because we use a fixed image size (128 * 128). Random crops got slightly better CV than cropping the first 2 seconds. At first, crops were drawn uniformly as in mhiro's kernel.</p> <h3>Strength Adaptive Crop</h3> <p>Many sound clips have important information in the first few seconds. Some samples have it in the middle of the clip. However, due to the nature of recording, it is rare to have important information at the end of a sound. The score drops by about 0.03~0.04 when learning only from the last few seconds.</p> <p>So we introduce <strong>Strength Adaptive Crop</strong>: we preferentially crop the region where the total dB is high. </p> <p>This method is very effective because most samples contain important information in places where the sound is loud.  </p> <p>CV 0.01 up <br> LB 0.004~0.005 up </p> <p>Detailed code is in <a href="https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution">this kernel</a>.</p> <h2>Model structure</h2> <ul> <li>InceptionV3 3ch </li> <li>InceptionV3 1ch </li> <li>CustomCNN </li> </ul> <p>CustomCNN is carefully designed around the characteristics of the sound. The details are in <a href="https://www.kaggle.com/hidehisaarai1213/freesound-7th-place-solution">this kernel</a>.</p> <h2>Augmentation in batch</h2> <ul> <li>Random erasing or Coarse Dropout</li> <li>Horizontal Flip</li> <li>mixup</li> <li>Random Resized Crop (only InceptionV3) </li> </ul> <h2>Training strategy</h2> <h3>TTA for validation</h3> <p>When RandomResizedCrop is used, the val score fluctuates, so if val TTA is not used, an appropriate epoch cannot be selected. Therefore, we used TTA for validation to ensure that validation can be properly evaluated. 
</p> <h3>Stage 1: pretrain with noisy data (warm-up)</h3> <p>We used noisy data to 'pre-train' our model. LB: about 0.01 up</p> <h3>Stage 2: Train with curated data 1</h3> <p>We used curated data to 'finetune' the models, which were 'pre-trained' with noisy data.</p> <h3>Stage 3: Train with curated data 2 (Inception only)</h3> <p>We used the stage 2 best weights for stage 3 training without RandomResizedCrop. We don't know why, but lwlrap goes up without RandomResizedCrop in the Inception models.</p> <h3>Score</h3> <p>Accurate single-model public scores were not measured.</p>

| | public | private |
|-----------|------------|------------|
| Inception 3ch | 0.724 over | 0.73865 |
| Inception 1ch | ?? | 0.73917 |
| CustomCNN | 0.720 over | 0.73103 |

<h2>Ensemble</h2> <p>(Inception 3ch + Inception 1ch + CustomCNN) / 3 </p> <p>private score: 0.75302</p>
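<p>A possible implementation sketch of the <strong>Strength Adaptive Crop</strong> described above: sample the crop start with probability proportional to the total dB inside each candidate window. The sampling scheme and variable names are assumptions, not the team's exact code (see their kernel for that).</p> <pre><code>import numpy as np

def strength_adaptive_crop(logmel_db, crop_width=128, rng=np.random):
    """logmel_db: (n_mels, T) spectrogram in dB; returns a (n_mels, crop_width) crop."""
    n_frames = logmel_db.shape[1]
    if crop_width >= n_frames:
        return logmel_db
    frame_energy = logmel_db.sum(axis=0)
    frame_energy = frame_energy - frame_energy.min() + 1e-6      # make weights non-negative
    window_energy = np.convolve(frame_energy, np.ones(crop_width), mode='valid')
    probs = window_energy / window_energy.sum()
    start = rng.choice(len(probs), p=probs)                      # louder windows drawn more often
    return logmel_db[:, start:start + crop_width]
</code></pre>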
Freesound Audio Tagging 2019
Video of the 18th place solution and general discussion of the competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Early this month I gave a talk about the competition at ML Trainings. You can find the video at the link below. I described the general approach to audio processing and talked about my solution as well as the ideas used by the winners. <a href="https://youtu.be/m0-JHQeLf3k">https://youtu.be/m0-JHQeLf3k</a> </p> <p>The talk is part of a meetup organized by ML Trainings (<a href="https://www.facebook.com/groups/1413405125598651/">https://www.facebook.com/groups/1413405125598651/</a>) – regular Moscow-based meetups backed by the ods.ai community. Usually the discussions about completed competitions are held in Russian, but recently some of them have started to be held in English, so you can find some English videos on the YouTube channel as well (and there will be more in the future).</p>
Freesound Audio Tagging 2019
13th place solution [public LB] - brief summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is my second year taking part in Freesound Audio Tagging; thanks to the organizers for an interesting and close-to-reality audio competition. Thanks to <a href="/daisukelab">@daisukelab</a> <a href="/mhiro2">@mhiro2</a> <a href="/ceshine">@ceshine</a> <a href="/jihangz">@jihangz</a>, and many Kagglers for the starter kernels and discussions.</p> <p>An earlier single-model kernel without CV is below: <a href="https://www.kaggle.com/sailorwei/fat2019-2d-cnn-with-mixup-lb-0-673">https://www.kaggle.com/sailorwei/fat2019-2d-cnn-with-mixup-lb-0-673</a> Discussion about single-model LB: <a href="https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/91881#latest-547036">https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/91881#latest-547036</a> <a href="https://github.com/sailor88128/dcase2019-task2">https://github.com/sailor88128/dcase2019-task2</a></p> <p>Final submission:
- On the curated dataset, 5-fold CV, Inception V3: single-model LB is 0.691 and the 5-model average is 0.713;
- On the curated and noisy datasets, with a specific loss function, 5-fold CV, 8-layer CNN: 5-model average 0.678;
- Geometric average of these two ensembles: 0.73+.</p> <p>Techniques we used:
1. PCEN spectrogram; did not try many parameters, not sure whether it worked for this competition;
2. mixup, about 0.01 increase in LB;
3. SpecAugment showed no increase, while RandomResizedCrop gave a better LB increase that we cannot explain for audio;
4. CV and TTA, about 0.05 increase for the primary single model;
5. MixMatch got a much lower LB with the same model; we need to check the code;
6. LR schedules: CyclicLR and cosine annealing did not show much difference.</p> <p>Others:
1. While lots of Kagglers said shallow CNNs work well, we did not get an effective shallow CNN. We tried AlexNet and VGG, all with low local lwlrap :(
2. Different CNNs were used at first, but we did not save them well for the final model ensemble;
3. The noisy-label dataset was not well used. We just used a weighted loss function to train on the noisy labeled data together with the curated dataset. The MixMatch we implemented for the task may be wrong in detail and should be checked.</p>
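<p>The final blend above is a geometric average of the two 5-fold ensembles; a minimal sketch (equal weights assumed, since the exact weights are not given):</p> <pre><code>import numpy as np

def geometric_blend(pred_a, pred_b, w_a=0.5, w_b=0.5, eps=1e-8):
    """pred_a, pred_b: (n_samples, n_classes) predicted probabilities."""
    return np.exp(w_a * np.log(pred_a + eps) + w_b * np.log(pred_b + eps))
</code></pre>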
Google Landmark Retrieval 2019
1st/3rd place solution by Team smlyaka
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>(Updated on June 10: Priv LB was updated. Finally, our team won 1st on the retrieval challenge!) (Updated on June 12: Our solution code and paper are available. We plan to refactor them after the CVPR conference. If you are interested in the code, I strongly recommend you to read it a few weeks later. Also, our model parameters and cleaned dataset will be available. Reproducing our work is very welcome! (code) <a href="https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution">https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution</a>, (paper) <a href="https://arxiv.org/abs/1906.04087">https://arxiv.org/abs/1906.04087</a>, (poster) <a href="https://www.dropbox.com/s/y3c3ovdiizz59j4/cvpr19_smlyaka_slides.pdf?dl=0">https://www.dropbox.com/s/y3c3ovdiizz59j4/cvpr19_smlyaka_slides.pdf?dl=0</a>)</p> <hr> <p>Congratulations to all participants and winners! And special thanks to Filip Radenovic for his great paper and open-source code. We learned a lot from them. We two worked very hard and enjoyed this competition very much.</p> <p>Here is a brief overview of our solution. (Now we are working hard for preparing the detail description toward the paper submission deadline.)</p> <ol> <li>Automated data cleaning with using train labels, global descriptors, and spatial verification (SV). For preparing a cleaned train set, we selected only the images that it's topk neighbors in descriptor space are same landmark_id and verified by DELF+SV. This dataset cleaning is essential to train networks since the original train set is too noisy to train networks properly. To train with the cleaned dataset significantly boosts our score compared to only training with train data from previous year's competition.</li> <li>We use FishNet-150, ResNet-101, and SEResNeXt-101 as backbones trained with cosine-based softmax losses, ArcFace and CosFace. Besides, various tricks are used such as aspect preserving of input image, cosine annealing, GeM pooling, and finetuning at full resolution on the last epoch with freezing Batch Normalization.</li> <li>For the recognition task, accumulating top-k (k=3) similarity in descriptor space and inliers-count by spatial verification helps a lot.</li> <li>For the retrieval task, reranking with using (3) significantly improved our score in stage2.</li> </ol> <p>DBA, alpha-QE, Iterative DELF-QE (proposed by Layer6 AI team last year), Diffusion [Yang+'19] or Graph traversal (EGT, rEGT) [Chang+'19] work in stage1, but they don't work well in stage2.</p> <p>Our experimental results for the retreival/recognition are attached. In table 1, all results are based on euclidean search only. No use of PCA/whitening, DBA, QE, Diffusion or Graph Traversal.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/546392/13423/table_1.png" alt="Table1"> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/546392/13424/table_2.png" alt="Table2"> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/546392/13425/table_3.png" alt="Table3"></p> <h2>Post-processing for the recognition task</h2> <p>Supplement to the Table 3. 
We treated frequently occurring landmarks as distractors and updated their confidence scores in a post-processing step. Specifically, the corresponding landmark's confidence score is calculated by multiplying its frequency by -1. We treated landmarks that appeared more than 30 times as distractors.</p> <h2>Things that didn't work</h2> <ul> <li><a href="https://arxiv.org/abs/1905.00292">AdaCos: Adaptively Scaling Cosine Logits for Effectively Learning Deep Face Representations</a></li> <li><a href="https://arxiv.org/abs/1903.10663">Combination of Multiple Global Descriptors for Image Retrieval</a></li> <li>Label smoothing</li> <li>SPoC, MAC, R-MAC, Compact bilinear pooling</li> <li>Data Augmentation from AutoAugment paper (imagenet setting)</li> <li>Warmup learning rate scheduling</li> <li>Under Sampling</li> <li>Remove distractor by Places365 indoor/outdoor classification</li> <li>Meta classifier</li> <li>Gaussian SARE</li> <li>... and many other methods!</li> </ul> <p>Best,</p> <p>Team smlyaka.</p> <p>.... And I want to thank my teammate, lyakaap! Many congrats to lyakaap for becoming a Grandmaster ;-)</p>
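<p>For readers unfamiliar with it, the GeM pooling used in the backbones above can be sketched in a few lines; p is typically around 3 and can also be made learnable. This is a generic sketch, not the team's code.</p> <pre><code>import torch
import torch.nn.functional as F

def gem(x, p=3.0, eps=1e-6):
    """Generalized-mean pooling: (B, C, H, W) feature map -> (B, C) global descriptor."""
    pooled = F.avg_pool2d(x.clamp(min=eps).pow(p), kernel_size=x.shape[-2:])
    return pooled.pow(1.0 / p).flatten(1)
</code></pre>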
Google Landmark Retrieval 2019
15th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>15th Place Solution</h1> <p>Congratulations to the winners! And I would like to express my gratitude to all the people involved in this competition.</p> <p>This is our 15th place solution. I am busy working on the iMaterialist Competition, so I'll explain our solution only with a brief diagram. "Reference" is a collection of links that we referred to when implementing.</p> <h1>Training</h1> <p>We trained models using 1st stage train dataset.</p> <p>Reference: - <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">https://github.com/filipradenovic/cnnimageretrieval-pytorch</a> - <a href="https://github.com/SeuTao/Humpback-Whale-Identification-Challenge-2019_2nd_palce_solution">https://github.com/SeuTao/Humpback-Whale-Identification-Challenge-2019_2nd_palce_solution</a></p> <p><img src="https://gist.githubusercontent.com/chmod644/20feec5b9cc5bcc66491c698fec5e860/raw/891ad2e9c48daebe61a41d142b71082cf9269ff6/ffecd280-8798-11e9-9520-d94c7c62797d.png" alt="network_and_loss"></p> <h1>Submit</h1> <p>Reference: - <a href="https://www.kaggle.com/c/landmark-retrieval-challenge/discussion/57855">https://www.kaggle.com/c/landmark-retrieval-challenge/discussion/57855</a> - <a href="https://pdfs.semanticscholar.org/79bf/2aedcd89c116225f8e4b72fa66f7df1f7d97.pdf">https://pdfs.semanticscholar.org/79bf/2aedcd89c116225f8e4b72fa66f7df1f7d97.pdf</a> - <a href="https://github.com/facebookresearch/faiss">https://github.com/facebookresearch/faiss</a> - <a href="https://github.com/ducha-aiki/manifold-diffusion">https://github.com/ducha-aiki/manifold-diffusion</a></p> <p><img src="https://gist.githubusercontent.com/chmod644/20feec5b9cc5bcc66491c698fec5e860/raw/e05e65c53f6506f236253d59c1bba7d97c743cac/1f2f3400-8788-11e9-96d7-337836617bd9.png" alt="inference"></p>
Google Landmark Recognition 2019
20th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hola,</p> <p>TL;DR: SE-ResNext50, softmax 92k classes, 224x224 crops and data cleaning on the test set using pretrained Mask R-CNN and ImageNet classifier. The code is here: <a href="https://github.com/artyompal/google_landmark_2019">https://github.com/artyompal/google_landmark_2019</a></p> <p>So, my solution is simple, it's a single model (SE-ResNext50), single fold. My code is very similar to my kernel: <a href="https://www.kaggle.com/artyomp/resnet50-baseline">https://www.kaggle.com/artyomp/resnet50-baseline</a>. I used 224x224 crops from 256x256 images. I trained only on classes with &gt;= 10 samples (92740 classes). Surprisingly, it wasn't much slower than training on 18k or 35k classes (maybe 20% slower).</p> <p>This is the config of my model: <a href="https://github.com/artyompal/google_landmark_2019/blob/master/models/v1.3.8.seresnext50_92740_classes.yml">https://github.com/artyompal/google_landmark_2019/blob/master/models/v1.3.8.seresnext50_92740_classes.yml</a> So, it adds a bottleneck layer with 1024 neurons, uses SGDR from <a href="https://arxiv.org/pdf/1608.03983.pdf">https://arxiv.org/pdf/1608.03983.pdf</a> and rectangular crops from <a href="https://arxiv.org/pdf/1812.01187.pdf">https://arxiv.org/pdf/1812.01187.pdf</a>. </p> <p>It resulted in 0.128 on public LB after almost 5 days of training on a single 1080Ti.</p> <p>Obviously, such a low score was caused by both a noisy train set and a noisy test set. I had no time to clean the train set and re-train, so I only cleaned the test set.</p> <p>The first stage of cleaning was just No Landmark Detector from this kernel: <a href="https://www.kaggle.com/rsmits/keras-landmark-or-non-landmark-identification">https://www.kaggle.com/rsmits/keras-landmark-or-non-landmark-identification</a>. I just got rid of predictions where any of top3 results were "not landmark".</p> <p>The second stage. I noticed that there's a lot of portraits, selfies, and pictures of cars in the test set. So I took the pretrained Faster R-CNN from the new TorchVision release (checkout object_detection.py) and got rid of images with too many humans or cars (confidence &gt;0.5, total area &gt; 0.4). Check this out: <a href="https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub2.py">https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub2.py</a></p> <p>The third stage. I noticed that there's a lot of random images: dogs, helicopters, flowers and so on. So I took a pretrained ImageNet classifier. I used ResNet50 (I wanted to use something better, but my GPUs were busy doing something else, so I needed a simple network; ideally, I'd take some smarter model). It's in the image_classifier2.py. So I rejected images with any of those classes and confidence &gt; 0.7 (<a href="https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub3.py">https://github.com/artyompal/google_landmark_2019/blob/master/patch_sub3.py</a>)</p> <p>The first stage gave me 0.02 on the public LB, but didn't help at all on the private LB. I wonder if the public/private split was made by class? 
The private part of the stage-2 test set seems to have cleaner data.</p> <p>In total, before cleaning I had 0.128 on public LB, 0.144 on private LB. After cleaning I got 0.162 on public LB, 0.148 on private LB.</p> <p>What didn't work: I implemented KNN on top of SE-ResNext50 features and it worked okay locally (GAP 0.2 on all classes), but it performed like crap on the leaderboard (about 0.00002 or something). I must have a bug somewhere.</p> <p>PS: thanks to <a href="/pudae81">@pudae81</a> for this repo: <a href="https://github.com/pudae/kaggle-humpback">https://github.com/pudae/kaggle-humpback</a>. I borrowed some code from there.</p> <p>Thanks for reading this and I hope this helps in future competitions! Artyom</p>
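<p>A sketch of the person/car filtering stage described above, using a pretrained torchvision detector (the post mentions both Mask R-CNN and Faster R-CNN; Faster R-CNN is shown here). Thresholds follow the post (confidence 0.5, total area 0.4); the COCO label ids and the lack of overlap handling are assumptions.</p> <pre><code>import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def person_car_fraction(image, score_thr=0.5):
    """image: float tensor (3, H, W) in [0, 1]; fraction of the image covered by
    confident person/car boxes (overlapping boxes are not de-duplicated here)."""
    with torch.no_grad():
        out = model([image])[0]
    confident = out['scores'] > score_thr
    person_or_car = (out['labels'] == 1) | (out['labels'] == 3)   # COCO: 1 person, 3 car
    boxes = out['boxes'][torch.logical_and(confident, person_or_car)]
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return (areas.sum() / float(image.shape[1] * image.shape[2])).item()

# reject a test image as "probably not a landmark" if person_car_fraction(img) > 0.4
</code></pre>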
Google Landmark Retrieval 2019
Share your approaches?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to break the silence here and start to share what we have done, and hopefully others will share as well, especially the top teams we can learn from :)</p> <p>In fact, we had nothing fancy in the end; all the fancy things didn't work for us this time, unfortunately :( We followed a similar path to last year's winner's approach, and the final model is an ensemble of three (Inception v3 with Atten&amp;Gem pooling, as well as David's Xception and SEResNet101 with VLAD pooling, each 512-dim). </p> <p>We used DBA for each model to enhance the quality of the image vectors, and finally went through QE (1 round) using the concatenated vectors from the three models, which boosted performance quite a bit. </p> <p>We tried different poolings and different ensemble approaches, but they didn't give much difference or improvement; the best combination seemed to be GeM (with some CNN attention) and VLAD pooling, with one round of DBA and one round of QE, and that's it. When we trained the models, we started with classification training, followed by fine-tuning in a Siamese fashion, which improved things a bit more. </p> <p>However, I guess the biggest (possible) mistake was that we trained using the old train set only and ignored the new train set, mainly because the stage 1 LB showed so much correlation with old train. We wrongly assumed that stage 2 would be similar, but the final stage 2 turned out to be from an image set whose distribution was very different from old train (much noisier) and much closer to new. </p> <p>In the end, we found a very good approach to remove most of the distractor images from test + index, but surprisingly for us, simply removing those distractors didn't help at all for our current pipeline (DBA + QE). It could be because non-distractors will match well with non-distractors, and won't be affected much by the distractors that we are trying to remove. </p> <p>I found this excellent QE code very useful - <a href="https://github.com/fyang93/diffusion">https://github.com/fyang93/diffusion</a>, except that they shouldn't use ANN at all, even for a large dataset, as it unnecessarily increases computation. </p>
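<p>For context, here is a generic alpha-QE style query expansion (one common flavor of the QE mentioned above, not necessarily the exact variant used): average the query with its top-k neighbours, weighted by similarity raised to a power, then re-normalize.</p> <pre><code>import numpy as np

def alpha_query_expansion(query, index_desc, k=10, alpha=3.0):
    """query: (D,) L2-normalized descriptor; index_desc: (N, D) L2-normalized descriptors."""
    sims = index_desc @ query
    top = np.argsort(-sims)[:k]
    weights = np.concatenate([[1.0], sims[top] ** alpha])        # the query itself gets weight 1
    stacked = np.concatenate([query[None, :], index_desc[top]], axis=0)
    new_q = (weights[:, None] * stacked).sum(axis=0)
    return new_q / np.linalg.norm(new_q)
</code></pre>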
Google Landmark Retrieval 2019
Our solution: 0.10439 private 0.08832 public (code available)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our solution consists of two parts: a single model (our best single model achieves 0.06335 public, 0.08813 private), and an ensemble of multiple models, which achieves 0.10439 private, 0.08832 public.</p> <h2><a href="https://github.com/antorsae/landmark-retrieval-2019/blob/master/submission-trained.ipynb">Single model CNN extraction + NN search</a></h2> <p>This notebook extracts the last convolutional layer of a given architecture, applies GeM pooling, and performs L2 normalization.</p> <p>It uses augmentation (images are LR flipped), so both the index and query features are duplicated.</p> <p>Once features are extracted, it performs a regular nearest neighbour search using PCA, whitening and L2 normalization, and the results are ready for a submission.</p> <p>It also saves the results of the nearest neighbour search for ensembling.</p> <h2><a href="https://github.com/antorsae/landmark-retrieval-2019/blob/master/ensemble.ipynb">Ensembling</a></h2> <p>This notebook takes the results of the nearest neighbour search for each query (and LR-flipped queries), aggregates distances across different runs (different architectures), and builds a submission accordingly.</p> <p>For flipped LR images, we pick the minimum distance.</p> <h2>Some results</h2> <p><img src="https://user-images.githubusercontent.com/1516746/58861894-bc6a6980-86af-11e9-8261-21b4ebc26bb3.png" alt="image"> <img src="https://user-images.githubusercontent.com/1516746/58861581-f7b86880-86ae-11e9-8dd8-de50cf35af97.png" alt="image"> <img src="https://user-images.githubusercontent.com/1516746/58861667-34845f80-86af-11e9-9493-c012847fe0d0.png" alt="image"></p> <p>All code here: <a href="https://github.com/antorsae/landmark-retrieval-2019">https://github.com/antorsae/landmark-retrieval-2019</a></p>
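<p>A compact sketch of the post-processing and search step described above (PCA, whitening, L2 normalization, then nearest-neighbour search); array names, output dimensionality, and the use of scikit-learn instead of faiss are assumptions.</p> <pre><code>import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def retrieve(index_feats, query_feats, dim=512, k=100):
    """index_feats, query_feats: (N, D) GeM descriptors, already L2-normalized."""
    pca = PCA(n_components=dim, whiten=True).fit(index_feats)
    def prep(x):
        x = pca.transform(x)
        return x / np.linalg.norm(x, axis=1, keepdims=True)      # re-normalize after whitening
    nn = NearestNeighbors(n_neighbors=k, metric='euclidean').fit(prep(index_feats))
    dists, ids = nn.kneighbors(prep(query_feats))
    return dists, ids                                            # ranked index ids per query
</code></pre>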
Google Landmark Recognition 2019
Team JL Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and all other participants. Here is a summary of our solution.</p> <p>Our final solution consists of the following 2 retrieval models (1 global model fine-tuned on the training set and 1 local model that is publicly available) and 3 steps.</p> <p>Model-1: A global CNN retrieval model. By this we mean a model that takes an input image and produces a global descriptor by taking the output of the last convolutional layer and applying a global pooling method followed by L2 normalization, as is common for retrieval tasks. For simplicity, we will not include all the details here (more information can be added later if needed). In hindsight, we think it is possible that a simple ResNet+GeM model would do the job just fine, but we tried a couple of combinations with backbones such as ResNet, ResNeXt, SE-ResNet, SE-ResNeXt, SENet with attention and pooling methods such as GeM, RMAC, MAC, SPoC, as well as multiple concatenations of them. We take the "raw output" as the descriptor (i.e. without any post-processing techniques such as DBA, QE, or Diffusion, except for concatenations in which case PCA and whitening is used to reduce the dimension to 2048 or 4096) and used the public leaderboard of the retrieval challenge for validation.</p> <p>Model-2: A local CNN retrieval model. This is just the Detect-to-Retrieve (D2R) model released by Andre Araujo at this post <a href="https://www.kaggle.com/c/landmark-recognition-2019/discussion/92942#latest-536570">https://www.kaggle.com/c/landmark-recognition-2019/discussion/92942#latest-536570</a></p> <p>Step-1: Use Model-1 to match all test images (about 120K) vs. all train images (about 4.13M). For each test image, accumulate similarities for labels in the top-5 matched images, and take the label with the highest score as the prediction. This achieves private/public 0.25138/0.21534.</p> <p>Step-2: Same as Step-1, but for each test image, keep the top-20 matched images from the training set. Then a second match was performed using Model-2, accumulating D2R scores for labels in the top-5 closest images and taking the label with the highest score as before. This achieves private/public 0.31870/0.26782.</p> <p>Step-3: This is perhaps the most important step in which, as everyone knows, something needs to be done to separate landmark images from distractors. Instead of training a separate classifier using external data, we made the following assumptions. (1) All the landmark images already have correct predictions, and the reason the score is low is that the evaluation metric is GAP and a lot of the distractor images are placed higher than them in the ranked list of predictions. Of course the first part of the assumption is not true, and we tried some methods to replace the labels of some of the potentially wrong predictions, but it didn't work. (2) In the ranked list of predictions, all top-500 images are real landmark images with correct predictions, and all images below rank 20000 are distractors. So if there is a distractor in the top-500, or if there is a real image below 20000, then let them be there and there is nothing we can do about it. 
With these two assumptions, we perform the following to the Step-2 prediction list. Starting from the rank 1 test image, match vs all other test images below until rank 20000 using Model-2, if the D2R score is greater than a certain threshold (23 in our case), then pull it up (along with its predicted label) and place it just below rank 1. It is possible that the image being pulled up has different predicted label, but we don't change it. We do this for two searches, where the first search expands top-500 to about 770, and the second search expands 770 to about 800. Since the second search only adds about 20 to 30 new images, we stop it there. This boosts the score to private/public 0.36787/0.31626. Next, we do the same to the Step-1 prediction list. this time with top-300 images (again two searches) because Step-1 score was lower than Step-2, and it expands the list to about 520 images. Finally, we merged the two new prediction lists as follows (New-List-1 has about 800 "safe images" in the top, whereas New-List-2 has about 520 "safe images" in the top.) Starting from rank 1 image in New-List-1, place it straight to rank 1 in the merge-list. Then take rank 1 image in New-List-2 and place it as rank 2 in the merge-list if it's not already there, otherwise go back to New-List-1. Continue this interleaving process until all "safe images" have been absorbed, and append the rest using New-List-1. The merge-list has about 840 "safe images", i.e. New-List-2 only adds about 40 "safe images" to New-List-1, but it could potentially change the ranks of some of the "safe images" in New-List-1 in the insertion process. This slightly increased the score to private/public 0.37601/0.32101, which was our final score.</p> <p>P.S. We also trained another DNN model with only 1 hidden layer using features extracted by Model-1 as input and applied the steps as above to add more "safe images" to the predictions from Step-3. This scores 0.37936 on the private leaderboard, but unfortunately we did not select it because it degraded the public leaderboard score to 0.32100.</p>
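<p>The Step-1 rule above (accumulate similarities per landmark id over the top-5 retrieved train images, keep the best label as the prediction and its summed score as confidence) can be sketched as below. This is an illustrative reimplementation, not the team's code, and with 120K x 4.13M images the similarity matrix would of course be computed in chunks with an ANN/GPU library rather than held in memory.</p> <pre><code>import numpy as np
from collections import defaultdict

def predict_by_label_accumulation(sims, train_labels, top_k=5):
    """sims: (n_test, n_train) similarity block; train_labels: landmark id per train image.
    Returns (best_label, accumulated_score) for every test row."""
    preds = []
    top_ids = np.argsort(-sims, axis=1)[:, :top_k]
    for row, ids in zip(sims, top_ids):
        scores = defaultdict(float)
        for j in ids:
            scores[train_labels[j]] += row[j]      # accumulate similarity per label
        best = max(scores, key=scores.get)
        preds.append((best, scores[best]))
    return preds
</code></pre>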
Google Landmark Recognition 2019
2nd Place and 2nd Place Solution to Kaggle Landmark Recognition and Retrieval Competition 2019
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for the great challenge, our solution has been released to arxiv. Further, we will open our source code. Thanks.</p> <p>arxiv link: <a href="https://arxiv.org/pdf/1906.03990.pdf">https://arxiv.org/pdf/1906.03990.pdf</a> source code: <a href="https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark">https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark</a></p>
Freesound Audio Tagging 2019
4th solution: Multitask Learning, Semi-supervised Learning and Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We wrote up a technical report of our solution, so you can check our solution in detail when it's published. Here I'd like to give a brief summary of our solution. Our score is public 0.752 (3rd)/private 0.75787 (4th before pending). <br> For this competition, we mainly used 3 strategies. <br> 1. Multitask learning with noisy labels. 2. Semi-supervised learning (SSL) with noisy data. 3. Averaging models trained with different time windows. <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/556685/13619/Fig1_3.png" alt="Multitask pipeline"></p> <h2>Models</h2> <ul> <li>ResNet34 with log-mel</li> <li>EnvNet-v2 with waveform.</li> </ul> <h2>Preprocessing</h2> <ul> <li>128 mels</li> <li>128 Hz (347 STFT hop size)</li> </ul> <p>Log-mel is converted from power to dB after all augmentations are applied. Thereafter, it is normalized by the mean and standard deviation of each data sample.</p> <h2>Augmentations</h2> <h3>Log-mel</h3> <ul> <li>Slicing</li> <li>MixUp</li> <li>Frequency masking</li> <li>Gain augmentation</li> </ul> <p>We tried 2, 4 and 8 seconds (256, 512 and 1024 dimensions) as the slicing length and 4 seconds scored the best. Resizing, warping, time masking and white noise didn't work. - Additional slicing</p> <p>Expecting a stronger augmentation effect, after the basic slicing we shorten data samples to a range of 25 - 100% of the basic slicing length by additional slicing and extend them back to the basic slicing length by zero padding. </p> <h3>Waveform</h3> <ul> <li>Slicing</li> <li>MixUp</li> <li>Gain augmentation</li> <li>Scaling augmentation</li> </ul> <p>We tried 1.51, 3.02 and 4.54 seconds (66,650, 133,300 and 200,000 dimensions) as the slicing length and 4.54 seconds scored the best. </p> <h2>Training</h2> <h3>ResNet</h3> <ul> <li>Adam</li> <li>Cyclic cosine annealing (1e-3 -&gt; 1e-6)</li> <li>sigmoid and binary crossentropy</li> </ul> <h3>EnvNet</h3> <ul> <li>SGD</li> <li>Cyclic cosine annealing (1e-1 -&gt; 1e-6)</li> <li>(sigmoid and binary crossentropy) or (SoftMax and KL-divergence)</li> </ul> <h2>Postprocessing</h2> <p>Prediction using the full length of the audio input scores better than prediction using slicing TTA. This may be because the components important for classification are concentrated in the beginning part of the audio samples. Indeed, prediction with slices of the beginning part scores better than prediction with slices of the latter part. We found padding augmentation to be an effective TTA. This is an augmentation method that applies zero padding of various lengths to both sides of the audio samples and averages the prediction results. We think this method has the effect of emphasizing the start and end parts of the audio samples.</p> <h2>Multitask learning</h2> <p>In this task, the curated data and noisy data are labeled in a different manner, therefore treating them as the same makes the model performance worse. To tackle this problem, we used a multitask learning (MTL) approach. The aim of MTL is to get synergy between 2 tasks without reducing the performance of each task. 
MTL learns features shared between 2 tasks and can be expected to achieve higher performance than learning each task independently. In our proposal, an encoder architecture learns the features shared between curated and noisy data, and two separate FC heads learn the difference between the two datasets. In this way, we can get the advantages of feature learning from noisy data and avoid the disadvantages of noisy label perturbation. <br> Multitask module's components: <br> <code>FC(1024)-ReLU-DropOut(0.2)-FC(1024)-ReLU-DropOut(0.1)-FC(80)-sigmoid</code> <br> We used BCE as the loss function. The loss weight ratio of curated and noisy is set to 1:1. By this method, CV lwlrap improved from 0.829 to 0.849 and the score on the public LB increased by +0.021.</p> <h2>Semi-supervised learning</h2> <p>Applying SSL to this competition is difficult for 2 reasons. 1. Difference of data distribution between labeled data and unlabeled data. 2. Multi-label classification task. </p> <p>In particular, methods that generate labels online, like Mean Teacher or MixMatch, tend to collapse. We tried pseudo labels, Mean Teacher and MixMatch and none of them were successful. <br> Therefore, we propose an SSL method that is robust to data distribution differences and can handle multi-label data. For each noisy data sample, we guess the label using the trained model. The guessed label is sharpened by the sharpening function proposed in MixMatch (soft pseudo label). As the temperature of the sharpening function, we tried values of 1, 1.5 and 2, and 2 was the best. Predictions of the trained model are obtained using a snapshot ensemble over all the folds and cycle snapshots of 5-fold CV. We used MSE as the loss function and set the loss weight of semi-supervised learning to 20. <br> With soft pseudo labeling, the CV lwlrap improved from 0.849 to 0.870. On the other hand, the improvement on the public LB was slight (+0.001). We used predictions of all fold models to generate the soft pseudo labels, so the high CV may be due to an indirect label leak. However, even when we used labels generated only by the same-fold model, which has no label leak, CV improved compared to the model without SSL. The model trained with the soft pseudo labels is useful as a component of model averaging (+0.003 on the public LB).</p> <h2>Ensemble</h2> <p>For model averaging, we prepared models trained under various conditions. In particular, we found that averaging models trained with different time windows is effective. We used the models below. In order to reduce prediction time, the cycles and padding lengths used for the final submissions and the averaging weights were chosen based on CV. 1. ResNet34 slice=512, MTL 2. ResNet34 slice=512, MTL, SSL 3. ResNet34 slice=1024, MTL 4. EnvNet-v2 slice=133,300, MTL, sigmoid 5. EnvNet-v2 slice=133,300, MTL, SoftMax 6. EnvNet-v2 slice=200,000, MTL, SoftMax</p> <h2>Comparison</h2> <p>condition/CV lwlrap <br> model #1 / 0.868 <br> model #2 / 0.886 <br> model #3 / 0.862 <br> model #4 / 0.815 <br> model #5 / 0.818 <br> model #6 / 0.820 <br> #1 + #3 / 0.876 <br> #1 + #2 + #3 / 0.890 <br> #4 + #5 + #6 / 0.836 <br> submission 1 (#1 + #2 + #3 + #4 + #5 + #6) / 0.896 <br> submission 2 (no pad TTA) / 0.895 </p> <h2>Code</h2> <p>I did all model training on Kaggle kernels, so I can share all of them. Please note that code readability is low. For example, the DataLoader class is named "MFCCDataset" but it actually loads log-mel. 
<a href="https://www.kaggle.com/osciiart/freesound2019-solution-links?scriptVersionId=16047723">link to the summary of codes</a> <br> The GitHub repository is available at <a href="https://github.com/OsciiArt/Freesound-Audio-Tagging-2019">here</a>. </p> <h2>Public LB history log</h2> <p>model / CV / Public LB <br> ResNet18, MixUp, freq masking, slice=256 / 0.819 / 0.669 <br> <strong>ResNet34</strong>, MixUp, freq masking, slice=256 / 0.819 / 0.676 <br> ResNet18, MixUp, freq masking, slice=256, <strong>MTL</strong> / 0.824 / 0.690 <br> ResNet34, MixUp, freq masking, slice=256, MTL, <strong>snapshot ensemble cycle 1-4</strong> / 0.853 / 0.715 <br> ResNet34, MixUp, freq masking, gain, slice=256, MTL, snapshot ensemble cycle <strong>1-8</strong> / 0.862 / 0.718 <br> ResNet34 + <strong>EnvNet</strong> / 0.867 / 0.720 <br> ResNet34, MixUp, freq masking, gain, <strong>slice=512</strong>, MTL, snapshot ensemble cycle 1-8 (#1)/ 0.860 / 0.731 <br> <strong>ResNet34 #1</strong> + EnvNet / 0.866 / 0.735 <br> ResNet34 #1 + EnvNet, <strong>padding</strong> / 0.874 / 0.737 <br> ResNet34 #1 + <strong>ResNet34 (SSL, #2)</strong> + <strong>EnvNet #4 + #5 + #6</strong> / 0.887 / 0.743 <br> ResNet34 #1, #2 + EnvNet #4 + #5 + #6, <strong>padding TTA</strong> / 0.893 / 0.746 <br> ResNet34 #1, #2 + <strong>ResNet34(slice=1024, #3)</strong> + EnvNet #4 + #5 + #6, padding TTA / 0.896 / 0.752 </p>
Google Landmark Recognition 2019
31st place solution completely on kernels
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello</p> <p>Thought people might be interested to know how to run this entire competition on Kaggle kernels and still finish in a good position on the LB. So here is my solution. I have completed both the Recognition and Retrieval challenges relying solely on Kaggle kernels. Thank you Kaggle!</p> <p>P.S. I had earlier authored a public <a href="https://www.kaggle.com/mayukh18/dataset-exploration-and-simple-on-the-fly-training">kernel</a> in Keras at the start which showed this can be done. Even Artyom did this in Torch (<a href="https://www.kaggle.com/artyomp/resnet50-baseline">https://www.kaggle.com/artyomp/resnet50-baseline</a>).</p> <h3>Approach for Kaggle Kernels</h3> <p>The solution was run completely on Kaggle Kernels. Since the dataset size is around 500 GB, taking advantage of the size of each tar file, the training was done with a download-train-discard methodology. In each mini-epoch, the data generators downloaded one tar file from the 500 tar files; extracted it; created batches from the valid images and at the end of the epoch deleted all the files. Each tar file downloaded in about ~30 secs, and a mini-epoch of about ~7000 images took a total of 100-110 secs (including everything).</p> <h3>Model Architecture</h3> <ul> <li>I used a ResNet50 with weights pretrained on ImageNet. Chopped off the final layer. Added a GlobalMaxPooling layer and a subsequent softmax layer at the end for the output. </li> <li>Used landmarks which have at least 20 images (~55k landmarks).</li> <li>Resized every image to 192x192 pixels. </li> <li>Used the Adam optimizer with a rate of 0.0002 at the start and reduced it after every ~150 mini-epochs. </li> <li>Trained the model for one full epoch (500 mini-epochs) after which it started overfitting. Could have tuned the hyperparameters better for the subsequent epochs but there was not much time left for experimentation.</li> <li>Did not use any external dataset for non-landmark images.</li> <li>TTA somehow gave me worse results.</li> </ul> <p>Special mention to Dawnbreaker's <a href="https://www.kaggle.com/c/landmark-recognition-challenge/discussion/57152">solution</a> from the 2018 version of the challenge which helped me improve my results.</p> <p>What didn't work? ResNet101, DenseNet etc. and more. Even ResNet50 with 128x128 images didn't give any good result. I initially started off with a ResNet101 but it failed to match the level of performance of the ResNet50. Taking this from a comment of Dawnbreaker in the above solution thread: &gt; "Actually I tried ResNet101 and got a higher val accuracy but a worse LB performance (LB:0.101). Eventually I think maybe an accurate prediction for the landmark images is not as important as an efficient rejection for the non-landmark images."</p> <p>Perhaps this makes sense. 
Add to that the fact that here we were predicting over ~55k classes compared to last year's 15k, and rejection seems even more important.</p> <p>The complete organized code (including the Keras generators) and pretrained model can be found here: (<a href="https://github.com/mayukh18/Google-Landmark-Recognition-Retrieval-2019">https://github.com/mayukh18/Google-Landmark-Recognition-Retrieval-2019</a>)</p> <p>Happy if anyone finds this useful. Cheers! Mayukh</p>
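<p>The download-train-discard trick is the part of this write-up that makes 500 GB of data fit on a kernel, so here is a minimal sketch of one such mini-epoch. The URL pattern, directory name and file extension are placeholders rather than the actual dataset layout, and the real generator of course also decodes, resizes and batches the images.</p> <pre><code>import os, shutil, tarfile, urllib.request

def mini_epoch_image_paths(tar_url, work_dir="tmp_shard"):
    """Download one train shard, yield its extracted image paths, then delete everything."""
    os.makedirs(work_dir, exist_ok=True)
    tar_path = os.path.join(work_dir, "shard.tar")
    urllib.request.urlretrieve(tar_url, tar_path)         # ~30 s per shard according to the post
    with tarfile.open(tar_path) as tar:
        tar.extractall(work_dir)
    try:
        for root, _, files in os.walk(work_dir):
            for name in files:
                if name.endswith(".jpg"):
                    yield os.path.join(root, name)         # feed these into the batch generator
    finally:
        shutil.rmtree(work_dir)                            # discard the shard after the mini-epoch
</code></pre>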
Google Landmark Retrieval 2019
9th Place Solution: VRG Prague
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Approach by Visual Recognition Group (VRG) Prague</h1> <p>Thanks to the organizers for the great challenge once again and congratulations to the winning teams!</p> <p>Similarly to last year, our method is based on a combination of CNN-based global descriptors trained on the training set of the first version of the <a href="https://www.kaggle.com/google/google-landmarks-dataset">Google Landmarks Dataset</a> used in the challenges last year. Unfortunately, I did not have enough time to dedicate to the challenge, so I did not manage to set up a successful training on the training set of this year's <a href="https://github.com/cvdfoundation/google-landmark">Google Landmarks Dataset v2</a>.</p> <h3>Note</h3> <p>Performance is presented in the format private / public score.</p> <h3>Representation learning</h3> <p>Our network consists of the convolutional layers of ResNet50, ResNet101, and ResNet152 pre-trained on ImageNet, followed by generalized-mean pooling (<a href="https://arxiv.org/abs/1711.02512">GeM</a>), l2 normalization, a fully-connected (FC) layer, and a final l2 normalization. We train a network with contrastive or triplet loss (when training for a longer time, triplet loss resulted in superior performance) on pairs selected from the <a href="https://www.kaggle.com/google/google-landmarks-dataset">Google Landmarks Dataset</a> training set with images resized to have the largest dimension at most equal to 1024 (900 for ResNet152 due to memory restrictions). We randomly sample at most 50 positive pairs per landmark, while the hard negatives are re-mined for each epoch of the training. The FC layer is initialized by the parameters of <a href="https://arxiv.org/abs/1604.02426">supervised whitening</a> learned on such pairs. </p> <p>Training was done using our publicly available <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch">CNN Image Retrieval in PyTorch</a> toolbox. </p> <p>Validation was done on the <a href="https://github.com/filipradenovic/revisitop">Revisited Oxford and Paris datasets</a>.</p> <p>During testing, we perform descriptor extraction at 3 scales (scaling factors of <code>1, 1/sqrt(2), sqrt(2)</code>), where the original scale corresponds to the largest image size equal to 800 (the maximum image size of the v2 dataset), sum-aggregate the descriptors, and l2 normalize. The <a href="https://arxiv.org/pdf/1811.11147.pdf">PCA whitening with shrinkage</a> is learned/applied on the multi-scale descriptors. 
The dimensionality of the final descriptor is 2048.</p> <h3>Simple NN-search</h3> <p>When using our trained networks with cosine similarity and nearest neighbor search the performance is: <code> ResNet101-GeM: 0.200 / 0.178 ResNet152-GeM: 0.196 / 0.167 ResNet50-GeM: 0.184 / 0.163 </code></p> <p>Additionally, we evaluated <a href="https://arxiv.org/pdf/1610.07940.pdf">ResNet101 with RMAC pre-trained</a>: <code> ResNet101-RMAC: 0.181 / 0.150 </code></p> <p>Concatenating these four representations and jointly PCA-reducing (with shrinkage) to 4096 dimensions results in: <code> concat[4096]: 0.213 / 0.186 </code></p> <h3>Iterative DBA and QE</h3> <p>We perform iterative DBA with an increasing number of images used in each iteration. More precisely, we use three iterations, starting with one nearest-neighbor image being used to average with the respective DB image, repeating the nearest-neighbor search and adding one additional nearest neighbor in each step. Instead of a plain average, we use a weighted average, where the weight is given as the squared cosine similarity of the respective DB image and its neighbor. We denote this method as <code>2dba1:1:3</code>.</p> <p>We perform iterative QE in a similar way as DBA, but now with 10 iterations, denoting it as <code>2qe1:1:10</code>.</p> <p>Performance of our concatenated 4096-dimensional representation is now: <code> concat[4096] -&gt; 2dba1:1:3 -&gt; 2qe1:1:10: 0.254 / 0.229 </code></p> <h3>Graph-based QE and Diffusion</h3> <p>We tested two approaches for the final rank computation, graph traversal and diffusion.</p> <p>Our graph traversal algorithm explores the knn graph with a greedy algorithm where the next image to be retrieved is the one with the highest similarity to any already retrieved image or to the query. This approach is a special case of <a href="http://www.cs.toronto.edu/~mvolkovs/cvpr2019EGT.pdf">Explore-Exploit Graph Traversal for Image Retrieval</a> where only one image is always exploited. This yielded a score of <code>0.257 / 0.235</code>.</p> <p>For diffusion, we used the conjugate gradient method from the <a href="https://github.com/ducha-aiki/manifold-diffusion">Python implementation</a> of <a href="https://arxiv.org/pdf/1611.05113.pdf">Efficient Diffusion on Region Manifolds</a> which was patched to support sparse matrices on the input. This constitutes our final submission of <code>0.263 / 0.243</code> on private / public score.</p> <h3>[14. 06. 2019] UPDATE</h3> <p>I released our models as a part of our <a href="https://github.com/filipradenovic/cnnimageretrieval-pytorch#networks-with-whitening-learned-end-to-end">CNN Image Retrieval in PyTorch</a> toolbox, with the script to evaluate them on R-Oxford and R-Paris.</p> <p>Multi-scale performance in the following table:</p> <p>| Model | ROxf (M) | RPar (M) | ROxf (H) | RPar (H) | |:------|:------:|:------:|:------:|:------:| | <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet50-gem-w-83fdc30.pth">gl18-tl-resnet50-gem-w</a> | 63.6 | 78.0 | 40.9 | 57.5 | | <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet101-gem-w-a4d43db.pth">gl18-tl-resnet101-gem-w</a> | 67.3 | 80.6 | 44.3 | 61.5 | | <a href="http://cmp.felk.cvut.cz/cnnimageretrieval/data/networks/gl18/gl18-tl-resnet152-gem-w-21278d5.pth">gl18-tl-resnet152-gem-w</a> | 68.7 | 79.7 | 44.2 | 60.3 |</p>
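<p>To illustrate the weighted iterative DBA described above (iteration i averages each DB descriptor with its i nearest neighbours, weighted by squared cosine similarity), here is a small NumPy sketch. It is not the toolbox code: the weight of 1.0 given to the anchor descriptor itself is my assumption, and the dense similarity matrix is only for clarity; at 1M images the neighbours would be found with chunked or approximate search.</p> <pre><code>import numpy as np

def weighted_iterative_dba(db, n_iters=3):
    """'2dba1:1:3'-style augmentation of L2-normalized DB descriptors (rows of db)."""
    for it in range(1, n_iters + 1):
        sims = db @ db.T
        np.fill_diagonal(sims, -np.inf)                        # exclude self-matches
        nn_ids = np.argsort(-sims, axis=1)[:, :it]              # i neighbours at iteration i
        w = np.take_along_axis(sims, nn_ids, axis=1) ** 2       # squared cosine weights
        new_db = db + (w[..., None] * db[nn_ids]).sum(axis=1)   # anchor weight 1.0 is an assumption
        db = new_db / np.linalg.norm(new_db, axis=1, keepdims=True)
    return db
</code></pre>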
Google Landmark Retrieval 2019
8th Place Solution: Clova Vision, NAVER/LINE Corp.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Approach by Clova Vision, NAVER/LINE Corp.</h1> <p>Thanks to the organizers for the awesome challenges (both recognition and retrieval), and congratulations to all participants! <br> We briefly explain our approach here, and hope future participants can benefit from this :)</p> <hr> <h2>Quick summary of pipeline</h2> <ul> <li>[step 0.] Dataset Clustering </li> <li>[step 1.] Learning Representation</li> <li>[step 2.] Feature Ensemble</li> <li>[step 3.] DBA/QE + PCA/Whitening</li> <li>[step 4.] DELF / Diffusion </li> </ul> <hr> <h3>(1) Dataset Clustering</h3> <p>At the very beginning of the challenge, the stage1 train/test/index datasets were all released, and the labels were provided for the train set only. We wanted to have the model biased towards the test/index data distribution since it might help to achieve a high score in the challenge. To do so, we did clustering on the stage1 test/index datasets and filtered out noisy clusters using several rules. Virtual class ids were given to each cluster and added to the train v1 dataset to use it for training. In this way, we got approximately a 0.01 mAP increase on stage1.</p> <p>At stage2, we found the newly released train/test/index dataset is super noisy compared to stage1. We also made clusters on the train/test/index datasets to remove the "garbage images" based on the constructed clusters. </p> <p>Finally we trained our models using the datasets below: </p> <ul> <li>TR1: train v1</li> <li>TR2: train v1 + test/index cluster v1</li> <li>TR3: train v1 + test/index cluster v2 (noise cluster)</li> <li>TR4: train v1 + test/index cluster v1 + train v2 (garbage removed) + test/index cluster v2 (garbage removed), </li> </ul> <p>where v1 is from stage1, v2 is from stage2. </p> <h3>(2) Learning Representation</h3> <p>We tried to train various models by combining different backbones, losses, aggregation methods and train sets,</p> <ul> <li>training data: TR1, TR2, TR3, TR4</li> <li>backbones: resnet50, resnet101, seresnet50, seresnet101, seresnext50, seresnext101, efficientnet</li> <li>losses: xent+triplet, npair+angular</li> <li>aggregation methods: GeM, SPoC, MAC, </li> </ul> <p>which results in 4 x 7 x 2 x 3 = 168 combinations (or more…) <br> In general, GeM worked well. We didn't have much time to fully fine-tune the efficientnet (tensorflow released version) on the google-landmark-challenge 2019 datasets so we stopped the training early. </p> <h3>(3) Feature Ensemble</h3> <p>Based on the trained models, we extracted the features and found the best combinations for the ensemble. To test the result on a small dataset, we sampled a validation set (VAL_V2) which is a subset of the stage2 datasets. We ran an auto-search process which automatically concatenates 4 different off-the-shelf features, randomly chosen every time, and evaluates the performance of the concatenated feature on VAL_V2. We found the mAP result on VAL_V2 is aligned with the result on real submissions, and this trick helped us save submission trials and time. 
</p> <p>Finally, we found the best combinations: </p> <ul> <li>seresnet50 / SPoC / TR4 / npair+angular</li> <li>seresnext50 / SPoC / TR4 / npair+angular</li> <li>resnet101 / GeM / TR1 / xent+triplet</li> <li>seresnext50 / GeM / TR3 / xent+triplet</li> </ul> <p>The features were L2-normalized before the concatenation.</p> <h3>(4) DBA/QE + PCA/Whitening</h3> <p>DBA/QE is a well-known technique for boosting the performance. We found 10-nearest neighbors for each data points for this. Based on our model, we got 9~10% mAP increase at stage1. The PCA/Whitening improved mAP a bit more when the output dimension was set to 1024. </p> <h3>(5) DELF / Diffusion</h3> <p>We applied DELF in two ways: </p> <ul> <li>reranking top-100</li> <li>Diffusion with SV</li> </ul> <p>we found simply reranking the top-100 retrieved result was not really helpful. We tested only one case, but it showed +0.00018 performance increase after DELF reranking (it might depend on models though). <br> Instead, we used DELF matching score for graph construction for diffusion, and we got the better performance increase in this case. The gain was about 0.002 in mAP. </p> <p>For the last step, we applied diffusion on the features after DBA/QE + PCA/Whitening. It always improved the performance, but we performed a limited number of experiments because the diffusion process was very slow.</p> <hr> <h2>Our Thoughts on Frequently Asked Questions</h2> <p><strong>Q1. Our Best Single Model?</strong> <br> The stage2 mAP of best single model is approximately in range of [0.1718 ~ 0.1995] in public, [0.1878 ~ 0.2211] in private with 1024-dimensional feature. The high score model in public did not necessarily achieve a high score in private. </p> <p><strong>Q2. Local Descriptor vs Global Descriptor?</strong> <br> It's not a good idea to use DELF directly to retrieve images from 1M index images due to excessive computation. Suppose we have top-K ranked candidates retrieved by global descriptors, we can rerank the order using local descriptors with geometric verification.<br> In our case, the gain was very slight though. </p> <p><strong>Q3. Things that didn't work?</strong> </p> <ul> <li>Ensemble with features extracted from multi-resolution input images. (didn't work)</li> <li>Explore-Exploit Graph Traversal (EGT) (slow on our implementation) </li> <li>Detect-to-Retrieve (D2R) (cropped regions from D2R landmark detector were used for feature extraction, and the result was worse.)</li> </ul> <p>** for more detailed experimental results, please refer to the paper we upload to arxiv. (<a href="https://arxiv.org/abs/1907.11854" target="_blank">https://arxiv.org/abs/1907.11854</a>)</p>
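<p>The feature ensemble step (step 2) boils down to per-model L2 normalization followed by concatenation into one descriptor, which can be sketched in a few lines of NumPy. The optional per-model weights and the final re-normalization shown here are common practice rather than details stated in this post.</p> <pre><code>import numpy as np

def l2n(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def concat_ensemble(feature_blocks, weights=None):
    """feature_blocks: list of (n_images, d_i) descriptor matrices, one per model."""
    weights = weights or [1.0] * len(feature_blocks)
    parts = [w * l2n(f) for w, f in zip(weights, feature_blocks)]
    return l2n(np.concatenate(parts, axis=1))   # single ensemble descriptor per image
</code></pre>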
Google Landmark Recognition 2019
27th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all congratulations to all gold and other medal winners. Also thanks to Kaggle/Google for hosting such an extremely challenging and interesting competition.</p> <p>I'll share a brief overview of the solution I used to get to 27th place. I'll share my code later when I have some spare time to clean it up.</p> <p>I downloaded and resized all train and test images to 224 * 224 pixels. Using the pretrained models generated from the Places365 dataset I made a top-5 class prediction for all train and test images. Initially I started with a simple voting classification. For all train images I removed the ones that had 4 of 5 classes predicted as indoor (which can be loosely interpreted as non-landmark). For the test images I used a 3 out of 5 prediction to specify which ones were non-landmark. This removed several hundreds of thousands of images from the train set and also allowed me to mark tens of thousands of images from the test set as non-landmark.</p> <p>As a model I used a pretrained Resnet50 with a custom head layer. I added a Generalized-Mean (GeM) pooling layer followed by a fully connected layer to trim down the number of parameters. I used basic augmentation for all images (flip lr, updown, shear and zoom). For head-only training I usually did 4 to 8 epochs (each epoch with 1/4 of the train images), with a learning rate of 0.0005 and a batch size of 128. Full training of the network ran for 6 to 18 epochs (each epoch with 1/6 of the train images). For full training the batch size would be about 20 to 24 (depending on the number of classes selected). The learning rate would drop from 0.0001 to 0.00003.</p> <p>Initially I started with using validation of about 10% of the train data. I noticed quickly that it took a lot of additional time and there was no noticeable correlation between local metrics and the LB. So I just skipped validation completely...not something you would do in any production scenario but it worked OK for me in this competition.</p> <p>For generating my submissions I usually used 3 to 4 models from different epochs to create the predictions. I used TTA varying between 2 and 14. For 2, just a simple fliplr of the test image. When using 14 I also made several crops. Surprisingly, in the end the simple fliplr seemed to give the most stable results.</p> <p>In the last week I spent a couple of evenings further optimizing the landmark/non-landmark identification and applying that to the stage2 test images and also to further select train images. That way I was able to gain a 50% increase in my score in the last 3 days. The final model was trained on 83K classes.</p> <p>Things that didn't work for me: I tried various Xception models. However, training took way longer than my Resnet50 models. Also I got the impression that Xception did not handle the large number of classes very well. In the end I tried 4 or 5 different models but decided to stick with Resnet50. I also tried using the Hadamard classifier to be able to cope with the large number of classes. 
The performance seemed quite good but I was only able to get a model with a significant score on the LB if the number of classes that I used was not too large (&lt; 10K). </p> <p>I'm looking forward to reading the gold solutions from the winners.</p>
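<p>The scene-classifier voting rule used above for filtering non-landmarks is simple enough to show directly; this is an illustrative sketch, and the set of "indoor-like" Places365 classes is something you would have to define yourself.</p> <pre><code>def looks_like_non_landmark(top5_scene_classes, indoor_classes, min_votes):
    """Flag an image as non-landmark when at least min_votes of its top-5
    Places365 predictions are indoor-like (the post used 4/5 for train
    images and 3/5 for test images)."""
    votes = sum(c in indoor_classes for c in top5_scene_classes)
    return votes &gt;= min_votes
</code></pre>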
Google Landmark Recognition 2019
15th place Rapids.ai solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners. I'm by no means an expert in computer vision but I did my best for this competition. Here is a brief version of my solution and more code sharing will be done later.</p> <p><strong>The major takeaway: please check out cuml's KNN:</strong> <a href="https://github.com/rapidsai/notebooks/blob/branch-0.8/cuml/knn_demo.ipynb">https://github.com/rapidsai/notebooks/blob/branch-0.8/cuml/knn_demo.ipynb</a> It is just so much faster. It just took <strong>less than 5 mins</strong> to find 5 neighbors of 110K test images from 4M train images.</p> <p>The challenge of this competition is the large data size and number of classes. Given 200K classes and 4 million images with an extremely uneven distribution, it is difficult to make progress with a simple classification model. My best classification model only yields 0.04 GAP, far less than the 0.19+ with the following pipeline:</p> <ol> <li><p>Finetune a resnet152 and a densenet161 for the most popular 10k landmarks classification. The validation accuracy is about 0.3.</p></li> <li><p>Extract bottleneck layers, which are 512 features in my case. And apply a KNN to find the K=5 nearest images in train given a test image based on these 512 features.</p></li> <li><p>Repeat step 2 for different checkpoints of the two CNN models and aggregate their KNN results. I used 10 saved CNN models to get 10 KNN models. At this point, each image in test has 10*5 = 50 candidate images from train. There are of course duplicates in these candidates.</p></li> <li><p>Make it a recommendation problem by predicting a pair: a test image and a candidate train class. The target to predict is binary: the target is 1 if the test image is from the class.</p></li> <li><p>Build features for such pairs, like the minimum distance from the test image to the images of a class, popularity of the class, and so forth. I built 15 features in the end.</p></li> <li><p>Run an xgb model and a NN model on the above data and average their predictions. For each test image, select the highest-probability class. </p></li> </ol> <p>To be continued!</p>
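<p>For reference, the GPU k-NN step looks roughly like this. The API below mirrors the linked knn_demo notebook (cuML follows the scikit-learn interface), but exact argument names and accepted input types can vary across RAPIDS versions, and the array shapes here are illustrative stand-ins for the 512-d bottleneck features.</p> <pre><code>import numpy as np
from cuml.neighbors import NearestNeighbors   # RAPIDS cuML, GPU-accelerated

# Stand-in features; in the write-up these are 512-d CNN bottleneck features
# for ~4M train images and ~110K test images.
train_features = np.random.rand(100_000, 512).astype(np.float32)
test_features = np.random.rand(1_000, 512).astype(np.float32)

knn = NearestNeighbors(n_neighbors=5)
knn.fit(train_features)                              # cuML accepts NumPy/cuDF/CuPy inputs
distances, indices = knn.kneighbors(test_features)   # (n_test, 5) arrays, computed on the GPU
</code></pre>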
Freesound Audio Tagging 2019
96th Place Solution: validation of noisy data improve ~0.02 LB score
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Background:</strong> First-time kaggler, no server with GPU locally, all computation was on Kaggle kernels. <strong>Final Workflow:</strong> 1. A curated-only 5-fold CNN model got the optimal LB score 2. Preprocess noisy data to get a raw selected noisy dataset, by clustering 7 features (both temporal and spectral features) 3. Validate the selected noisy dataset with the curated-only model 4. Train the mixed model with the curated and validated noisy data, which got around a 0.02 score boost.</p> <p><strong>The detailed workflow pic is too big to display in the post, so here it is</strong> <a href="https://drive.google.com/file/d/1AaFDTsedh-3liF7K4V-QAyCWA8l8RtA_/view?usp=sharing">Final Setup</a> - <strong>Features for deep learning:</strong> After inspecting the distribution of the audio duration of both the curated and test datasets, I tried clip durations of 1, 2 and 3 seconds, mel channels of 128, 180 and 192, and other parameter combinations. The best result came with the above config in the final setup. Switching to 2s gave LB boosts of about 0.01 and 0.02-0.03 over 1s and 3s respectively. 180 mel channels can improve the result by 0.02 LB, similar to 192 (I chose the smaller feature size to save computation time). - <strong>Noisy Data Preprocessing:</strong> - - This step is mainly for reducing the size of the noisy dataset, since it's difficult to deal with all the noisy data in limited Kaggle kernels. - - Thanks to Essentia: <a href="https://essentia.upf.edu/documentation/algorithms_reference.html">https://essentia.upf.edu/documentation/algorithms_reference.html</a> - - Found a critical error while writing this summary: I used the wrong window size config (too short) for feature extraction, which may make this step useless and more like a random pick of noisy data. - <strong>Random pick:</strong> The curated data would overfit if all frames from the audio files were used. Tried max = 5 or 10; 5 gave the better result, a 0.01 LB boost over max = 10. - <strong>Balance the data or not?</strong> There was no time left for me to test a balanced dataset, because I chose the wrong balancing method at first by both undersampling and upsampling each class to the average count, which produced a worse result because undersampling would reduce the useful data. When I realized this at a pretty late stage, I decided to use the unbalanced dataset, considering the lwlrap from the curated-only model didn't show much bias against the imbalance. But this is still an open question and any discussion is welcome. - <strong>Network Choice</strong> Tried several network setups, including a CRNN with GRU (it produced competitive results with curated-only data, but there was no time to tune the config); a 1d RNN network with raw data as input was built and tried but performed badly. So I finally chose VGG16 and did some experiments on the last layers. Key takeaways: 1. 2 FC layers with 2048 neurons produced a worse local lwlrap than 1 FC with 512, and may produce a memory error in Kaggle kernels. Though not fully tested, it looks from my experiments that a single FC at the last model stage produced better results. 2. 
Thanks to a newly published paper <em><a href="https://arxiv.org/pdf/1905.05928.pdf">Rethinking the Usage of Batch Normalization and Dropout in the Training of Deep Neural Networks</a></em>, my LB score improved by 0.03 after changing the unit order accordingly, that is, BN-Dropout before the weight layer. 3. Increasing the dropout rate from 0.2 to 0.5 and then to 0.7 boosted the LB score by 0.031 and 0.001 respectively. Though 0.5 and 0.7 produced similar results, choosing 0.7 may make it more robust to overfitting. 4. Batch size didn't show much difference in my experiments (64, 128, 256 batch sizes). 5. The more training folds, the better the performance, but it requires more training time. 10-fold improved only 0.004 compared to 5-fold, so I used 5-fold in the end for both the curated and mixed models. 6. The CRNN network showed different per-class lwlrap results, and may be a complement to the CNN. (But as a beginner in sound scene classification whose future focus is model interpretability, I'd rather not use ensembling and instead focus on finding a single optimal network this time.) - <strong>Data Augmentation:</strong> SpecAugment without time warping, and mixup were used. - <strong>Noisy Data Validation:</strong> <code>confidence value = r * label_weight_per_class / lwlrap_per_class</code>. The greater <em>r</em> is, the stricter the validation rule is. The best score is produced when r=5. - <strong>Test Setup:</strong> Found no improvement with any TTA, so I simply used the average of the predictions from all frames.</p> <p>Thanks to Kaggle and the competition organization. I've learnt a lot throughout this competition. Good luck to all in the second stage.</p>
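<p>The noisy-data validation rule above is compact enough to sketch. Note that the acceptance criterion shown here (keep a noisy clip when the curated-only model's probability for each of its claimed labels clears that class's confidence value) is my reading of the workflow, not code from the author; only the confidence formula itself is taken from the post.</p> <pre><code>import numpy as np

def confidence_per_class(label_weight_per_class, lwlrap_per_class, r=5.0):
    """Per-class confidence value from the post (larger r = stricter rule; r=5 worked best)."""
    return r * np.asarray(label_weight_per_class) / np.asarray(lwlrap_per_class)

def select_noisy_clips(pred_probs, noisy_labels, confidence):
    """pred_probs: (n_clips, n_classes) from the curated-only model;
    noisy_labels: list of label-id lists. The acceptance rule below is an assumption."""
    keep = []
    for i, labels in enumerate(noisy_labels):
        if all(pred_probs[i, c] &gt;= confidence[c] for c in labels):
            keep.append(i)
    return keep
</code></pre>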
Freesound Audio Tagging 2019
[8th place solution] : SpecMix and warm-up pipeline
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello,</p> <p>I released my solution on a <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">github repository</a>. It is highly inspired by <a href="/daisukelab">@daisukelab</a>'s notebooks for preprocessing and <a href="/mhiro2">@mhiro2</a>'s simple CNN model.</p> <p>Key points of this solution are 2 techniques I came up with (maybe they exist under another name 😄): - <strong>warm-up pipeline</strong>: try to use the noisy dataset to pretrain the model and as semi-supervised data to increase diversity when generating a set of models. In other words, first use the noisy set to warm up model training, then fine-tune the model with the curated set and then use the noisy set again in a semi-supervised way. - <strong>SpecMix</strong>: my new data-augmentation technique which takes what I consider the best from SpecAugment and mixup. It generates new samples by applying frequency replacement and time replacement on inputs and computes a weighted average on targets. More details in the github <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">README</a></p> <p>Inference is done using an ensemble of 2 models (<a href="/mhiro2">@mhiro2</a>'s simple CNN model + VGG16) on a 10-fold CV.</p> <p>More details are available in the github <a href="https://github.com/ebouteillon/freesound-audio-tagging-2019">README</a>, as well as code and weights.</p>
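<p>Since SpecMix is the author's own technique, the following is only a rough sketch of the idea as described above (replace a random frequency band and a random time band of one log-mel spectrogram with the corresponding regions of another, and weight the targets by the fraction of the spectrogram that was replaced). Band counts, widths and the exact target weighting are assumptions on my part; the real implementation is in the linked repository and may differ.</p> <pre><code>import numpy as np

def specmix(x1, y1, x2, y2, max_width=0.2):
    """x1, x2: (n_mels, n_frames) spectrograms; y1, y2: multi-hot label vectors."""
    x = x1.copy()
    n_mels, n_frames = x.shape
    f_w = np.random.randint(1, max(2, int(max_width * n_mels)))
    t_w = np.random.randint(1, max(2, int(max_width * n_frames)))
    f0 = np.random.randint(0, n_mels - f_w + 1)
    t0 = np.random.randint(0, n_frames - t_w + 1)
    x[f0:f0 + f_w, :] = x2[f0:f0 + f_w, :]              # frequency replacement
    x[:, t0:t0 + t_w] = x2[:, t0:t0 + t_w]              # time replacement
    replaced = f_w * n_frames + t_w * n_mels - f_w * t_w
    lam = 1.0 - replaced / (n_mels * n_frames)          # fraction of x1 that survived
    return x, lam * y1 + (1.0 - lam) * y2               # weighted-average targets
</code></pre>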
Freesound Audio Tagging 2019
1st place solution released on a Github
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone, thanks for the competition.</p> <p>Link to solution <a href="https://github.com/lRomul/argus-freesound">https://github.com/lRomul/argus-freesound</a></p> <p>Key points:</p> <ul> <li>CNN model with attention, skip connections and auxiliary classifiers </li> <li>SpecAugment, Mixup augmentations </li> <li>Ensemble with MLP second-level model and geometric mean blending </li> </ul> <p>More details you can find in the repository <a href="https://github.com/lRomul/argus-freesound/blob/master/README.md">README</a>.</p>
Google Landmark Recognition 2019
88th place write-up [fast.ai]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>88th place is not super exciting. However, you can surely learn from some of my mistakes and pick apart some of my code. </p> <p><a href="https://towardsdatascience.com/diving-into-googles-landmark-recognition-kaggle-competition-6975dbe11072?source=friends_link&amp;sk=2f6cf21fd799a1b02302243d487256b5">Here is the medium "friend" link so it's not behind a paywall. </a></p>
Google Landmark Retrieval 2019
26th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone,</p> <p>Now that final validation has ended, I'm sharing my solution.<br> I trained two models. One uses raw images as input. The other uses DELF as input. I concatenated the two vectors from the two models for indexing. I also used query expansion and geometric verification.</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-google-landmark-retrieval-2019">https://github.com/toshi-k/kaggle-google-landmark-retrieval-2019</a></p> <p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-google-landmark-retrieval-2019/master/img/solution.png" alt="conceptual diagram"></p>
iMaterialist (Fashion) 2019 at FGVC6
16-th Solution: Mask-RCNN on Steroids
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><em>(Also known as Alex doesn't know how to code)</em></p> <p>After seeing first place's solution, I was shocked at how close I was to reaching the gold area... This competition has taught me a lot about hyper-parameter fine-tuning and architecture selection for instance segmentation, and the greatest (and most painful) lesson of them all is the threshold for non-maximum suppression, and I fell for it. This is my first serious attempt at aiming for grandmaster status, and though the outcome is not so satisfactory, for the most part I enjoyed the competition.</p> <p>My solution is based on mmdetection, an awesome codebase for detection-related tasks. </p> <h3>I started with Hybrid Task Cascade with ResNeXt-101-64x4d-FPN backbone, with the following modifications:</h3> <ol> <li><p>DCN-v2 for part of the backbone</p></li> <li><p>Removing mask fusion branch (since it takes too much memory and semantic information in this task is tricky to define)</p></li> <li><p>Multi-scale training with flipping and resizing range (512,512) -&gt; (1333,1333)</p></li> <li><p>Guided-anchoring RPN</p></li> <li><p>Increasing initial mask branch output from 14 pixels to 28 pixels</p></li> </ol> <h3>Then I modified the training process a bit.</h3> <ol> <li><p>Default training schedule, 1 image per GPU for 4/8 GPUs</p></li> <li><p>Changing mask loss from BCE to symmetric lovasz:</p></li> </ol> <pre><code>def symmetric_lovasz(outputs, targets):
    return (lovasz_hinge(outputs, targets) + lovasz_hinge(-outputs, 1 - targets)) / 2
</code></pre> <ol start="3"> <li><p>Changing smooth L1 loss to GHM-R loss</p></li> <li><p>Changing sigmoid-based losses to Focal Loss and GHM-C loss</p></li> </ol> <h3>Then, for inference:</h3> <ol> <li><p>Used soft-nms (but I didn't fine-tune the threshold :( )</p></li> <li><p>Multi-scale testing in mmdetection wasn't implemented, so I wrote one myself that merges predictions from multiple scales for proposals, bboxes and masks. The following scales are used (with flipping): [(1333,1333), (1333,800), (800,1333), (900,600)]</p></li> <li><p>Deleting masks with low confidence &amp; low pixels (with fine-tuned threshold)</p></li> <li><p>A heuristic that makes sure the same pixel is not assigned to two instances w. the same ClassId</p></li> <li><p>(I couldn't submit in the final minutes so I didn't have a chance to test this out) A binary classifier that predicts whether an object is fine-grained or not. Then, in the final submission, if an instance is determined to be fine-grained, it is excluded. The rationale is as below:</p> <p>a) An included object that is fine-grained but the mask generated is not: 1 FP + 1 FN <br> b) Simply not predicting that object: 1 FN</p></li> </ol> <p>And for this network, it's a se-resnext50 based image classifier with 4-channel input (image + binary instance mask) and one-hot category fusion in the fc-layer. 
Honestly I could have used a DAE before the classifier (from dirty testing mask to imagined clean train mask) but I ran out of GPUs and time.</p> <h3>What didn't work:</h3> <ol> <li><p>Un-freezing bn / gn / synchronized bn</p></li> <li><p>Parallelized testing</p></li> <li><p>After-nms ensemble (like the one used in Top3)</p></li> <li><p>Attribute classification (my local F1 was &lt; 0.1)</p></li> </ol> <h3>Machines</h3> <p>I used a variety of rented cloud machines, from 1 * 2080 Ti and 1 * P6000 to 8 * 1080 Ti to 4 * P100. I've basically spent all of the prizes I earned in the Whales competition on this one, plus a few hundred bucks :(</p> <p>For your reference:</p> <ol> <li><p>For 1 image per GPU, you can fit a (1333,1333) image into an RTX 2080 Ti w.o. the enlarged mask branch or GA-RPN</p></li> <li><p>But if you want to add GA-RPN, only a GTX 1080 Ti would suffice (the pitfall: <strong>the 1080 Ti has a little bit more memory than its RTX counterpart</strong>)</p></li> <li><p>My full setting barely fits in a P100.</p></li> </ol> <h3>Final words</h3> <p>I have to say goodbye to kaggle for the following months due to internships and school-related stuff, so it's a little discouraging to know that I missed the last chance this year to get a solo gold by such a small margin. But I guess the journey and the things I've learnt are more important than the recognition itself. So I will come back with full strength after a small break. Also congratulations to the winners, you guys did a fantastic job!</p>
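<p>The pixel de-duplication heuristic from the inference list (no pixel assigned to two instances of the same ClassId) can be sketched roughly as follows. The post only states the goal, so the confidence-ordered resolution below is an assumption of mine, not the author's actual rule.</p> <pre><code>import numpy as np

def deduplicate_class_masks(masks, scores):
    """masks: list of boolean (H, W) arrays for one ClassId; scores: their confidences.
    Higher-scoring instances keep contested pixels (assumed tie-break)."""
    order = np.argsort(-np.asarray(scores))
    claimed = np.zeros(masks[0].shape, dtype=bool)
    out = [None] * len(masks)
    for i in order:
        m = np.asarray(masks[i], dtype=bool) & ~claimed
        claimed |= m
        out[i] = m
    return out
</code></pre>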
Google Landmark Recognition 2019
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to everyone that participated! This was a rough competition, more than 4 million images (&gt; 600GB) and 200 thousand classes! And, of course, congratulations to the winners! Well deserved!</p> <h1>Summary</h1> <ul> <li><strong>Training set</strong>: Sub-set of 2018 Google Landmark Recognition</li> <li><strong>Descriptor</strong>: Ensemble of DenseNet121-GeM-512FC (512x512, ArcFace) and ResNet101-GeM-2048FC (768x~, ContrastiveLoss)</li> <li><strong>Landmark classifier</strong>: ResNet50 with T-scaling trained on OpenImages and Places. Remove images with less than 0.99 probability from train-set (4M-&gt;2.6M)</li> <li><strong>Inference</strong>: Extract features for train and test set and compute the cosine similarity matrix between test and class centers. Then, apply K-NN to retrieve the 12 nearest neighbours and apply softmax on them.</li> <li><strong>Re-rank 1</strong>: Multiply confidence score from inference by classification's score on test.</li> <li><strong>Re-rank 2</strong>: Perform K-NN search (K=1750) of test set on itself and apply softmax. Multiply last score by this one.</li> </ul> <h1>Long version</h1> <p>I believe two issues must be addressed to tackle this competition. The first is a similarity search problem, where we want to create a good descriptor that would allow us to correctly classify an image based on a K-NN search in a gallery (trainset). The second part is the detection/re-ranking of non-landmarks in both train (gallery) and test (query) sets. This is paramount since the GAP metric used in this competition penalizes us if we make a prediction for a non-landmark with a higher confidence score than any other landmark image.</p> <h2>Similarity search</h2> <h3>Descriptor</h3> <p>I reckon a metric learning approach is much more natural for this task than a pure classification one. This is because we have a huge number of classes (&gt; 200k) and some of them have just a few examples. Furthermore, in practice we would not be able to re-train our classifier every time we need to add a new landmark to the gallery.</p> <p>With this mindset, I spent most of the time (until the last 2 weeks) training several models with ArcFace loss. I tried several types of pooling schemes such as SPoC, GeM and REMAP (my own re-implementation). Also tried several types of backbones like DenseNet121, ResNet101 and SEResNeXt101.</p> <p>The ones that worked best for me were DenseNet121-GeM-512FC (512x512) and SEResNeXt101-REMAP-2048FC (384x384). However, SEResNeXt101-REMAP was really slow so I only used DenseNet for the second stage.</p> <p>I trained on a sub-set of last year's competition; it was around 80k images of 15k classes. A margin of 0.8 for the <a href="https://arxiv.org/abs/1801.07698">ArcFace</a> loss was used and images were resized to the specified size. Weird as it may sound, I found it was better to squash the image into a square than to keep the original aspect ratio with zero padding.</p> <p>I (unfortunately) spent most of my time here and the results were not so great. 
My best model only scored around 0.33 on the image <strong>retrieval</strong> public LB (no QE, DBA or any re-rank). At that point I decided to give up ArcFace and go for the siamese approach.</p> <p>In the last few weeks I trained a ResNet101-GeM-2048FC (768x~) following Radenovic's <a href="https://arxiv.org/abs/1711.02512">paper</a> and, although I didn't submit to the retrieval competition (the deadline was near), I believe it was considerably better than the other models I had trained so far.</p> <p>My final descriptor was a 2560-dim. ensemble of both models. I L2-normalized both and concatenated as follows: [1.5 * ResNet, 1 * DenseNet]. I didn't have enough time nor RAM to apply PCA on the resultant descriptor.</p> <h3>Landmark classifier</h3> <p>Since there were a lot of non-landmark images in both the test and train sets, I trained a landmark classifier using the descriptor's train set as landmark examples. For the non-landmarks, I used images from the OpenImages and Places datasets. I used classes such as: sky, forest, bathroom, bedroom, restaurant, human faces, food, etc... It was important to use outdoor classes as negative examples, otherwise the model would be biased to predict any outdoor image as a landmark. Finally, I used <a href="https://arxiv.org/pdf/1706.04599.pdf">T-scaling</a> to scale the predictions.</p> <p>For the train set I removed all the images with a classification score less than 0.99, which resulted in around 2.6M images.</p> <p>For the test set, on the other hand, I did not remove any images but used the scores to re-rank the final submission's confidence scores. I did this because, if the conf. score is well ranked, there is no penalty for making predictions on non-landmark images.</p> <h3>Out-of-distribution images</h3> <p>Since the descriptor was only trained on landmark images, it is very bad at differentiating between images that are non-landmarks, e.g. human faces or types of food. As a consequence, non-landmark images are matched with very high confidence scores, decreasing the GAP metric. In order to overcome this issue, three re-rank steps were used (explained below).</p> <h3>Extract features</h3> <p>The CNNs previously mentioned are used to extract features (descriptors) for both the train and test sets. Then, for the train set, the class centers are computed by averaging all the image descriptors that belong to the same class. Finally, both the test descriptors and the class centers are L2 normalized.</p> <h3>Search</h3> <p>The K-NN search was performed on the cosine distance matrix using the <a href="https://github.com/facebookresearch/faiss">Faiss</a> library. The top 12 NN were kept and softmax was applied on them.</p> <p>Even though we only need the first nearest neighbour, it was important to keep more of them to apply softmax and convert the cos. similarities into a probability distribution. By doing this we filter out some non-landmarks that have high similarity with a lot of classes in the gallery (train set). Without this or any other re-ranking steps, the score is likely to be close to zero even for a good descriptor.</p> <h3>Re-rank:</h3> <p>The second re-rank was very straightforward. The original confidence score was multiplied by the probability of being a landmark from the classifier. With this step many non-landmark images got their conf. score reduced.</p> <h3>Re-rank based on out-of-distribution similarity</h3> <p>For this step, the K-NN (K=1750) of the test set with itself was computed. 
The intuition here is that, since our model is known to produce high similarity between out-of-distribution images, if we compute similarities within the test set it is likely that non-landmark images will have many more similar neighbours than landmark ones. Thus, I apply the softmax and multiply by the confidence score from the last step.</p> <p>Hence, the final score is given by: <code>score = metric_learning_score * classifier_score * ood_score</code></p> <h1>Leaderboard progression (stage 2)</h1> <ul> <li><strong>DenseNet121-GeM-512FC (cls re-rank)</strong>: 0.22379</li> <li><strong>DenseNet121-GeM-512FC (cls and ood re-rank)</strong>: 0.23862</li> <li><strong>ResNet101-GeM-2048FC (cls and ood re-rank)</strong>: 0.26994</li> <li><strong>Ensemble (cls and ood re-rank)</strong>: 0.28792</li> </ul> <h1>Resources</h1> <ul> <li><strong>GPU</strong>: 1x Nvidia 1080ti (11GB)</li> <li><strong>RAM</strong>: 32GB</li> <li><strong>HD</strong>: 1TB SSD</li> <li><strong>Processor</strong>: Intel i7 6800k</li> </ul> <h1>Acknowledgments</h1> <p>Thanks to Kaggle and Google for hosting this competition. Thanks as well to the whole community and to <a href="/filipradenovic">@filipradenovic</a> for open-sourcing his 2018 Google-Landmark-Retrieval solution.</p>
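<p>To make the inference and re-ranking pipeline above easier to follow, here is a minimal NumPy sketch of it (the descriptors are assumed to be already L2-normalized; all names are hypothetical and this is not the author's code):</p> <p><code>import numpy as np

def rank_landmarks(test_feats, class_centers, clf_probs, ood_scores, k=12):
    sims = test_feats @ class_centers.T                  # cosine similarity (L2-normalized inputs)
    topk = np.argsort(-sims, axis=1)[:, :k]              # 12 nearest class centers per test image
    topk_sims = np.take_along_axis(sims, topk, axis=1)
    soft = np.exp(topk_sims)
    soft = soft / soft.sum(axis=1, keepdims=True)        # softmax over the top-k similarities
    best_class = topk[:, 0]                              # predicted landmark id
    score = soft[:, 0] * clf_probs * ood_scores          # re-rank 1 and re-rank 2 multipliers
    return best_class, score</code></p>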
iMaterialist (Fashion) 2019 at FGVC6
18th place write-up (short meditation on results)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is our short meditation on our results and strategy:</p> <ol> <li><p>R101 Mask R-CNN 1024x1024 (1 fold, it took 5 days on our machine)</p></li> <li><p>Ensemble predictions from multiple snapshots of the R101 Mask R-CNN (select the prediction with the highest confidence when multiple predictions overlap)</p></li> <li><p>Train several multiclass classifiers (Xception, ResNeXt, DenseNet) to detect whether particular object classes are present on an image. Find the optimal threshold for discarding predictions from Mask R-CNN. <a href="https://github.com/musket-ml/classification_training_pipeline">Our Classification Pipeline</a></p></li> <li><p>Train a lot of segmentation networks, at least one per class, to refine the masks from Mask R-CNN (<a href="https://github.com/musket-ml/segmentation_training_pipeline">Segmentation Pipeline</a>) -&gt; This was our main source of improvements.</p></li> </ol> <p>At the same time we were desperately trying to find a solution for attributes. Initially nothing worked. But later we used the following approach:</p> <p>Take a mask crop and the full image, and feed them into two independent inputs of an image classifier with multiple outputs, one per independent attribute group (we found the groups by analyzing attribute co-occurrences, and assuming that some co-occurrences are labeling errors).</p> <p>Train a lot of such classifiers, then keep only those objects for which 75% of the classifiers agree with the result of the blend and for which most of them make confident predictions.</p> <p>This gave us a very slight improvement of our score (~0.00500 in total)</p> <p><strong>Errors:</strong></p> <ul> <li>Spending our pretty limited resources on attributes was a big error.</li> <li>We did not actually realise and use the full potential of Mask R-CNN :-(. Our initially high place on the leaderboard gave us a false feeling that we had used most of its potential.</li> </ul> <p><strong>Resources</strong></p> <p>We used: 2x1080Ti on my machine, 2x1080Ti on Denis' machine and a 1080 on Konst's machine.</p> <p>Regards, Pavel</p>
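<p>As a small illustration of the 75% agreement filter for attributes mentioned above, the decision for one detected object could be sketched like this (the thresholds and names are hypothetical, not the team's exact code):</p> <p><code>import numpy as np

def keep_object(probs, blend_attrs, agree_ratio=0.75, conf_thr=0.8):
    # probs: (n_classifiers, n_attributes) probabilities for one object
    # blend_attrs: 0/1 attribute vector proposed by the blend
    votes = np.round(probs)                                   # each classifier's hard decision
    agree = (votes == blend_attrs).all(axis=1)                # full agreement with the blend
    confident = np.maximum(probs, 1 - probs).min(axis=1) &gt; conf_thr
    return agree.mean() &gt;= agree_ratio and confident.mean() &gt;= 0.5</code></p>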
Freesound Audio Tagging 2019
Semi-supervised part of 20th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Freesound Audio Tagging 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First I want to thank the researchers at CVSSP, University of Surrey, UK for their great baseline <a href="https://github.com/qiuqiangkong/dcase2019_task2">code</a>. My entire system is built upon it.</p> <p><strong>Pre-processing</strong> Nothing special. 32 kHz sampling rate, 500 hop size so that 64 fps. Silence trimming below <code>max - 55 dB</code>, mel size 128. Randomly excerpt 4-second patches from the entire spectrograms. Shorter spectrograms are padded repeatedly.</p> <p><strong>Data augmentation</strong> 1. Spec-augment without time warping 2. Time reversal with 80 more classes (binary target with 160 classes in total) 3. I'll talk about MixUp later</p> <p><strong>Semi-supervised learning</strong> Almost the same as <a href="https://link.springer.com/chapter/10.1007/978-3-030-20873-8_26">https://link.springer.com/chapter/10.1007/978-3-030-20873-8_26</a>. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1605499%2Fcc2a1c6259b8b72194abef259d524448%2Ffreesound_model.png?generation=1561783764472053&amp;alt=media" alt=""></p> <p>Suppose (V, y_V) is a batch of curated data and W is a batch of noisy data without labels; then we have the algorithm shown below. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1605499%2F23d2ce6b7bb8c8d60b6e5577b09c32d1%2Ffreesound_algo.png?generation=1561784302344628&amp;alt=media" alt=""></p> <p>The backbone is a 9-layer CNN, the same as the one in the public kernels and the CVSSP baseline.</p> <p><strong>Loss function</strong> Focal + ArcFace. ArcFace is modified for multi-label classification. There are 2 reasons to use ArcFace here: 1. It enlarges the margin between positives and negatives, so that the threshold eta becomes less sensitive. 2. In the <a href="https://arxiv.org/abs/1801.07698">original ArcFace paper</a>, the authors state that ArcFace enforces less penalty on samples whose feature vectors have a large angle with the weight matrix than other margin-based losses. I suppose it would apply less "weight" to wrong pseudo-labels.</p> <p>In practice, using ArcFace makes my NN converge faster and reach a higher local lwlrap when using only curated data. I don't know what would happen if I trained the semi-supervised model without it because I started using it at a very early stage.</p> <p><strong>Training Details</strong> First I warm up the backbone CNN with the noisy set. For the semi-supervised model, I have 6 samples from the curated set and 58 samples from the noisy set in each batch. A Nadam optimizer and a cosine-annealing learning rate with linear warm-up are used. One can refer to my technical report, Table 2, for details about the hyperparameters.</p> <p><strong>Inference</strong> Arithmetic mean over a 0.25 s sliding window. TTA only on clips with short duration, by repeating the sliding window with Spec-augment. 
</p> <p>A single model trained on the entire training set gives me 0.712 lwlrap on the stage 1 test.</p> <p>Again, thanks to the host for hosting such an interesting competition. I am still on my way to my first gold medal :)</p>
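<p>As a rough illustration of the inference scheme above (64 fps features, so a 4-second window is 256 frames and a 0.25 s hop is 16 frames), a sliding-window average could look like this; <code>model</code> is assumed to map one log-mel patch to per-class probabilities, and this is not the author's code:</p> <p><code>import numpy as np

def predict_clip(model, logmel, win=256, hop=16):
    n_mels, n_frames = logmel.shape
    if n_frames &lt; win:                                   # pad short clips by repetition
        reps = int(np.ceil(win / n_frames))
        logmel = np.tile(logmel, (1, reps))
        n_frames = logmel.shape[1]
    preds = [model(logmel[:, s:s + win])
             for s in range(0, n_frames - win + 1, hop)]
    return np.mean(preds, axis=0)                        # arithmetic mean over the windows</code></p>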
iMaterialist (Fashion) 2019 at FGVC6
[Update] 1st place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Kagglers,</p> <p>My solution is based on the COCO challenge 2018 winners' article: <a href="https://arxiv.org/abs/1901.07518">https://arxiv.org/abs/1901.07518</a>. </p> <h1>Code:</h1> <p><a href="https://github.com/amirassov/kaggle-imaterialist">https://github.com/amirassov/kaggle-imaterialist</a></p> <h1>Model:</h1> <p><a href="https://github.com/open-mmlab/mmdetection/blob/master/configs/htc/htc_dconv_c3-c5_mstrain_400_1400_x101_64x4d_fpn_20e.py">Hybrid Task Cascade with ResNeXt-101-64x4d-FPN backbone</a>. This model has Mask mAP = 43.9 on the COCO dataset. This is SOTA for instance segmentation.</p> <h1>Validation:</h1> <p>For validation, I used 450 training samples split using <a href="https://github.com/trent-b/iterative-stratification">https://github.com/trent-b/iterative-stratification</a>.</p> <h1>Preprocessing:</h1> <p>I applied light augmentations from the <a href="https://github.com/albu/albumentations">albumentations</a> library to the original image. Then I use multi-scale training: in each iteration, the scale of the short edge is randomly sampled from [600, 1200], and the scale of the long edge is fixed at 1900. <img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/preproc.png" alt="preprocessing"></p> <h1>Training details:</h1> <ul> <li>pre-train from COCO</li> <li>optimizer: <code>SGD(lr=0.03, momentum=0.9, weight_decay=0.0001)</code></li> <li>batch_size: 16 = 2 images per GPU x 8 Tesla V100 GPUs</li> <li>learning rate scheduler: <code>if iterations &lt; 500: lr = warmup(warmup_ratio=1 / 3)
if epochs == 10: lr = lr * 0.1
if epochs == 18: lr = lr * 0.1
if epochs &gt; 20: stop</code></li> <li>training time: ~3 days.</li> </ul> <h1>Parameter tuning:</h1> <p>After the 12th epoch with the default parameters, the metric on the LB was <strong>0.21913</strong>. Next, I tuned the postprocessing thresholds using validation data: * <code>score_thr=0.5</code> * <code>nms={type: 'nms', iou_thr: 0.3}</code> * <code>max_per_img=100</code> * <code>mask_thr_binary=0.45</code></p> <p>This improved the metric on the LB: <strong>0.21913 -&gt; 0.30011.</strong></p> <h1>Test time augmentation:</h1> <p>I use 3 scales as well as horizontal flip at test time and ensemble the results. The testing scales are (1000, 1600), (1200, 1900), (1400, 2200). </p> <p>I drew a TTA scheme for Mask R-CNN, which is implemented in the mmdetection library. For Hybrid Task Cascade R-CNN, I rewrote this code. </p> <p>This improved the metric on the LB: <strong>0.30011 -&gt; 0.31074.</strong> <img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/tta.png" alt="TTA"></p> <h1>Ensemble:</h1> <p>I ensemble the 3 best checkpoints of my model. The ensemble scheme is similar to TTA. </p> <p>This improved the metric on the LB: <strong>0.31074 -&gt; 0.31626.</strong> <img src="https://raw.githubusercontent.com/amirassov/kaggle-imaterialist/master/figures/ensemble.png" alt="ensemble"></p> <h1>Attributes:</h1> <p>I didn't use attributes at all: they were difficult to predict and the removal of classes with attributes greatly improved the metric. 
</p> <p>During the whole competition, I deleted classes with attributes: <code>{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}</code> U <code>{27, 28, 33}</code>. But two days before the end I read <a href="https://www.kaggle.com/c/imaterialist-fashion-2019-FGVC6/discussion/94811#latest548137">the discussion</a> and added back the classes <code>{27, 28, 33}</code>. </p> <p>This improved the metric on the LB: <strong>0.31626 -&gt; 0.33511.</strong></p> <h1>Postprocessing for masks</h1> <p>My post-processing algorithm to avoid intersections between masks of the same class: <code>def hard_overlaps_suppression(binary_mask, scores):
    # go through the masks from the highest to the lowest score and
    # cut each new mask by everything that was already accepted
    not_overlap_mask = []
    for i in np.argsort(scores)[::-1]:
        current_mask = binary_mask[..., i].copy()
        for mask in not_overlap_mask:
            current_mask = np.bitwise_and(current_mask, np.invert(mask))
        not_overlap_mask.append(current_mask)
    return np.stack(not_overlap_mask, -1)</code></p> <h1>Small postprocessing:</h1> <p>I deleted objects with an area of less than 20 pixels. </p> <p>This improved the metric on the LB: <strong>0.33511 -&gt; 0.33621.</strong></p>
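<p>For clarity, here is a small, self-contained usage sketch of the overlap-suppression function above on toy masks (the shapes and scores are made up, not competition data):</p> <p><code>import numpy as np

h, w = 256, 256
masks = np.zeros((h, w, 3), dtype=bool)
masks[50:150, 50:150, 0] = True        # highest-scoring instance
masks[100:200, 100:200, 1] = True      # overlaps the first instance
masks[10:40, 10:40, 2] = True          # disjoint instance
scores = np.array([0.9, 0.8, 0.7])

clean = hard_overlaps_suppression(masks, scores)
print(np.logical_and(clean[..., 0], clean[..., 1]).any())  # False: the overlap was removed</code></p>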
Instant Gratification
IG's magic 22nd Public / 33rd Private [LB 0.9756]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In our case, the key to breaking the 0.974 score was clustering each label into 4 clusters. Actually, we ran many experiments, like 3 clusters for label 1 vs 3 clusters for label 0, 3 vs 2, 2 vs 2 and 4 vs 4. Our best CV was with models that used 3:3, but the LB score was always lower and showed that 4:4 gave a better result. <strong>So, we continued working on 3:3 and especially on 4:4 models.</strong> </p> <p>After breaking 0.974 on the LB, CV and LB scores were no longer correlated, and our best submission, which was 0.97488, had a CV of 0.968. So we decided to stop EDA and modeling and started focusing on finding a way to stabilize the gap between CV and LB... <strong>The key for that was ensembling</strong>: we used many models for ensembling, but the best results were based on these combinations:</p> <ul> <li>model1: GMM 4vs4</li> <li>model2: model1 + pseudolabeling with thresholds 0.9 for 1 and 0.1 for 0</li> <li>model3: GMM 3vs3</li> <li>model4: model3 + pseudolabeling with thresholds 0.9 for 1 and 0.1 for 0</li> </ul> <h3>What did not work for us:</h3> <ul> <li>Outlier removal</li> <li>Inverting outliers 1 to 0 and 0 to 1 using isolation forest</li> <li>Neural nets with embedding layers</li> <li>Stacking with linear metamodels</li> <li>PCA features</li> <li>Ordering features according to their importance to finally concatenate all subdatasets together.</li> </ul> <p>Finally, I want to thank <a href="/abhishek">@abhishek</a> for being such an amazing teammate and for all his great ideas. PS: This magic was discovered in the last 3 days and we were using 1:3 models during the first 2 days. All our top models were submitted 1 hour before the competition ended. So try your best and never give up. <a href="https://www.kaggle.com/rinnqd/kmeans-4-clusters-per-label-gmm-33th-lb-9756">You will find here one of our selected kernels</a></p>
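<p>To make the pseudo-labeling step above concrete, here is a minimal sketch of adding confidently predicted test rows back to the training data (function and variable names are hypothetical; the thresholds are the 0.9/0.1 values mentioned above):</p> <p><code>import numpy as np

def add_pseudo_labels(probs_test, X_train, y_train, X_test, hi=0.9, lo=0.1):
    # keep only the test rows the first model is very confident about
    sure = (probs_test &gt; hi) | (probs_test &lt; lo)
    pseudo_y = (probs_test[sure] &gt; 0.5).astype(int)
    X_aug = np.vstack([X_train, X_test[sure]])
    y_aug = np.concatenate([y_train, pseudo_y])
    return X_aug, y_aug  # refit the GMM-based model (e.g. 4 clusters per label) on this</code></p>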
Instant Gratification
Solution highlights and how to choose a submission
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Highlights.</p> <ol> <li><p>After training the GMM model, re-train the GMM by initializing the means of each class to [(means[0] + means[1])/2, (means[1] + means[2])/2, (means[2] + means[0])/2] to avoid falling into a local minimum.</p></li> <li><p>Assume your samples' mean is u, Class 0's mean is init_u0, and Class 1's mean is init_u1. Because some targets are flipped (5%), the real class means are: u0 = init_u0 + 0.05 * (init_u0 - init_u1), u1 = init_u1 + 0.05 * (init_u1 - init_u0). So I use x0' = 2 * u0 - x0, x1' = 2 * u1 - x1 to augment the data.</p></li> </ol> <p>How to choose a submission.</p> <p>I tested my model on data generated by make_classification. I found that the AUC on 512*512 test samples was always close to 0.975. I concluded that (Public score + Private score) / 2 would be close to 0.975. So I chose a low Public score model as one of my submissions.</p> <p>My kernel: <a href="https://www.kaggle.com/daishu/gmm-3-3-2">https://www.kaggle.com/daishu/gmm-3-3-2</a></p> <p>It's really hard for me to post in English. </p>
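<p>Since the mirror augmentation in point 2 is easy to get wrong, here is a minimal NumPy sketch of it as described above (names are mine, not the author's): the class means are first corrected for the ~5% flipped labels, then every sample is reflected through its corrected class mean.</p> <p><code>import numpy as np

def mirror_augment(X0, X1, flip_rate=0.05):
    # X0, X1: training rows currently labeled class 0 and class 1
    init_u0, init_u1 = X0.mean(axis=0), X1.mean(axis=0)
    u0 = init_u0 + flip_rate * (init_u0 - init_u1)   # corrected class-0 mean
    u1 = init_u1 + flip_rate * (init_u1 - init_u0)   # corrected class-1 mean
    X0_aug = 2 * u0 - X0                             # reflect each class-0 sample through u0
    X1_aug = 2 * u1 - X1                             # reflect each class-1 sample through u1
    return np.vstack([X0, X0_aug]), np.vstack([X1, X1_aug])</code></p>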
Instant Gratification
GMM solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I entered the Instant Gratification competition when it was 16 days to go, so there was already a lot of important information available in Kernels and Discussions. Shortly, in the Instant-Gratification competition we deal with a dataset of the following structure:</p> <ul> <li><p><a href="https://www.kaggle.com/c/instant-gratification/discussion/92930#latest-541375">Data seemed to be generated by groups</a> corresponding to the column <strong>wheezy-copper-turtle-magic</strong>, which means that we have 512 independent datasets of the size approximately 512 by 255 for training our models.</p></li> <li><p>Out of 255 features <a href="https://www.kaggle.com/fchmiel/low-variance-features-useless/">only features with high variance supposed to be important</a>. The number of important features varies from 33 to 47.</p></li> <li><p><a href="https://www.kaggle.com/speedwagon/quadratic-discriminant-analysis">QDA proved to be a good modeling approach</a> for this case and can give AUC of 0.966. </p></li> <li><p>This high quality can be explained by the fact that the <a href="https://www.kaggle.com/mhviraf/synthetic-data-for-next-instant-gratification">data probably was generated with make_classification function</a>.</p></li> </ul> <p>Knowing that let's look closer at what make_classification function has under the hood. This function has the following parameters:</p> <ul> <li>n_samples: The number of samples.</li> <li>n_features: The total number of features.</li> <li>n_informative: The number of informative features.</li> <li>n_redundant: The number of redundant features, linear combinations of informative features.</li> <li>n_repeated: The number of duplicated features.</li> <li>n_classes: The number of classes.</li> <li>n_clusters_per_class: The number of clusters per class.</li> <li>weights: The proportions of samples assigned to each class.</li> <li>flip_y: The fraction of samples whose class are randomly exchanged.</li> <li>class_sep: Larger values spread out the clusters/classes and make the classification task easier.</li> <li>hypercube: Parameter corresponding to the placement of the cluster centroids.</li> <li>shift: Shift of the feature values.</li> <li>scale: Scaling of the features by the specified value.</li> <li>shuffle: Shuffling the samples and the features.</li> <li>random_state: Random state.</li> </ul> <p>I invite you to follow the sequence of my thoughts about it.</p> <hr> <h3>Step 1. Size and features (first 6 parameters of make_classification function).</h3> <p>From the data structure (512 groups and 255 variables) and size (262144 = 512 * 512), we can assume that <strong>n_features=255</strong> and <strong>n_samples=1024</strong> were taken to generate both training and test data sets (both private and public).</p> <p>Now, let's talk about features. First of all, repeated features are exact copies of important features, and because we don't see such columns in competition data we'll set <strong>n_repeated=0</strong>. 
Let's generate a data set with the make_classification function and the following parameters:</p> <p>~~~~ fake_data = make_classification( n_samples=1024, n_features=255, n_informative=30, n_redundant=30, n_repeated=0, n_classes=2, shuffle=False) ~~~~</p> <p>By setting shuffle=False, I force the first 30 columns to be informative, the next 30 to be redundant and the others to be just noise. Doing this 1000 times, let's look at the distribution of the standard deviation of the informative, redundant and random features.</p> <p><a href="https://ibb.co/HG8WWDR"></a></p> <p>We see a clear difference in standard deviation for informative, redundant and random features; that's why selecting important features with a 1.5 threshold works so well. Moreover, there are no features in the competition data that have std bigger than 5, which leads us to the assumption that <strong>n_redundant=0</strong>.</p> <p>The number of important features ranges from 33 to 47, so <strong>n_informative</strong> is in <strong>{33,...,47}</strong>. The number of classes is obviously <strong>n_classes=2</strong>. </p> <hr> <h3>Step 2. Shift, Scale, and randomness (last 4 parameters of make_classification function).</h3> <p>Because the mean and standard deviation of the random columns are 0 and 1 respectively, we can assume that shift and scale were set to the defaults <strong>shift=0</strong>, <strong>scale=1</strong>. On top of that, the standard deviation for important columns also looks the same for the competition data and for the data that we've just generated. So it's a pretty promising assumption and we can go further. The parameters <strong>shuffle</strong> and <strong>random_state</strong> are not so important, because they don't change the nature of the data set.</p> <hr> <h3>Step 3. Most interesting parameters (why QDA is not so perfect?).</h3> <p>The parameters <strong>n_clusters_per_class, weights, class_sep, hypercube</strong> are the ones that we are not very sure about, especially <strong>n_clusters_per_class</strong>. Weights look to be the same because the target seemed to be balanced. What probably makes it impossible to get a 100% accurate model is that <strong>flip_y&gt;0</strong>, and we cannot predict this random factor in any case. However, what we can try is to build a model that predicts perfectly when <strong>flip_y=0</strong> and leave the rest to luck.</p> <p>QDA has proved to be a very good approach, but let me show you a case where it doesn't work so well:</p> <p><a href="https://ibb.co/bKhGFTq"></a></p> <p>The data set was generated with make_classification:</p> <p>~~~~ make_classification( n_samples=1000, n_features=2, n_informative=2, n_redundant=0, n_classes=2, n_clusters_per_class=2, flip_y=0.0, class_sep=5, random_state=7, shuffle=False ) ~~~~</p> <p>Now, let's look at how the QDA algorithm handles it:</p> <p><a href="https://ibb.co/pWRTdtD"></a></p> <p>Pretty good; however, we see that it doesn't capture the structure of four clusters in the data. From the make_classification documentation one can find that these clusters are Gaussian components, so the best way to find them is to apply a Gaussian Mixture model. </p> <p><em>Note: a Gaussian Mixture model will only give you clusters; you need to bind these clusters to one of the classes 0 or 1. You can do that by looking at the proportion of each class in a cluster and assigning the cluster to the dominant class.</em></p> <p>Let's look at how the GMM algorithm performed on this data:</p> <p><a href="https://ibb.co/Q9sPPxm"></a></p> <p>Nearly perfect! Let's move to step 4.</p> <hr> <h3>Step 4.
Effect of flipping the target</h3> <p>Knowing that we can build nearly the best classifier, what is the effect of flipping the target? Suppose that your classes are perfectly separable and you can assign probabilities that will lead to a perfect AUC. Let's see an example of 10000 points, 5000 in each class:</p> <p>Classes = [0, 0, ..., 0, 1, ..., 1, 1]</p> <p>Predictions = [0.0000, 0.0001, ..., 0.9998, 0.9999]</p> <p>Lets flip the target value for 2.5% (250) of the points (1) in the middle, (2) on the sides, (3) randomly, and look at the AUC:</p> <p><a href="https://ibb.co/M1cNwkw"></a></p> <p>We see that the impact of flips can dramatically change the result. However, let's face two facts:</p> <ol> <li>We can not predict this randomness.</li> <li>We can assume that the flips are evenly spread across our predictions.</li> </ol> <p>Let's also look at the distribution of AUC for the randomly flipped target.</p> <p><a href="https://ibb.co/PjBttyk"></a></p> <p>It means that results on unseen data can deviate by more than 0.005 in AUC even for the perfect model. So our strategy will be not to find the parameters to raise on the LB of the competition but to find the parameters that can perfectly predict the target for most of the data set.</p> <hr> <h3>Step 5. Experiments with synthetic data</h3> <p>How we can build a model that perfectly predicts data and not overfit to it? Because we have a strong hypothesis on how data was created we can tune model parameters such that it will predict perfectly for most of the synthetically generated data sets. Not showing competition data to it will save us from overfitting.</p> <p>We will be generating a data set of the same structure as the competition data:</p> <ul> <li><strong>n_samples=768</strong>, 512 for training and 256 for test</li> <li><strong>n_features=255</strong></li> <li><strong>n_important=33</strong></li> <li><strong>n_clusters_per_class=3</strong></li> <li>by variating <strong>weights</strong>, we will change the propotion of the dominant class from 0.5 to 0.6</li> <li><strong>flip=0</strong> to experiment if GM model can be completely correct</li> <li><strong>class_sep</strong> will be a random number between 0.9 and 1</li> <li><strong>hypercube</strong> will be chosen randomly between True and False</li> </ul> <p>Next thing to do is to generate hundreds of such data sets and optimize parameters of the GMM.</p> <p>For the first attempt, we'll set the parameter n_components in the GMM to 6, because we know that there will be 3 clusters per each of the 2 classes, and do the modeling with other parameters set to default. Let's look at the minimum, maximum and mean score that we can get for 100 different data sets:</p> <p><code> Min score = 0.44783168741949336 Mean score = 0.8159223370950822 Max score = 1.0 </code></p> <p>Wow! Sometimes we even reached the perfect classifier. Let's try to optimize some of the GMM parameters. It has parameters n_init and init_params which means that GMM model will be run several times starting from random initializations and then it selects the best iteration. Let's try to set n_init to 20 and init_params to random.</p> <p><code> Min score = 1.0 Mean score = 1.0 Max score = 1.0 </code></p> <p>Now it's perfect! Let's try it for more than 33 features (for example, let's take 47):</p> <p><code> Min score = 0.562083024462565 Mean score = 0.7917138432059158 Max score = 1.0 </code></p> <p>For 47 features it worked not so well. The secret sauce is the regularization of the covariance matrix. 
Let's look at the model quality for different values of reg_covar parameter of GM model:</p> <p><code> reg_covar = 1: Min score = 0.6936146721462308, Mean score = 0.8782139741161032 reg_covar = 5: Min score = 0.991250080422055, Mean score = 0.9998679217326921 reg_covar = 10: Min score = 0.9987766087594813, Mean score = 0.999945378120418 </code></p> <p>To tell you the short story, by a simple grid search for the best reg_covar parameter one can find that more features require larger reg_covar for better performance. By doing so you can find that the good parameters are:</p> <ul> <li>33 features: 2.0,</li> <li>34 features: 2.5,</li> <li>35 features: 3.5,</li> <li>36 features: 4.0,</li> <li>37 features: 4.5,</li> <li>38 features: 5.0,</li> <li>39 features: 5.5,</li> <li>40 features: 6.0,</li> <li>41 features: 6.5,</li> <li>42 features: 7.0,</li> <li>43 features: 7.5,</li> <li>44 features: 8.0,</li> <li>45 features: 9,</li> <li>46 features: 9.5,</li> <li>47 features: 10.</li> </ul> <p>We still have one thing not clear.</p> <h3>Important note: how do we know the number of clusters in advance?</h3> <p>We were building models in the assumption that there are 3 clusters per class, but how can we know it? Well, the answer is pretty simple, the right choice of the n_components will give better results:</p> <p>#Clusters per class = 4, #Components of GMM = 4: AUC = 0.7170955882352941</p> <p>#Clusters per class = 4, #Components of GMM = 6: AUC = 0.8263480392156862</p> <p>#Clusters per class = 4, #Components of GMM = 8: AUC = 0.9587009803921569</p> <p>#Clusters per class = 3, #Components of GMM = 4: AUC = 0.7400829259236339</p> <p>#Clusters per class = 3, #Components of GMM = 6: AUC = 1.0</p> <p>#Clusters per class = 3, #Components of GMM = 8: AUC = 0.9844049755554181</p> <p>#Clusters per class = 2, #Components of GMM = 4: AUC = 1.0</p> <p>#Clusters per class = 2, #Components of GMM = 6: AUC = 1.0</p> <p>#Clusters per class = 2, #Components of GMM = 8: AUC = 0.9924635532493205</p> <p>For me it seemed like for most of the groups the best option was taking 3 clusters per class, sometimes 2 was slightly better (but it might be the effect of the flipped target). I assumed that this parameter was set to 3 for all groups.</p> <hr> <h3>Conclusion</h3> <p>Parameters found in step 5 will give you an AUC of 0.99 minimum (for the true target of course)! So we will use these parameters to build a model for competition data. </p> <p>That approach gave me a 52nd place. Thanks for reading!</p>
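<p>To tie steps 3-5 together, here is a compact sketch of how the final per-block model could look with the reg_covar values found above (this is my reconstruction for illustration, with hypothetical names, not the exact competition code):</p> <p><code>import numpy as np
from sklearn.mixture import GaussianMixture

# reg_covar per number of informative features, taken from the grid search above
REG_COVAR = {33: 2.0, 34: 2.5, 35: 3.5, 36: 4.0, 37: 4.5, 38: 5.0, 39: 5.5, 40: 6.0,
             41: 6.5, 42: 7.0, 43: 7.5, 44: 8.0, 45: 9.0, 46: 9.5, 47: 10.0}

def fit_block(X_all, y_train, n_train):
    # X_all: high-variance columns of train + test for one magic block; y_train: train labels
    y_train = np.asarray(y_train)
    gm = GaussianMixture(n_components=6, n_init=20, init_params='random',
                         reg_covar=REG_COVAR[X_all.shape[1]], random_state=42)
    clusters = gm.fit_predict(X_all)                      # 6 Gaussian components (3 per class)
    cluster_to_class = np.zeros(6)
    for c in range(6):                                    # map each cluster to its dominant class
        idx = np.where(clusters[:n_train] == c)[0]
        if len(idx):
            cluster_to_class[c] = np.round(y_train[idx].mean())
    proba = gm.predict_proba(X_all)                       # cluster responsibilities
    return (proba * cluster_to_class).sum(axis=1)         # P(class 1) for every row</code></p>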
Google Landmark Retrieval 2019
2nd Place and 2nd Place Solution to Kaggle Landmark Recognition and Retrieval Competition 2019
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Retrieval 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for the great challenge; our solution has been released on arXiv. Further, we will open-source our code. Thanks.</p> <p>arXiv link: <a href="https://arxiv.org/pdf/1906.03990.pdf">https://arxiv.org/pdf/1906.03990.pdf</a> source code: <a href="https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark">https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/Research/landmark</a></p>
Instant Gratification
Hoxosh solution (private 37) with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is a sketch of our solution. I didn't plan to share it given there are better ones, and, more importantly, because I thought our solution would be similar to others. In fact there are two differences, the most important one being due to my teammate <a href="/cashfeg">@cashfeg</a>, hence the title. Given that he is very modest, he didn't think it was worth sharing. You can also have a look at how he found 3 clusters per class <a href="https://www.kaggle.com/c/instant-gratification/discussion/96632#latest-558215">here</a>.</p> <p>We started from the good QDA models disclosed by Vlad. QDA is great, but it was hard to get above 0.9736 on the public LB with it, if I remember correctly. The reason was clear to me: QDA only used half of the data, the train data. </p> <p>A model close to QDA is the Gaussian Mixture; it is the unsupervised equivalent of QDA. The issue is that GM with 2 components hardly gets you better than 0.80 CV and LB, which is probably why people did not use it till the last week of the competition. Then I had the idea to use QDA output to init GM. However, I didn't think of using the means and precisions from QDA to init GM (I did later), or of using Graphical Lasso as Dieter proposed later <a href="https://www.kaggle.com/christofhenkel/graphicallasso-gaussianmixture">here</a>. Instead, I subclassed sklearn's GaussianMixture with an additional way to initialize it with cluster representatives. Feeding QDA predictions to my GM reached 0.9703 on the public LB.</p> <p>This is the point when <a href="/cashfeg">@cashfeg</a> moved us near the top. He found that there were 3 clusters per class. He also had this simple, yet clever idea: he ran GM with 3 clusters on train rows with target 0, and a separate GM with 3 clusters on train rows with target 1. Then the 6 clusters were input to my GM model, which got 0.973x on the public LB.</p> <p>The rest is great team work with everybody contributing. We explored variants, tuned GM parameters, and tried to exploit more of the data generation properties. We didn't find a good way to align centroids on hypercube vertices, but we did filter seeding clusters by keeping runs where the seeding clusters had similar size (another <a href="/cashfeg">@cashfeg</a> idea).</p> <p>What didn't work? </p> <p>Using a likelihood function that takes into account the target flip rate (yet another idea of <a href="/cashfeg">@cashfeg</a> ). We are still wondering why it didn't help. </p> <p>Kernels were a pain. The submission process requires you to run the kernel twice, once in commit, then in submit. There were outages in the last few days, and also a lot of time lost because of the lack of warnings or proper error messages. I think KO competitions are a good idea, but I will rely on notebooks I run on another machine if I enter another KO competition. I'll then copy notebooks to kernels for submissions. I do think kernels will keep improving and that my concerns will be memories, but until then I'll rely on Anaconda ;)</p> <p>The main takeaway for me is that I was very happy to team up with such talented teammates.</p> <p>Update: I made public one of our kernels. 
It is not the very best nor one that we selected, but it is close to best, and simpler, with private LB above 0.975. <a href="https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412">https://www.kaggle.com/cpmpml/fork-of-cpmp-017-2-stage-mean-precisision-76aa21?scriptVersionId=16037412</a></p>
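<p>For readers who want to try the seeding idea described above, here is a rough sketch of the two-stage fit (the author subclassed sklearn's GaussianMixture to seed it with cluster representatives; here the built-in <code>means_init</code> argument plays that role, and all names are mine):</p> <p><code>import numpy as np
from sklearn.mixture import GaussianMixture

def fit_seeded_gmm(X_train, y_train, X_test, n_per_class=3, seed=0):
    # stage 1: find 3 cluster means inside each training class
    y_train = np.asarray(y_train)
    means = []
    for cls in (0, 1):
        gm_cls = GaussianMixture(n_components=n_per_class, random_state=seed)
        gm_cls.fit(X_train[y_train == cls])
        means.append(gm_cls.means_)
    init_means = np.vstack(means)                # 6 seeds; the first 3 come from class 0
    # stage 2: unsupervised 6-component GMM on train + test, seeded with those means
    gm = GaussianMixture(n_components=2 * n_per_class, means_init=init_means,
                         random_state=seed)
    gm.fit(np.vstack([X_train, X_test]))
    proba = gm.predict_proba(X_test)
    return proba[:, n_per_class:].sum(axis=1)    # mass of the clusters seeded from class 1</code></p>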
Instant Gratification
How to Score LB 0.975
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>LB 0.970</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F50e532f1b9ca5d0ddd783e04e7422f20%2Fimg1.jpg?generation=1602508821789459&amp;alt=media" alt=""></p> <h1>LB 0.975</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fc2be5521e7c585f7776c21272d0ec925%2Fimg2.jpg?generation=1602508834244506&amp;alt=media" alt=""></p> <h1>Summary</h1> <p>It was shown <a href="https://www.kaggle.com/cdeotte/pseudo-labeling-qda-0-969" target="_blank">here</a> that QDA + Pseudo Labeling will score LB 0.970. The competition data appears to be created with Sklearn's <code>make_classification</code>. EDA shows that <code>n_clusters_per_class=3</code>. To score LB 0.975, we need to use QDA with 6 ellipses instead of 2 ellipses. Code is posted <a href="https://www.kaggle.com/cdeotte/3-clusters-per-class-0-975/" target="_blank">here</a>. (with more advanced techniques in Appendix 2).</p> <p>We find the 6 ellipses with <code>sklearn.mixture.GaussianMixture</code>. First find the 3 ellipses of <code>target=0</code> then the 3 ellipses for <code>target=1</code>. Then label each row of the training data with its ellipse number 0, 1, 2, 3, 4, 5. Finally call <code>QDA.fit(X,labels)</code> and <code>QDA.predict_proba(test)</code>. Then <code>test_pred = Pr(in ellipse 3) + Pr(in ellipse 4) + Pr(in ellipse 5)</code> where <code>Pr()</code> denotes probability.</p> <h1>EDA to determine <code>n_clusters_per_class</code></h1> <p>Clusters are located at hypercube corners. Therefore clusters have centers whose coordinates are 1's and -1's. To determine how many clusters per class, plot a histogram of all the variables' means grouped by target. If there is only 1 cluster per class we will see bumps at -1, and 1. If two clusters per class, we will see bumps at -1, 0, and 1. If three clusters per class, we will see bumps at -1, -1/3, 1/3, and 1.</p> <p>Within <code>magic=0</code> sub dataset, there are approx 40 features and 2 targets. Calculate the means of these 80 groups of numbers. Do this for every sub dataset. In total, we have approx 41,000 means. Now plot their histogram.</p> <p>More explanation why this works is provided in Appendix 1 <a href="https://www.kaggle.com/cdeotte/3-clusters-per-class-0-975/" target="_blank">here</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F37c5c7d4faa1689bf77c01efa433ab66%2Fimg3.png?generation=1602508847384837&amp;alt=media" alt=""></p>
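<p>A minimal sketch of the recipe above for one magic block (this follows the described steps; it assumes every ellipse receives at least one training row so that QDA sees all 6 labels):</p> <p><code>import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def six_ellipse_qda(X_train, y_train, X_test):
    y_train = np.asarray(y_train)
    labels = np.zeros(len(X_train), dtype=int)
    for cls in (0, 1):
        idx = np.where(y_train == cls)[0]
        gm = GaussianMixture(n_components=3, random_state=0).fit(X_train[idx])
        labels[idx] = gm.predict(X_train[idx]) + 3 * cls   # ellipses 0-2: target 0, 3-5: target 1
    qda = QuadraticDiscriminantAnalysis().fit(X_train, labels)
    prob = qda.predict_proba(X_test)                       # columns ordered by label 0..5
    return prob[:, 3:].sum(axis=1)                         # Pr(ellipse 3) + Pr(4) + Pr(5)</code></p>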
Instant Gratification
Public 4th place / Private 32nd place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Before I begin, I'm thankful to my teammates. The idea is below.</p> <p><strong>1. Feature Engineering</strong> - KernelPCA (cosine kernel) - GMM - AgglomerativeClustering</p> <p>Tried and failed: PCA, SVD, AutoEncoder, DAE, etc.</p> <p><strong>2. Outlier Replacement</strong> - I predicted values using NuSVC. - Based on a threshold, the true values were replaced when they differed from the predicted value. - Re-training</p> <p><strong>3. Pseudo Labeling</strong> - Run for each of the 512 cases at a ratio of 0.2.</p> <p><strong>4. Model</strong> - NuSVC</p> <p><strong>5. Partial Prediction Value Scaling</strong> - StandardScaler (improved LB by 0.002)</p> <p><strong>6. Seed ensembles</strong> - Using four seeds</p> <p>My code is below - Kernel public score: 0.97513 - Kernel private score: 0.97565 <a href="https://www.kaggle.com/hyeonho/32th-solution-public-0-97513-private-0-97565?scriptVersionId=16042798">https://www.kaggle.com/hyeonho/32th-solution-public-0-97513-private-0-97565?scriptVersionId=16042798</a></p>
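<p>As an illustration of the outlier-replacement step (item 2), here is one way such label cleaning could be coded; the probability threshold and NuSVC settings are my guesses, not the team's exact values:</p> <p><code>import numpy as np
from sklearn.svm import NuSVC

def replace_outliers(X, y, threshold=0.95):
    clf = NuSVC(probability=True, random_state=0)
    clf.fit(X, y)
    p1 = clf.predict_proba(X)[:, 1]            # model's belief that the row is class 1
    y_clean = y.copy()
    y_clean[(p1 &gt; threshold) &amp; (y == 0)] = 1   # confidently predicted 1 but labeled 0
    y_clean[(p1 &lt; 1 - threshold) &amp; (y == 1)] = 0
    return y_clean                             # re-train on the cleaned labels</code></p>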
Instant Gratification
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a fun competition and I enjoyed it much. Hope you all have had the same experience. </p> <p>Most the things I did was revealed in public kernels. The only unrevealed key to this competition was to cluster each class before modeling it. I explained pictorially why it works at <a href="https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974">https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974</a>. Everything else was revealed by the many genius and talented people in this competition. They were not only smart but kind and generous enough to share their findings. I am not gonna name them one by one here because we all know who they are.</p> <p>I experimented so many different things for this competition including: * <strong>different number of models to blend</strong>: I believed the more number of models I blend together the more robust answer I would get so I tried to maximize this parameter and have the maximum number of models I could have in 1 run which ended up being 42 models of the same type. * <strong>different number of folds per model</strong>: this item was another important factor because it had a trade-off in it. the more folds I used, the more robust predictions. I achieved robustness by increasing number of models I used though, so I used this parameter for regularization. I chose 2 folds in the end so that despite we have only 512 samples per each <code>wheezy-copper-turtle-magic</code> I only used half of them to train a model and the other half to validate my model. I noticed if my model can have AUC 0.9748+ on only 50% of the data and generalize well on both the training set and public leader board, why not? let's continue with this. * <strong>different clustering algorithms</strong>: because I knew the data was multivariate gaussian distribution around the nodes of a hypercube (refer to <code>make_classification</code> documentation on sklearn and its source code on github) I assumed the best way to cluster them would be by using <code>mixture.GaussianMixture</code> and I ended up using it too. However, I though a lot about different clustering algorithms and studied the document at <a href="https://scikit-learn.org/stable/modules/clustering.html">https://scikit-learn.org/stable/modules/clustering.html</a> carefully and experimented with other clustering algorithms I believed might be useful. * <strong>different number of clusters</strong>: I used the elbow rule and tsne and other algorithms to figure this one out but I couldn't verify what is the exact number of cluster per class they used in data generation. Nonetheless, I ran an experiment on my own synthetic data and by analyzing 1000 experiments I figured whether they are 2 or 3 clusters in the data itself if I lump data into 3 clusters I would get better validation and test AUCs so I continued with 3 clusters per class. * <strong>different classifier algorithms</strong>: It was kinda obvious that GMM is the way to go (Thanks to <a href="/christofhenkel">@christofhenkel</a>, he was very underrepresented in this competition despite his great contributions. You have my respect). 
Nonetheless, I tried two different algorithms as well. * <strong>different scalers</strong>: First of all I figured I need to add scaler to ensure non-ill-conditioned matrices for covariance calculations. I tried two different scalers, results were not that much different but I continued with <code>StandardScaler</code> again because features were linear combination of standard normal distributions. * <strong>different regularizations</strong>: other than using 2 folds for training I used regularization parameters in both of the clustering and classifier algorithms. In the end I decided to go with their default values since training on 50% of the data was enough regularization. * <strong>model by model hyperparameter tuning and regularization</strong>: I tried these options as well but they didn't work out well for me. * <strong>averaging ranks instead of probabilities</strong>: I tried it both ways sometimes ranks worked out better sometimes probabilities. </p> <p><strong>my full code</strong>: <a href="https://www.kaggle.com/mhviraf/mhviraf-s-best-submission-in-instant-gratification">https://www.kaggle.com/mhviraf/mhviraf-s-best-submission-in-instant-gratification</a></p> <p><strong>how data was generated</strong>: <a href="https://www.kaggle.com/mhviraf/synthetic-data-for-next-instant-gratification">https://www.kaggle.com/mhviraf/synthetic-data-for-next-instant-gratification</a></p> <p>Below is the TL;DR of <a href="https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974">https://www.kaggle.com/mhviraf/instant-gratification-lb-score-0-974</a></p> <p><img src="https://www.kaggleusercontent.com/kf/16019409/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..jKX7oeRp9n8vXS4jU1p_FA.8jtkixQmRslkeBCAyKsGy2kXmOPDxZER9Z0Zud0AT8uSDhMEG_-5Qjov5X9LTzhWx6bdgcAVH56qkIvIzERPdaQM5no1505hT0bsBidIGd93TTUGVAUg1YKsLADVtepqaXPEBPUQjAO-zO6-GnvjAoQ6He0WDuOBxkvlHPpzrRY.zeCjOzfRQVEgKzKBrtkz_g/__results___files/__results___7_2.png" alt=""></p> <p><img src="https://www.kaggleusercontent.com/kf/16019409/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..jKX7oeRp9n8vXS4jU1p_FA.8jtkixQmRslkeBCAyKsGy2kXmOPDxZER9Z0Zud0AT8uSDhMEG_-5Qjov5X9LTzhWx6bdgcAVH56qkIvIzERPdaQM5no1505hT0bsBidIGd93TTUGVAUg1YKsLADVtepqaXPEBPUQjAO-zO6-GnvjAoQ6He0WDuOBxkvlHPpzrRY.zeCjOzfRQVEgKzKBrtkz_g/__results___files/__results___9_2.png" alt=""></p> <p>Happy kaggling. Hope to see you all soon in the next competitions.</p>
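<p>One small illustration of the rank-averaging bullet above: with an AUC metric only the ordering matters, so the two blends below differ only in whether each model's scores are rank-transformed before averaging. This is a generic sketch, not the author's code.</p> <p><code>import numpy as np
from scipy.stats import rankdata

def blend(predictions, use_ranks=True):
    preds = np.asarray(predictions, dtype=float)          # shape (n_models, n_samples)
    if use_ranks:
        preds = np.vstack([rankdata(p) for p in preds])   # rank-transform every model
    return preds.mean(axis=0)                             # averaged score for the submission</code></p>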
Instant Gratification
Public 13th / Private 14th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to the winners and to all of those who learned from this competition, and thank you to the Kaggle team for hosting this interesting competition.</p> <p>A detailed explanation of my journey and the code to reproduce the result can be found here ( <a href="https://www.kaggle.com/harangdev/instant-gratification-14th-solution">https://www.kaggle.com/harangdev/instant-gratification-14th-solution</a> ).</p> <p>In this thread I'll summarize the methods that I used.</p> <ol> <li><p><code>GaussianMixture</code> with <code>n_components=6</code> and <code>n_init=20</code> can classify perfectly, except for the flipped labels, if you fit all of the data including the private test set.</p></li> <li><p>To leverage the fact that the metric is ROC AUC, I randomly assigned an integer to each test sample within each predicted class such that the max of the negative class is smaller than the min of the positive class. (That is, I ordered the test samples in a line, rather than grouping them into 0 and 1.)</p></li> <li><p>Then I selected the two submissions that scored lowest on the public leaderboard. I thought that, with a fixed number of flipped labels, a low score on the public set means a high score on the private set. I actually confirmed it by experimenting with a synthetic dataset. But it turned out that this method wasn't much helpful. (Maybe I should dig deeper why -&gt; <strong>6/25 EDITED: Actually my experiment setting was wrong. Getting a low score on the public leaderboard has nothing to do with getting a high score on the private leaderboard. So it was completely random after making a perfect classifier.</strong>)</p></li> </ol> <p>I had about 35 submissions that could have brought me gold, and 15 submissions that could have brought me to 1st place, but I don't regret it because I had my rules when selecting the submissions.</p> <p>Lastly, special thanks to <a href="/cdeotte">@cdeotte</a> and <a href="/mhviraf">@mhviraf</a> for sharing important ideas about the dataset in the early stage.</p>
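<p>The rank trick in point 2 can be hard to visualize, so here is a small sketch; the author assigned random integers within each predicted class, while this version orders rows by the model score inside each class, which preserves the same class separation:</p> <p><code>import numpy as np

def class_preserving_ranks(pred_class, pred_score):
    # sort primarily by predicted class, secondarily by score
    order = np.lexsort((pred_score, pred_class))
    ranks = np.empty(len(order), dtype=float)
    ranks[order] = np.arange(len(order))       # max rank of class 0 is below min rank of class 1
    return ranks / ranks.max()                 # rescale to [0, 1] for the submission file</code></p>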
Instant Gratification
Please, share your basic idea! [17th place inside]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>tldr; I used <code>sklearn.GaussianMixture</code> with parameter <code>n_components=6</code>. My best result is CV: 0.97533, Public: 0.97444, Private: 0.97578</strong></p> <p>Well, we know that the datasets were possibly generated using <code>sklearn.make_classification</code>! So, let's play with a 2D synthetic <code>make_classification</code> dataset:</p> <p><code> X, y = make_classification( n_samples=2000, n_features=2, n_informative=2, n_redundant=0, n_repeated=0, n_classes=2, n_clusters_per_class=2, flip_y=0.05, class_sep=2.5, hypercube=True, shuffle=False, random_state=777 ) </code> Let's fit <code>sklearn.QuadraticDiscriminantAnalysis</code>: <img src="https://i8.wampi.ru/2019/06/21/qda.png" alt="qda"> Looks not bad. But wait, what do we know about these clusters? Yes, they all have a Gaussian nature! Let's try <code>sklearn.GaussianMixture</code>:</p> <p><code> gm = GaussianMixture( n_components=4, n_init=20, init_params="kmeans", random_state=456 ) </code> <img src="https://i8.wampi.ru/2019/06/21/gm.png" alt="gm"> Hmmm... much better! And we just need to map these clusters to the target. Let's do it. Check the <strong>InstatGratificationClassifier.py</strong> file: <code> igc = InstantGratificationClassifier( estimator=GaussianMixture( n_components=4, n_init=20, init_params="kmeans", random_state=456 ) ) </code> <img src="https://i7.wampi.ru/2019/06/21/igc.png" alt="igc"> It looks great 🥈 </p> <p>And it is really easy to estimate that all 512 blocks were generated with the parameter <code>n_clusters_per_class=3</code>. This is my basic idea to beat the 0.974+ score.</p> <p><strong>Please, share your basic idea!</strong></p>
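<p>The classifier file itself is not shown in the post, so as one plausible illustration, the cluster-to-target mapping wrapped by such a classifier could be implemented like this (all names and settings beyond those quoted above are my assumptions):</p> <p><code>import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_cluster_probs(X_train, y_train, X_test, n_components=6, seed=456):
    y_train = np.asarray(y_train)
    gm = GaussianMixture(n_components=n_components, n_init=20,
                         init_params="kmeans", random_state=seed)
    gm.fit(np.vstack([X_train, X_test]))                        # unsupervised fit on all rows
    resp = gm.predict_proba(X_train)                            # cluster responsibilities on train
    pos_rate = (resp * y_train[:, None]).sum(0) / resp.sum(0)   # P(target=1 | cluster)
    return gm.predict_proba(X_test) @ pos_rate                  # weighted by cluster membership</code></p>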
Instant Gratification
Public 19th Solution / Private 70th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>Preprocessing</h3> <ol> <li>Feature selection with variance</li> <li>Kernel PCA</li> </ol> <p>Tried and failed: PCA, SVD, AutoEncoder, DAE, etc.</p> <h3>Feature Engineering</h3> <p>We used the following three methods to make variables that describe the distribution of the dataset.</p> <ol> <li>GMM_PRED</li> <li>GMM_SCORE</li> <li>HIST_PRED</li> </ol> <p><img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FcdwpIM%2Fbtqwfl7ksHu%2FHrOefi5xBBPdVrJGSd2tSk%2Fimg.png" alt=""></p> <p>By the way, one thing I found by mistake was that duplicating GMM_PRED and GMM_SCORE would increase the score. The results below are still a mystery to our team.</p> <ul> <li>Gmm_pred 1 + Gmm_score 0 : LB 0.97465</li> <li>Gmm_pred 2 + Gmm_score 0 : LB 0.97475</li> <li>Gmm_pred 3 + Gmm_score 0 : LB 0.97479</li> <li>Gmm_pred 4 + Gmm_score 0 : LB 0.97480</li> <li>Gmm_pred 5 + Gmm_score 0 : LB 0.97481</li> <li>Gmm_pred 5 + Gmm_score 1 : LB 0.97482</li> <li>Gmm_pred 5 + Gmm_score 2 : LB 0.97482</li> <li>Gmm_pred 5 + Gmm_score 3 : LB 0.97473</li> </ul> <h3>Model</h3> <ul> <li>1st layer : NuSVC + NuSVC2 + QDA + SVC + KNN + LR / Stratified with GMM_LABEL</li> <li>2nd layer : LGBM + MLP / Stratified</li> <li>3rd layer : 1st layer + 2nd layer / Average <img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FVj3FG%2Fbtqwf3ylVfO%2FvvVCmYTe7z4TFIn8AKkSK1%2Fimg.png" alt=""></li> </ul> <h3>Private Test with Make Classification</h3> <p>The process is as follows.</p> <ol> <li>Estimate the number of samples each private magic value n (0~511) has, using the linear relationship between that number and AUC (e.g., if private magic 0 has AUC 0.97xx, then its estimated count is 250).</li> <li>Create public and private data for each seed using make_classification with the estimated count.</li> <li>Test the model with the generated public/private datasets and check the CV, public and private scores.</li> <li>Finally, choose a model whose CV + public + private is close to 0.975; anything else we considered overfitting. <img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FcsSklH%2FbtqweIV0tL1%2FecqDnPjxbQ0CKLUPodtEm0%2Fimg.png" alt=""></li> </ol> <p>But actually it did not work out.</p> <h3>Resource</h3> <p>Code is below - kernel public score 0.97491 : <a href="https://www.kaggle.com/chocozzz/hyun-stacking?scriptVersionId=15820611">https://www.kaggle.com/chocozzz/hyun-stacking?scriptVersionId=15820611</a> </p> <p>Best Score <img src="https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fk.kakaocdn.net%2Fdn%2FbX5sHq%2FbtqwfkURVtU%2FNzDybSbf4K5mKcC6whKhf0%2Fimg.png" alt=""></p>
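<p>For readers unfamiliar with the GMM_PRED / GMM_SCORE idea, here is a generic sketch of how such distribution-based features can be produced per magic block (the component count and other settings are assumptions, not the team's exact configuration):</p> <p><code>import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_features(X, n_components=6, seed=0):
    gm = GaussianMixture(n_components=n_components, n_init=10, random_state=seed)
    gm.fit(X)
    gmm_pred = gm.predict_proba(X)               # responsibility of each Gaussian component
    gmm_score = gm.score_samples(X)[:, None]     # per-sample log-likelihood under the mixture
    return np.hstack([gmm_pred, gmm_score])      # extra columns for the downstream models</code></p>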
iMaterialist (Fashion) 2019 at FGVC6
Attributes classification solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all!</p> <p>We tried to do attribute classification. The first attempts used a 92-way classifier, but we later switched to a technique from image captioning, which generates the next attribute using image features and knowledge of the previously generated attributes. We used an implementation of the paper <a href="https://arxiv.org/pdf/1502.03044.pdf">"Show, Attend and Tell: Neural Image Caption Generation with Visual Attention"</a> from <a href="https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning">this</a> repo. Below are some findings and visualizations. <img src="http://i68.tinypic.com/rvfrtl.png" alt="t-shirt proper"> It works, to a degree. Above are the proper attributes for a t-shirt. This next one has one incorrect attribute; however, it looks ambiguous (but I'm not an expert in this issue =)): <img src="http://i66.tinypic.com/ta2emv.png" alt="t-shirt ambiguous"> Some annotations miss the length of the t-shirt, so if the model predicts it, this turns into a false positive: <img src="http://i67.tinypic.com/33ngcio.jpg" alt="t-shirt missed length"> The same goes for shorts and other classes. The model can correctly predict the majority of attributes: <img src="http://i68.tinypic.com/2r6ehc5.png" alt="shorts proper"> However, there are plenty of images with one wrongly predicted attribute, so the whole entry becomes a false positive. These cases are often ambiguous: compare the shorts length on the first two images and the style on the last two (frayed in the annotations and no/washed in the predictions): <img src="http://i64.tinypic.com/2882ov5.png" alt="mislabeled shorts"></p> <p>So with a better metric on attributes this challenge could be more interesting.</p>
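<p>To make the captioning framing concrete, here is a hedged sketch of an attribute decoder: an LSTM cell emits one attribute id at a time, conditioned on a global image feature and on the attributes emitted so far. All names, dimensions, and the greedy decoding loop are illustrative assumptions; the repo the authors used adds spatial attention over the feature map, which is omitted here for brevity.</p>
<pre><code>
# Minimal attribute-sequence decoder sketch (no attention); not the authors' code.
import torch
import torch.nn as nn

class AttributeDecoder(nn.Module):
    def __init__(self, n_attributes=92, feat_dim=2048, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(n_attributes + 2, emb_dim)  # +2 for start/end tokens
        self.init_h = nn.Linear(feat_dim, hid_dim)
        self.rnn = nn.LSTMCell(emb_dim + feat_dim, hid_dim)
        self.fc = nn.Linear(hid_dim, n_attributes + 2)

    def greedy_decode(self, img_feat, max_len=10, start_id=92):
        # img_feat: (batch, feat_dim) pooled CNN features
        h = torch.tanh(self.init_h(img_feat))
        c = torch.zeros_like(h)
        token = torch.full((img_feat.size(0),), start_id,
                           dtype=torch.long, device=img_feat.device)
        emitted = []
        for _ in range(max_len):
            step_in = torch.cat([self.embed(token), img_feat], dim=1)
            h, c = self.rnn(step_in, (h, c))
            token = self.fc(h).argmax(dim=1)   # next attribute id (or end token)
            emitted.append(token)
        return torch.stack(emitted, dim=1)     # cut at the end token in post-processing
</code></pre>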
Google Landmark Recognition 2019
From 0.08695 to 0.14681 by TTA
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2019 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My computation resources only allowed me to train the Se_NeXt 50 with an image size of 96 x 96 in a reasonable time. I got a private score of 0.08695 without any TTA. So for TTA, I tried to use "larger" images to pick up details by gradually chopping off the edges at fixed fractions (2.5%, 5%, 7.5%, 10%, 12.5%), then resizing each crop back to 96 x 96. With left-right flips, I got a total of 12 predictions per image. I then selected the top-1 prediction for each image and the score rose to 0.14681. It was interesting to note that averaging the probabilities of the common classes for each image gave a worse score.</p>
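<p>A hedged sketch of this crop-and-flip TTA is below: six center crops (0% to 12.5% trimmed from each edge) times two flips give 12 views, and the single most confident (class, probability) pair is kept per image. <code>model</code> is assumed to map a preprocessed view to a vector of class probabilities; the helper names are illustrative, not from the author's pipeline.</p>
<pre><code>
# Illustrative TTA sketch; `model` and `to_tensor` are placeholders for your pipeline.
import numpy as np
from PIL import Image, ImageOps

CROP_FRACTIONS = [0.0, 0.025, 0.05, 0.075, 0.10, 0.125]

def tta_top1(model, to_tensor, img: Image.Image, size=96):
    w, h = img.size
    views = []
    for f in CROP_FRACTIONS:
        dx, dy = int(w * f), int(h * f)
        crop = img.crop((dx, dy, w - dx, h - dy)).resize((size, size))
        views += [crop, ImageOps.mirror(crop)]               # original + left-right flip
    probs = np.stack([model(to_tensor(v)) for v in views])   # shape: (12, n_classes)
    best = probs.reshape(-1).argmax()
    return best % probs.shape[1], probs.reshape(-1)[best]    # (class id, confidence)
</code></pre>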
iMaterialist (Fashion) 2019 at FGVC6
just a short tutorial to get 25th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iMaterialist (Fashion) 2019 at FGVC6 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>After removing the classes with attributes and following <a href="https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html">this tutorial</a> with images that have a minimum edge length of 800, I got 25th place (Public: 0.22209 / Private: 0.20699).</p>
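<p>One possible reading of that recipe, sketched below with heavy assumptions: drop every annotation whose ClassId carries attribute suffixes (encoded with underscores in the competition CSV), then fine-tune torchvision's Mask R-CNN as in the linked tutorial with <code>min_size=800</code>. Column names, the underscore convention, and the class count are assumptions, not confirmed by the write-up.</p>
<pre><code>
# Hypothetical preprocessing + model setup; not the author's actual code.
import pandas as pd
import torchvision

df = pd.read_csv("train.csv")
# Keep only masks whose ClassId has no attribute suffix (e.g. "23", not "23_115_136").
df = df[~df["ClassId"].astype(str).str.contains("_")]

model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    num_classes=46 + 1,   # 46 apparel categories + background
    min_size=800,         # matches "minimum edge length of 800"
)
</code></pre>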
Instant Gratification
A way to get silver medal without magic
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Instant Gratification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to every team that went through the competition. I have found that most top-team solutions are related to Gaussian distributions and data generation, so I'm going to talk about some things that have not been mentioned. In this competition, stacking and pseudo labeling are enough to win a silver medal, provided they work together.</p> <p>Before I found the wonderful use of GMM, I used QDA, NuSVC and other models to stack and tried to improve my scores. At that time, pseudo labeling brought us to 0.969, but 0.970 was still a hard barrier to break through.</p> <p>Maybe a lot of people, like me, have found that <strong>stacking doesn't work with pseudo labeling</strong>. After a few days of observation and reflection, I found that <strong>stacking could improve CV, but not LB</strong>. Then I came to the conclusion that some modifications to the public pseudo-labeling kernels were needed to make stacking work.</p> <p>The key to the problem is that the original code uses the same test points for both training and prediction. If we instead use cross-validation to predict the test points that were used to make the pseudo labels, stacking starts to work.</p> <p>More detailed code is <a href="https://www.kaggle.com/wanliyu/how-to-make-stacking-work-with-pseudo-labeling">here</a>. The kernel uses QDA, GMM (not magic), NuSVC and Label Spreading for stacking and gets 0.97248 in private score. That's enough to win a silver medal without magic.</p>
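<p>The fix can be sketched as follows (a hedged illustration, not the linked kernel): when confident test points are added to the training set as pseudo labels, the first-level predictions for those points should come from folds that did not train on them, so the second-level stacker never sees leaked information. QDA stands in here for any first-level model, and the confidence thresholds are assumptions.</p>
<pre><code>
# Illustrative out-of-fold pseudo-labeling for one block; thresholds/model are assumptions.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn.model_selection import StratifiedKFold

def oof_with_pseudo(X_tr, y_tr, X_te, proba_te, lo=0.01, hi=0.99, seed=0):
    conf = (proba_te < lo) | (proba_te > hi)                  # confident test points
    X_ps, y_ps = X_te[conf], (proba_te[conf] > 0.5).astype(int)
    X_all, y_all = np.vstack([X_tr, X_ps]), np.hstack([y_tr, y_ps])

    oof = np.zeros(len(X_all))
    test_pred = np.zeros(len(X_te))
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    for tr_idx, va_idx in skf.split(X_all, y_all):
        clf = QDA(reg_param=0.5).fit(X_all[tr_idx], y_all[tr_idx])
        oof[va_idx] = clf.predict_proba(X_all[va_idx])[:, 1]  # OOF even for pseudo rows
        test_pred += clf.predict_proba(X_te)[:, 1] / skf.n_splits
    # Confident test rows take their out-of-fold value, so the stacker sees no leakage.
    test_pred[conf] = oof[len(X_tr):]
    return oof[:len(X_tr)], test_pred
</code></pre>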
Predicting Molecular Properties
#57 Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This turned out to be one of my favorite contests. I have always been fascinated by chemistry and have a reasonable knowledge base in chemistry in general. Originally, I hoped to use my chemical intuition to get an edge in this contest. But I realized soon enough that, since I had zero knowledge in molecular modeling, my edge was very slim.</p> <p>I sat on the fence for a very long time, reluctant to invest time in learning the state of the art of molecule representation and modeling of molecular properties. Luckily, the Kaggle community came through with many great kernels that I could sample quickly and zero in on a few that were most promising in my hands.</p> <p>I started working hard on this contest about two weeks ago. For a long time I was out of medal range. Then, things started to click around one week ago and I managed to quickly move into solid silver range within a few days.</p> <p>There was not much time left and models took a long time to run. Since there was very little danger of overfitting to LB from what I observed on OOF predictions, I chose not to waste resources on creating OOF predictions and simply blended old models with newer versions as they came in, using LB as the only guide. The current standings and scores, #57 and -2.21819 (Private) and #58 and -2.22542 (Public), prove that this was a solid strategic choice.</p> <p>The three kernels below (with minor tweaks) contributed the most to the final blend: <a href="https://www.kaggle.com/toshik/schnet-starter-kit">SchNet Starter Kit</a> by <a href="/toshik">@toshik</a>, who barely missed Gold with a -2.8 solo score! Amazing! <a href="https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481">Distance - is all you need. LB -1.481</a> by <a href="/criskiev">@criskiev</a>, whose team got Silver! <a href="https://www.kaggle.com/xwxw2929/keras-neural-net-and-distance-features">Keras Neural Net and Distance Features</a> by <a href="/xwxw2929">@xwxw2929</a>, who got solo Bronze! Basically, most credit for my 🥈 goes to the creators of these great kernels. I am glad they all did great in the contest. Congrats!</p> <p>At first the core of my blend was Sergii’s kernel, then Aicanghai_Smile’s Keras version started to weigh in more, but, in the last 5 days or so, I realized that <strong>the biggest hidden gem is actually SchNet.</strong> Wow! In the final blend it has by far the most weight.</p> <p>For SchNet, my main modifications were training separate models for each coupling type and adjusting the batch size upward (often I used 16 for mini-batches). Each of the 8 models took 6-8 hrs to train for 60-90 epochs using Kaggle’s GPUs. The best SchNet model scored -1.971, and I feel I barely scratched the surface of what it could really accomplish.</p> <p>99% of the computing was done with Kernels. It is great to have access to such powerful GPUs. My main wish is that GPU kernels would not freeze two days before the deadline. In the end, I ran out of time before I could apply my secret chemical intuition skills : (.
I am guessing that a promising strategy to improve further is to split molecules into finer subgroups based on molecular features such as double bonds, rings, etc., and then train separate SchNet models on these subgroups.</p> <p>Congrats to all! Thanks to the organizers and Kaggle for making this a great contest!</p> <p><strong><em>Update:</em></strong> <a href="https://www.kaggle.com/stocks/schnet-by-j-type-helped-me-win-silver-57">SchNet Kernel by J-Type</a></p>
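<p>The mechanics of such an LB-guided blend are simple enough to sketch. The weights and file names below are purely illustrative assumptions (the write-up only says SchNet ended up with by far the most weight); the submission format with <code>id</code> and <code>scalar_coupling_constant</code> columns is the competition's.</p>
<pre><code>
# Hypothetical weighted blend of submission files; weights and names are placeholders.
import pandas as pd

subs = {
    "schnet_per_type.csv": 0.6,        # per-J-type SchNet models (largest weight)
    "distance_features_lgbm.csv": 0.25,
    "keras_nn_distance.csv": 0.15,
}

blend = None
for path, w in subs.items():
    s = pd.read_csv(path).sort_values("id").reset_index(drop=True)
    if blend is None:
        blend = s[["id"]].copy()
        blend["scalar_coupling_constant"] = 0.0
    blend["scalar_coupling_constant"] += w * s["scalar_coupling_constant"]

blend.to_csv("blend.csv", index=False)
</code></pre>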
