Title of Competition: string, 343 distinct values
Title of Writeup: string, 6 to 139 characters
User: string, 1 distinct value
Writeup: string, 419 to 80.4k characters
Bengali.AI Handwritten Grapheme Classification
21st Solution using only Kaggle Kernels
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was actually assuming that all the top teams were doing the same postprocessing :)</p> <p>I would like to thank <a href="/seesee">@seesee</a> a lot, because he was very generous to share his notebooks: <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/134161">https://www.kaggle.com/c/bengaliai-cv19/discussion/134161</a> I was not going to join this competition because I have no GPU, but in the last week I decided to join after seeing these nice kernels. I could only use 2x30 TPU hours for this competition.</p> <p>The baseline was scoring 0.9708 on the public LB. Here is what I have done on top of that: * I was aware that a <strong>random split</strong> for train and validation does not represent the split for the real test set, but I was lazy to change it. I just kept it in mind. * Every class contributes to the metric equally. Therefore, I <strong>multiplied the predictions by inverse class frequencies</strong> before taking the maximum. This gave a 0.007 boost on the public LB (around a 0.003 boost on the validation set). * I changed the backbone model from <strong>EfficientNet B3 to B4</strong>. This gave a 0.002 boost. * I changed the augmentation from mixup to <strong>cutmix</strong> using the implementation here: <a href="https://www.kaggle.com/cdeotte/cutmix-and-mixup-on-gpu-tpu">https://www.kaggle.com/cdeotte/cutmix-and-mixup-on-gpu-tpu</a> I only updated it so that it does the cuts as rectangles rather than squares (and fixed a small bug, I believe). Thanks <a href="/cdeotte">@cdeotte</a> for this nice kernel. Cutmix did not improve my LB but improved the local validation. * I changed the learning scheme. <strong>Decreasing learning rates gradually</strong> gave me a 0.007 boost. I could have made more use of it, but the TPU run-time limit was 3 hours. If I had a higher limit, I could have ended up in the gold zone. Or maybe the opposite: having an underfit model gave me a shake-up. * I ran the model <strong>on the whole data</strong>, rather than the 80% train split. This gave a 0.001 boost. * I <strong>blended the last two runs</strong> with different seeds. This gave me a 0.001 boost. * Considering that my split was not representative and that multiplying by inverse class frequencies helped more than expected, I decided to exploit this a bit more with <code>weight = np.power(weight, 1.2)</code>. This improved the public LB by 0.0006 but seems to improve private even more, probably because there are more <strong>unseen graphemes</strong> in the private set.</p> <p>So it was possible to get 21st place using only Kaggle Kernels in a week. Reading discussions and kernels carefully is enough for getting around this position. For the top 10, usually more creativity is needed.</p> <p>Thanks. Indeed. But I was lucky that I started from a great baseline by <a href="/seesee">@seesee</a> </p> <p>Inverse class frequencies</p>
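A minimal NumPy sketch of the inverse-class-frequency trick described in the write-up above; the array names are illustrative, and the exponent mirrors the `weight = np.power(weight, 1.2)` tweak mentioned there.

```python
import numpy as np

def reweight_argmax(probs, train_labels, power=1.2):
    """Scale class probabilities by inverse train frequency before argmax.

    probs: (N, C) array of predicted probabilities for one head.
    train_labels: 1-D array of training labels for the same head.
    """
    counts = np.bincount(train_labels, minlength=probs.shape[1])
    weight = 1.0 / np.maximum(counts, 1)      # inverse class frequency
    weight = np.power(weight, power)          # exaggerate the correction slightly
    weight = weight / weight.sum()            # overall scale does not change the argmax
    return np.argmax(probs * weight, axis=1)

# Usage (hypothetical arrays):
# root_pred = reweight_argmax(root_probs, train_df["grapheme_root"].values)
```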
Bengali.AI Handwritten Grapheme Classification
22nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, I have updated those</p> <p>Interesting..</p> <p>I almost gave up hope, because no matter what I tried the accuracy was not improving on the public LB. Regardless, the lesson here is to deeply trust your local CV, and there is so much more to learn, since the accuracy gap between the top places (1st, 2nd, 3rd, 4th) and the rest is significant.</p> <p>Solution outline - Start with transfer learning, Densenet201, one-cycle learning - Unfreeze layers and re-train (included custom learning rates for individual layers) - 3 separate model approach - Albumentations image augmentation - Local CV</p> <p>Other details - Training epochs - 5-fold CV with a callback, each fold trained for 20 to 32 epochs - Pre-processing - converted the images to size 256x256 - Image augmentation - basic augmentations such as VerticalFlip=False, BrightnessContrast, RandomGamma and fastai's jitter at an image size of 256x256 - Error metric - default fastai error metric for each class - No post-processing except averaging the 5-fold probabilities</p>
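The write-up above describes a fastai-style transfer-learning loop (Densenet201 backbone, one-cycle training, then unfreezing with per-layer learning rates). Below is a rough sketch under the assumption that fastai v1 was used; `data` is a placeholder for an ImageDataBunch of the 256x256 grapheme images, and the epoch counts and learning rates are illustrative, not the author's values.

```python
from fastai.vision import *  # fastai v1-style API (an assumption about the setup)

# `data` is assumed to be an ImageDataBunch built from the 256x256 images
learn = cnn_learner(data, models.densenet201, metrics=error_rate)

# Stage 1: train only the randomly initialised head with the one-cycle policy
learn.fit_one_cycle(5)

# Stage 2: unfreeze the backbone and re-train with discriminative
# (per-layer-group) learning rates, as described in the write-up
learn.unfreeze()
learn.fit_one_cycle(20, max_lr=slice(1e-5, 1e-3))
```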
Bengali.AI Handwritten Grapheme Classification
24th place solution - Post-processing (+0.013 Private LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the organizers for holding a very fun competition. Thank you to all kagglers! Through the discussion, I learned a lot.</p> <p>My solution used a simple ensemble model and post-processing to maximize macro recall. The post-processing adds a bias to the logits output by the model. Without the post-processing I would be out of the medal zone 😭</p> <p>Here is an overview of the solution:</p> <h2>Inference Label</h2> <p>First, the DL model calculates the logits. As a post-process, an optimal bias is added to the logits. The predicted labels are then inferred by taking the argmax over the biased logits.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2117884%2Fdb0652c31dc1ea8a9abbb306f3ba6bd2%2Finference.png?generation=1584433357383197&amp;alt=media" alt=""></p> <h2>Model</h2> <p>The model is an ensemble of five models. All training data was used for learning. Due to my limited time and computational resources, I've done very little cross-validation.</p> <ul> <li>model 1, 2, 3, 4 <br> <ul><li>Seresnext50_32x4d (pretrained on ImageNet) </li> <li>image size 128x128 </li> <li>Three linear heads (outputs are grapheme root logits (168), vowel diacritic logits (168), consonant diacritic logits (168)) </li> <li>Mish activation, DropBlock</li> <li>Cross Entropy Loss </li> <li>AdaBound </li> <li>The following parameters are changed for each model: </li> <li>Manifold mixup (alpha, layer), ShiftScaleRotate, Cutout </li></ul></li> <li>model 5 <ul><li>Seresnext50_32x4d (pretrained on ImageNet) </li> <li>image size 137x236 </li> <li>Three linear heads (outputs are grapheme root logits (168), vowel diacritic logits (168), consonant diacritic logits (168)) </li> <li>Mish activation, DropBlock</li> <li>Cross Entropy Loss </li> <li>AdaBound </li> <li>Manifold mixup (alpha, layer), ShiftScaleRotate, Cutout </li></ul></li> </ul> <h2>Calculation of post-processing bias</h2> <p>Since the model is trained with Cross Entropy Loss, it generally does not maximize macro recall. So I considered using a bias added to the logits to optimize macro recall. The optimal bias was calculated using a real-coded genetic algorithm as follows: </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2117884%2F1b04e39fcfe7d6862464b81da5bd0878%2Fsearch_optimal_bias.png?generation=1584434262345894&amp;alt=media" alt=""></p> <ul> <li>Calculate the logits for all training data. </li> <li>Calculate the bias that maximizes macro recall with a real-coded genetic algorithm. </li> <li>Calculate the logits for all training data with data augmentation. </li> <li>Calculate the bias that maximizes macro recall with a real-coded genetic algorithm. </li> <li>The above biases are averaged to obtain the final bias.</li> </ul> <p>When predicting on the test data, the bias calculated from the training data as above is used.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2117884%2Fb8874f1bef7f5f1e5a0125ab21d3d7d0%2Foptimal_bias.png?generation=1584434294711062&amp;alt=media" alt=""></p> <p>There is not much good English literature on real-coded genetic algorithms. 
I think the following is helpful for implementation: <a href="https://github.com/statsu1990/Real-coded-genetic-algorithm">https://github.com/statsu1990/Real-coded-genetic-algorithm</a></p> <p>CMA-ES is similar to a real-coded genetic algorithm, and there is a lot of English literature on it. Therefore, you may want to try CMA-ES. </p> <h2>Score (Public LB / Private LB)</h2> <p>Single model <br> - model1 (w/o bias): 0.9689 / 0.9285 <br> - model2 (w/o bias): 0.9680 / 0.9270 <br> - model3 (w/o bias): 0.9691 / 0.9317 <br> - model4 (w/o bias): 0.9681 / 0.9243 <br> - model5 (w/o bias): 0.9705 / 0.9290 </p> <p>Ensemble model <br> - ensemble 1~5 (TTA, w/o bias): 0.9712 / 0.9309 <br> - ensemble 1~5 (TTA, with bias): 0.9744 / 0.9435</p> <p>That's all. Thank you for reading!</p>
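The bias search above used a real-coded genetic algorithm; since the author also recommends CMA-ES as a close, better-documented alternative, here is a small sketch with the `cma` package (my substitution, not the author's code). `train_logits` and `train_labels` are hypothetical arrays.

```python
import numpy as np
import cma                                # assumption: the `cma` package (pip install cma)
from sklearn.metrics import recall_score

def find_bias(logits, labels, sigma0=0.5):
    """Search for a per-class bias added to the logits so that argmax maximises
    macro recall, in the spirit of the write-up's post-processing."""
    def neg_macro_recall(bias):
        preds = np.argmax(logits + np.asarray(bias), axis=1)
        return -recall_score(labels, preds, average="macro")

    x0 = np.zeros(logits.shape[1])
    best_bias, _ = cma.fmin2(neg_macro_recall, x0, sigma0, options={"maxiter": 300})
    return best_bias

# bias = find_bias(train_logits, train_labels)       # hypothetical arrays
# test_pred = np.argmax(test_logits + bias, axis=1)
```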
Bengali.AI Handwritten Grapheme Classification
257th Place Solution and Observations (public LB 55)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="/christofhenkel">@christofhenkel</a> Thanks for pointing that out! I saw this in many other posts too, and many of them referred to the R,C,V LB script. However, I still don't understand how that helps investigating the discrepancy. Could you explain that? Thank you very much!</p> <p><a href="/corochann">@corochann</a> Thank you for writing the kernel! It's really helpful!</p> <p>Congratulations to all the winners! I was definitely shocked by the shake-up and by losing my first solo silver. I share my solution here as a comparison to the winning solutions, and I look forward to hearing from you about what might have gone wrong.</p> <p>I built my solution based on <a href="/corochann">@corochann</a>'s kernel (<a href="https://www.kaggle.com/corochann/bengali-seresnext-prediction-with-pytorch">https://www.kaggle.com/corochann/bengali-seresnext-prediction-with-pytorch</a>).</p> <p>I first experimented with seresnext50 and some basic augmentations, but the public LB score wandered under 0.967x. Then I stopped cropping images and things brightened up.</p> <p>Next, I experimented with different augmentation combinations, found cutmix alone to work best, and stuck with it during the model testing process.</p> <p>Afterwards, I tested models including seresnext50, seresnext101, densenet121, and res2net using 256x256 images.</p> <p>My final submissions consist of weighted ensembles of seresnext50 and res2net. The highest public LB score was 0.9884, but with a private score of only 0.9290. </p> <p>I used an 80/20 split for all training and never used all the data for a single model. I'm not sure whether that was a problem. In the 0.9884 ensemble, I used 2 splits of res2net and 1 split of seresnext. I used stepLR with milestones of 75 and 90.</p> <p>Interestingly, I found that my highest private LB score was produced by densenet121. The ensemble of 3 different epochs of densenet121 (same split) resulted in a public LB of 0.9799 but a private LB of 0.9381. I also saw that efficientnet worked poorly on the public LB but well on the private LB in other people's solutions. Not sure if these simpler models would have better performance on the private LB. I also observed that densenet has a smaller CV/LB gap of around 0.5%, while seresnext and res2net all have CV/LB gaps of around 1.3%.</p> <p>I also found that local CV did not correlate with either the private LB or the public LB in my case. The densenet121 model never got its local CV above 0.99, while seresnext and res2net both achieved 0.995-6. So I guess "trusting your CV" is not a valid strategy for this competition.</p> <p>All in all, an unexpected competition experience. Definitely a bit sad losing a silver medal. I will try my best to get good scores in the remaining competitions before summer. 
</p> <p>Please comment below if you see any potential problem and upvote if you find this post interesting.</p> <p>Happy kaggling.</p> <p>Below is my full training/testing record:</p> <p>'Se_resnext50_32x4d_kernel_baseline_newlr' mixup/cutmix Cv: 0.9868 Epoch 102 LB: 0.9646 diff : 0.0222</p> <p>Se_resnext50_32x4d_kernel_baseline_augmentation_randomseed666_8020split Epoch 35 Cv 0.9775 Lb 0.9641 Diff = 0.0134</p> <p>Epoch 44 Cv 0.978 Lb 0.9646 Dff: 0.0134</p> <p>Epoch 84 Cv 0.9824 Lb 0.9654 Diff = 0.017</p> <p>Milestone lr scheduler</p> <p>epoch 22 Cv 0.9783 Lb 0.9665 Diff 0.0118</p> <p>Epoch 30 Cv 0.98 Lb 0.9674 Diff 0.0126</p> <p>Epoch 42 Cv 0.9804 Lb 0.9678 Diff 0.0126</p> <p>Epoch 53 Cv 0.9805 Lb 0.9676 Diff 0.0129</p> <p>Milestone lr, ohem loss, stratified</p> <p>Epoch 17 Cv 0.9717 Lb 0.9527 Diff 0.019</p> <p>Epoch 29 Cv 0.9739 Lb 0.9588 Diff 0.015</p> <p>Epoch 39 Cv 0.9751 Lb 0.9584 Diff 0.0167</p> <p>Multilabel stratified - cross entropy loss</p> <p>Epoch 87 Cv 0.9869 Lb 0.9679 Diff 0.019</p> <p>Epoch 135 Cv 0.9886 Lb 0.9679 Diff 0.0207</p> <p>Noconv0</p> <p>Epoch 60 Cv 0.9833 Lb 0.9667 Diff 0.0166</p> <p>Epoch 76 Cv 0.9852 Lb 0.9670 Dff 0.0182</p> <p>Cnn - tail - noaffine Epoch 92 Cv 0.9892 Lb 0.9668 Diff 0.0224</p> <p>Epoch 77 Cv 0.9872 Lb 0.9665 Diff 0.0187</p> <p>Epoch 62 Cv 0.9849 Lb 0.9621 Diff 0.0228</p> <p>Tail + affine Epoch 96 Cv 0.9866 Lb 0.9668 Diff 0.0198</p> <p>Epoch 86 Cv 0.9858 Lb 0.9664 Diff 0.0194</p> <p>Epoch 120 Cv 0.9876 LB 0.9671 Diff 0.0205</p> <p>Size 256 Epoch 134 Cv 0.9896 Lb 0.9694 Diff 0.0202</p> <p>Epoch 126 Cv 0.9896 Lb 0.9689 Diff0.0207</p> <p>Epoch 114 Cv 0.989 Lb 0.9690 Diff 0.020</p> <p>Epoch 107 Cv 0.9886 Lb 0.9688 Diff 0.0198</p> <p>Epoch 98 Cv 0.9884 Lb 0.9691 Diff 0.0193</p> <p>Epoch 48 Cv 0.981 Lb 0.9672 Diff 0.0138</p> <p>Ensemble Epoch 134, 114, 98 Lb 0.9695</p> <p>Resnext 101 size 256 Epoch 81 crop Cv 0.9888 Lb 0.9680 Diff 0.0208</p> <p>Epoch 81 no_crop Cv 0.9888 Lb 0.9651 Diff 0.0237</p> <p>Epoch 78 crop Cv 0.9881 Lb 0.9688 Diff 0.0193</p> <p>Epoch 65 crop Cv 0.9851 Lb 0.9659 Diff 0.0192</p> <p>Resnext50 size 256 no crop</p> <p>Epoch 109 Cv 0.9924 Lb 0.9766 Diff 0.0158</p> <p>Epoch 98 Cv 0.9924 Lb 0.9762 Diff 0.0162</p> <p>Epoch 84 Cv 0.9911 Lb 0.9759 Diff 0.0152</p> <p>Epoch 79 Cv 0.9899 Lb 0.9750 Diff 0.0149</p> <p>Ensemble epoch 84, 98, 109 Lb 0.9767</p> <p>Epoch 164 Cv 0.9937 Lb 0.9769 Diff 0.0168</p> <p>Epoch 155 Cv 0.9934 Lb 0.9765 Diff 0.0169</p> <p>Epoch 143 Cv 0.9928 Lb 0.9756 Diff 0.0172</p> <p>Epoch 130 Cv 0.9928 Lb 0.9767 Diff 0.161</p> <p>ensemble of epoch 164, 130, 109 Lb 0.9770</p> <p>Ensemble (epoch 164) 0.9779</p> <p>Epoch ~180 Cv 0.9940 Lb 0.9767 Diff 0.0173</p> <p>Ensemble of epoch 164, 130, 109 Lb 0.9779</p> <p>Seresnext50 autoaugment Epoch 93 Cv 0.9897 Lb 0.9734 Diff 0.0163</p> <p>Epoch 85 Cv 0.9892 Lb 0.9742 Diff 0.015</p> <p>Epoch 77 Cv 0.9872 Lb 0.9732 Diff 0.014</p> <p>Epoch 65 Cv 0.9852 Lb 0.9709 Diff 0.0143</p> <p>Ensemble: epoch 77 auto augment, epoch 164 no crop: 0.9784</p> <p>Epoch 139 Cv 0.9908 Lb 0.9741 Diff 0.0167</p> <p>Epoch 122 Cv 0.9908 Lb 0.9737 Diff 0.0171</p> <p>Epoch 108 Cv 0.9903 Lb 0.9735 Diff 0.0168</p> <p>Densenet121 no cutmix</p> <p>Epoch 119 Cv 0.9862 Lb 0.9799 Diff 0.0053</p> <p>Epoch 92 Cv 0.9861 Lb 0.9796 Diff 0.0065</p> <p>Epoch 82 Cv 0.9859 Lb 0.9789 Diff 0.007</p> <p>Ensemble of epoch 119, 92, 82 Lb 0.9799</p> <p>Seresnext 50 only cutmix</p> <p>Epoch 65 Cv 0.9947 Lb 0.9831 Diff 0.0116</p> <p>Epoch 48 Cv 0.993 Lb 0.981 Diff 0.012</p> <p>Epoch 120 Cv 0.9963 Lb 0.9849 Diff 0.0114</p> <p>Epoch 
118 Cv 0.9963 Lb 0.9849 Diff 0.0114</p> <p>Epoch 100 Cv 0.996 Lb 0.9847 Diff 0.0113</p> <p>seresnext 50 only cutmix Fold 0</p> <p>Epoch 112 Cv 0.9996 Lb 0.9862 Diff 0.0134</p> <p>Epoch 91 Cv 0.9955 Lb 0.9854 Diff 0.0101</p> <p>Epoch 81 Cv 0.9953 Lb 0.9851 Diff 0.0102</p> <p>Ensemble of epoch 112, 91, shuffle120 Lb 0.9869</p> <p>Epoch 118 Cv 0.9996 Lb 0.9867 Diff 0.0129</p> <p>Ensemble of epoch 112, 91, shuffle120, epoch 118 Lb 0.9869</p> <p>Ensemble of shuffle 120 epoch118 weighted 0.4 0.6 Lb 0.9873</p> <p>Weighted0.3, 0.7 Lb 0.9871</p> <p>Ensemble epoch 112, epoch 118 weighted 0.4 0.6 Lb 0.9865</p> <p>Res2net fold 0</p> <p>Epoch 77 Cv 0.9969 Lb 0.9858 Diff 0.0111</p> <p>Epoch 67 Cv 0.9954 Lb 0.9853 Diff 0.0101</p> <p>Epoch 53 Cv 0.9943 Lb 0.9829 Diff 0.0114</p> <p>Ensemble weighted 0.4 0.6 r2n lb0.9858 seresnext_fold0 lb0.0.9867 Lb 0.9880</p> <p>res2net_fold1_new_fromepoch66_epochrange80(cv0.9996e85).pt</p> <p>Fold 1 Epoch 85 Cv 0.9996 Diff 0.9865</p> <p>Epoch 70 Cv 0.9991 Diff 0.9858</p> <p>ensemble weighted 0.3 0.35 0.35 r2n lb0.9858 fold0 r2n lb0.9865 fold1 seresnext lb0.0.9867 0.9881</p> <p>0.3 0.3 0.4 0.9884</p>
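A tiny sketch of the weighted ensembling used in the record above (for example the 0.4/0.6 res2net + seresnext blends); the array names are hypothetical.

```python
import numpy as np

def weighted_ensemble(prob_a, prob_b, w_a=0.4, w_b=0.6):
    """Weighted average of two models' probability outputs, e.g. res2net and
    seresnext50 with the 0.4/0.6 weights mentioned in the record above."""
    return w_a * prob_a + w_b * prob_b

# final_pred = np.argmax(weighted_ensemble(res2net_probs, seresnext_probs), axis=1)
```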
Bengali.AI Handwritten Grapheme Classification
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>No theory behind it, but it was natural to try different ways to go from grapheme level to component level and this is what worked best.</p> <p>Thanks, happy if it helps :)</p> <p>Updated the solution post!</p> <p>It is a cherry-picked image, but in general they look way more natural. </p> <p>30 points means for example 9850 --&gt; 9880</p> <p>Yep, we had a local CV where holdout contained unseen graphemes which is where we determined the weights. Interestingly, scores on LB really matched these weights, we slightly tuned them on LB also though.</p> <p>Thanks <a href="/returnofsputnik">@returnofsputnik</a> . What we did is simply the following: rank the probabilities for C=3 (so 3rd column of consonant prediction array), and hard-set k extra cases with C=3 if they are not already predicted as C=3. We started with 155 (as this is exactly the number of samples for one grapheme) and multiplied that with a factor. Hope that's clear. So something like that:</p> <p><code> idx = [z for z in np.argsort(all_preds_consonant_proba[:,3]) if all_preds_consonant[z] != 3][-155:] all_preds_consonant[idx] = 3 </code></p> <p>No, never had a timeout. One model took 15min approx.</p> <p>I would call 7th place an expert</p> <p>So if you predict on all ~1300 graphemes as target, you can decode them to the components and then we just average the probabilities of all graphemes with let's say C=2 to get the probability for C=2. This helps to generalize way better to unseen graphemes, and actually also helps seen graphemes. So win-win ... Edit: too slow</p> <p>I dont know, it is honestly just a guess what top solution did. Hope we will learn.</p> <p>Thanks a lot to the host and Kaggle for this very interesting competition. We learned a lot and specifically the issue of predicting unseen graphemes in test data was very intriguing and has led to some very elegant and nice solutions as imminent from all the other solution posts. As always, this has been an incredible Zoo team effort and always amazing to collaborate with <a href="/dott1718">@dott1718</a>.</p> <p>We will try to tell the story of progress a bit, without only explaining the end solution as we believe that the whole process of a competition is important to understand how decisions have been made.</p> <h3>Early efforts</h3> <p>We started the competition by fitting 3-head models for R,C,V and quite quickly got reasonable results landing us somewhere in range of top 40-50 on LB. As always, we tried to understand how test data can be different, and why there is a gap to local CV. It became quite quickly clear to us that it is mostly due to unseen graphemes in test. We tried to assess how such a gap can exist, and checked how R, C and V scored individually on LB (you can figure this out quite easily by predicting all 0s for the rest, which is baseline sub). We saw that most drop comes from R and C, which is impacted by the huge effect of rare case misclassifications. So we tried something funny, which is adding additional C=3 and C=6 predictions based on the next highest indices. 
Just adding 310 C=3 and 310 C=6 (which reflects one extra grapheme for each) boosted us 30-40 points on LB. So we figured that this will be key in the end, and we need to generate a solution that generalizes well to unseen ones. We did also not want to rely on hardcoding too much, so tried to find a better way. These early subs with no blending at all, but this simple hardcoding, would btw still be rank 10-20 now.</p> <h3>Grapheme models and fitting</h3> <p>Seeing reported CV scores on forums, which were all quite higher than ours, made us think that top teams are doing something different than us, which apparently was different targets. So our, first trick was to switch from predicting R,C,V to predicting individual graphemes. This also had benefits to us for not needing to care about proper loss weights of the 3 heads, different learning weights for the heads, how to deal with soft labels properly, and we could focus fully on NN fitting and augmentations. After we switched, mixing augmentations shined immediately - we replaced cutout with cutmix and then with fmix, which made it to the final models we had. Fmix (<a href="https://arxiv.org/abs/2002.12047">https://arxiv.org/abs/2002.12047</a>) worked clearly better for us than cutmix, and also the resulting images looked way more natural to us due to the way the cut areas are picked. This is an example of a mixed image:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F37166%2F18ef2d72114caf6d37e0fb7c90053969%2FSelection_007.png?generation=1584456239523632&amp;alt=media" alt=""></p> <p>In the end, we mixed either 2 images or 3 images with 50% probability and used beta=4 to mostly do equal mixing. </p> <p>Having this grapheme model, the tricky part was to apply the model to test. At first, we tried applying the model as it is, so predicting only the known graphemes by just converting the predicted grapheme to its R,C,V components. LB score was better than for the model predicting R,C,V separately, which tells us there are not that many new unseen graphemes in public LB. To make the model work better on unseen graphemes, we applied the second trick - a post-processing routine. Surprisingly, it also improved the metric on the known graphemes as well.</p> <p>The model outputs 1295 probabilities of each distinct train grapheme. For each component, e.g. C, we calculate the scores of each C=0,..,6 by averaging probabilities of the graphemes having this component. So, for C=3 it is an average of only 4 probabilities as there are only 4 graphemes with consonant diacritic of 3 in train data. For C=0 it is an average of hundreds of probabilities as it is the most common C value. The post-processing ends with picking C value with the highest “score”. The logic behind it was to treat each C value equally regardless of its frequency similar to the target metric, not to limit the model to the set of train graphemes, and pick more likely component values for the graphemes with non-confident predictions. This routine immediately gave us another 50 extra points on LB.</p> <h3>Improving unseen graphemes</h3> <p>The third trick was about improving the predictions of unseen graphemes without hurting predictions of known graphemes too much. As most of you probably know and as elaborated earlier, the gap between cross-validation and public LB was coming from the drop of accuracy in the C component, caused mainly by C=3 and to lesser extent by C=6. 
It is easy to explain, as these are the rarest classes in train, and most probably public LB has at least one new grapheme with C=3 and C=6. The second largest drop was coming from the R component, while V recall was quite close between cross-validation and public LB.</p> <p>The main problem with both 3-head models and specifically grapheme models is that they overfit to the seen graphemes as they are heavily memorizing them. So to close the gap a bit, we needed models which predict R and C better on unseen graphemes, which led us to fitting individual models for these 2 components. Specifically for C it was very hard to generalize and what helped a bit was to randomly add generated graphemes based on the code we found in this kernel (kudos to the author!) <a href="https://www.kaggle.com/kaushal2896/bengali-graphemes-starter-eda-multi-output-cnn">https://www.kaggle.com/kaushal2896/bengali-graphemes-starter-eda-multi-output-cnn</a>. But, as we learnt after the end of the competition, there was way more room to improve the models this way than we actually did.</p> <h3>Blending</h3> <p>Trick number four was about how to blend the models together. To test the blending approaches, we recreated the hold-out sample by removing a few graphemes completely from the training part, also a couple of graphemes with C=3 and 6. We ended up doing different blending per component. But first, the models we had by the end of the competition are: - 3 grapheme models, fitted on the whole train. The modes use adam and sgd scheduling, fitted for 80 - 130 epochs and all use fmix mixing 2-3 images. - 1 model with 3-heads for all components. Cutout instead of fmix, sgd and 40 epochs, similar to all the following models - 1 model for R - 2 models for C</p> <p>For R the blend is: post-processed average of grapheme models + 0.4 * average of 3-heads model and R model</p> <p>For V the blend is: post-processed average of grapheme models + 0.2 * 3-heads model scaled to have equal means</p> <p>For C the blend is: post-processed average of grapheme models + 15 * average of C models * C class weights</p> <p>C class weights were introduced to fix the imbalanced frequencies of C, especially C=3 and 6. They were set to inverse frequencies of the class in train and normalized.</p> <p>6 out of 7 models are SE Resnext50 and one is SE Resnext101. Image size was either original or 224x224. Adam scheduling with decay worked well, but SGD with scaled down every X epochs was even better. Fmix was done on the entire sample, meaning that for each image there were 2 or 3 (random with prob=0.5) random images picked from the whole training sample and mixed. </p> <p>Happy to try to answer any questions!</p> <p>Which is fair but not ideal. Fitting one fold still took us 12-24hours depending on model and tests.</p> <p>Which is what <a href="/dott1718">@dott1718</a> pointed out and which is what makes CV so hard and sometimes a bit frustrating. It is very difficult to test ideas properly. In best case we would always run 5-fold with multiple bags per fold to also see effects of bagging and so on. Completely impossible here.</p> <p>It depends on how well your CV correlates with LB and how test data looks like. In NFL we also didnt have to sub for a month.</p> <p>Thanks and congrats on your great result! </p> <p>I would not call us tabular engineers, rather allrounders. 
We actually have better kaggle results on non-tabular data competitions :)</p> <p>I cannot believe that you can test experiments properly in this competition within 2 hours and full model fitting then takes days. How does your CV look like and how is it different from full model fitting? It is also dependent on HW available of course. </p> <p>What we usually do is have a CV, and then fit models on full data, which is 20% longer. Also, in this competition you had a perfectly fine public LB dataset to test how it behaves on unseen data. Why didn't you submit for a whole month? I was really curious about that.</p>
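A minimal sketch of the grapheme-to-component post-processing described in the 2nd place write-up above: average the 1295 grapheme probabilities over all graphemes that share a given component value, then take the argmax. The lookup array is an assumption about how the grapheme metadata is stored.

```python
import numpy as np

def component_scores(grapheme_probs, grapheme_to_component):
    """Score each component value as the average probability of the training
    graphemes containing it, then pick the highest-scoring value.

    grapheme_probs: (N, 1295) array of predicted grapheme probabilities.
    grapheme_to_component: (1295,) array mapping each grapheme to one component
                           label (e.g. its consonant diacritic).
    """
    n_values = int(grapheme_to_component.max()) + 1
    scores = np.zeros((grapheme_probs.shape[0], n_values))
    for v in range(n_values):
        cols = np.where(grapheme_to_component == v)[0]
        scores[:, v] = grapheme_probs[:, cols].mean(axis=1)  # average, not sum
    return np.argmax(scores, axis=1)

# consonant_pred = component_scores(probs, grapheme_meta["consonant_diacritic"].values)
# (grapheme_meta is a hypothetical per-grapheme lookup table)
```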
Bengali.AI Handwritten Grapheme Classification
38th Place - ZoneMix - A domain specific augmentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>A really big thanks to all the amazing participants for sharing their ideas. @haqishen @hengck23 @bibek777 @machinelp @quandapro and so many other awesome users without whose suggestions I would have been completely lost. Also big thanks to the Bengali.AI organizers for this competition about my native language; the lessons learned here will also be applicable to other languages of the Indian subcontinent like Hindi, Odia, Telugu, etc. </p> <p>I'll be honest, I joined the competition near the end of January, and for most of the time I was trying to improve the score just with simple augmentation/model/hyperparameter tuning. Having only Google Colab to train on doesn't help with that either. Had I continued on that track I surely wouldn't have been in any medal zone. Thanks a lot to my teammate Ram for pointing out the organizers' important discussions about unseen graphemes, and subsequently Qishen Ha's post on validating on unseen graphemes. </p> <p>That left only 10 days after realizing how poorly the model would do on unseen graphemes. In particular, classes with few combinations - which is more than half of the grapheme roots, plus consonant 3 and 6 - generalize the worst. (Check the combination counting kernel <a href="https://www.kaggle.com/dipamc77/bengali-dataset-grapheme-combination-counts?scriptVersionId=30348992">here</a>)</p> <p>I probed for the scores of some of these classes, similar to the method described <a href="https://www.kaggle.com/philippsinger/checking-r-c-v-scores">here</a>. From the gaps, I was sure that a huge shakeup was coming.</p> <p>After reading the top solutions I'm inspired to pursue ML-based methods to improve generalization rather than a rule-based method like the one I describe below; I would still like to share it.</p> <h1>ZoneMix - Splitting graphemes by zones to apply cutmix</h1> <p>As a native Bengali speaker, I can say that graphemes can generally be split into particular zones based on the combinations. Below is a diagram of the zones.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2Ffc1a8235909e0be649a656d3bece6637%2FZones.png?generation=1584474729279357&amp;alt=media" alt=""></p> <p>Some extra rules have to be made for certain combinations, which was kind of a pain to write in code and made me question why I was even working on such a tedious idea 😭. I'll surely share the code; right now it's a huge mess.</p> <p>Each combination takes up some of these zones, and cutting by the zone coordinates, based on the character's bounding box, can roughly give the components. 
Here is an example</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2Fdf89e7805d9fb22cb46320f15a51cf63%2Fgoodexample.PNG?generation=1584475233238792&amp;alt=media" alt=""></p> <p>Of course - Real-world data doesn't always follow rules and failure cases are common</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2F574f7c8d3cd2f39a37347ac672e5c4ab%2Fbadexample.PNG?generation=1584475938169427&amp;alt=media" alt=""></p> <p>Once the zone data is extracted, they can be placed on the other images by resizing based on the new zone coordinates. This can produce very realistic images, but also unrealistic ones.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2F6af3d256dc7ef3fccc04024763b75102%2FGoodMix.PNG?generation=1584476645803135&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2F4e649a79884043537ec5bc181b8e343c%2FBadMix.PNG?generation=1584476950698513&amp;alt=media" alt=""></p> <p>When all the setup was done I had about 3 days left to train with this method. And sadly it didn't work directly, I think its because the mistakes and unrealistic images are too much noise for the model to deal with.</p> <p>I was able to tone down the noise by doing the following tricks - The grapheme root zones are roughly correct most of the time, while the vowel and consonants are way off. I figured that since grapheme root and consonant 3 and 6 are the main classes to tackle, I should focus on those. </p> <p>Using zonemix I trained a model to only predict the grapheme root, this reduced the noisy gradients from the vowel and consonants and improved the recall as well as unseen recall.</p> <p>Consonant 3 and 6 have a particular property that they are basically a combination of Consonant 2 and 4 and Consonant 4 and 5 respectively. Dieter also explains this in his <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136129">solution post</a> and I feel he found a much better way to deal with it. Anyway, so my approach was to use zonemix to place consonant 4 onto images with consonant 2 and 5. Examples below. With these I trained a new model for consonants.</p> <p>I literally trained both models parallely on two colab sessions on the last day and didn't even have time to tune anything, had to manually stop the runs because I had only 3 hours left to the end of the competition. </p> <p><strong>Final solution</strong> - Zonemix models for grapheme roots with less combinations (&lt;5), and consonant 3, 6, along the old trained ensemble for the rest (A pretty bad one with public LB only 0.982 and private LB 0.931). But I confidently picked the zonemix kernel knowing it will surely be good on private set 😄 </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2Ff596e6402b241d8ee5cfe0be5646ce43%2FCapture.PNG?generation=1584474005219015&amp;alt=media" alt=""></p> <p>Maybe with some better tuning and validation it could have probably done much better. </p> <h1>Hyperparameters tuning that worked and didn't work for me</h1> <ol> <li>Wasted weeks on cropped and preprocessed images, didn't work at all. 
I'm still not convinced why.</li> <li>Did lots of experiments on resnet34 with 128x128 (just resize) - This was practical for fast experimentation on Google Colab.</li> <li>Balanced sampling - The metric is macro average, so helped to do balanced sampling I guess.</li> <li>Cutout/Cutmix - Gives better results than mixup.</li> <li>Interestingly Mixup + Gridmask gives much better results on Unseen CV</li> <li>Finally trained SeResNeXt50 with 128x128 which gave improvement on CV (Though same pathetic results on Unseen CV)</li> <li>Well tuned ReduceLRonPlateau converges as much as 1.5x - 2x faster than Cosine Decay. Didn't try other LR schedules.</li> </ol> <h1>Some ideas that I didn't get to try</h1> <ol> <li>CAM FMix - One of the top solution uses CAM Cutmix</li> <li>Correct zone segmentation via noisy student training methods on the noisy zones.</li> </ol> <h1>Final thoughts and important lessons learned</h1> <ol> <li><p>From the only other competition I took part in <a href="https://www.kaggle.com/c/severstal-steel-defect-detection">Severstal: Steel Defect Detection</a> - I had this lesson: If there is a gap in CV and LB, trust your CV more and don't overfit LB.</p></li> <li><p>I can change that lesson now - If there is a gap in CV and LB, try to understand why from the discussions, problem statement and the data. Don't dive into coding new ideas trying to reduce the gap without having a proper understanding of what is causing the gap.</p></li> <li>Try ML-based approaches for solving the problem, after seeing the top solutions I feel like I understand them very well, but it didn't occur to me to use these ideas. Maybe I'm biased to use rule-based ideas and that needs to change.</li> <li>Do lots of EDA from time to time, not just at the beginning of the competition. Seeing the results of trained models changes our intuition about the data.</li> <li>Don't ignore external datasets. Of course, working with them needs time and effort, but with correct usage, external data can work wonders.</li> <li>Lastly - It is important to keep hacks to a minimum, but at the end of the day, Its a competition, do what you need to do to improve your score, like the awesome metric hacking magic.</li> </ol> <p>No, I find the bounding boxes and reverse calculate the zones coordinates based on the bounding box coordinates. When taking from one zone and placing onto the other, there is resizing, but for most cases maintaining aspect ratio of vowels and consonants should not be very important.</p> <p>During the first few weeks, I had tried publicly shared cropping methods that were maintaining aspect ratio. But I realized the bounding boxes were often wrong due to random lines on the sides that didn't go with simple rules. Finally, I wrote my own bounding box finding logic based on contours and that ignores contours on the sides, that was more consistent. Though some images still have small ink spots and bounding boxes do come out wrong for those.</p> <p>This kernel is a very popular one many people were using towards the beginning. <a href="https://www.kaggle.com/iafoss/image-preprocessing-128x128">https://www.kaggle.com/iafoss/image-preprocessing-128x128</a> It finds the bounding box and then pads the cropped image such that the aspect ratio of the cropped image is maintained.</p> <p>Is this what you mean by image ratio?</p> <p>Yes that is true, and frankly I believe it might be even better than CAM cutmix because of the CAM will most likely be wrong for classes with less combinations like I described. 
I too wanted to generate unseen graphemes with this method but it needed some polishing and I didn't get the time.</p> <p>Its an excuse on my part but my basic ensemble on seen graphemes wasn't good enough to begin with and that overshadowed this method I developed. Have to improve my basic DL skills for that.</p> <p>Broadly speaking I mean understanding and how deep learning works and making creative solutions to exploit it. </p> <p>Two examples of this for me were the 1st and 5th place solutions. I understand cyclegan well but didn't think of using it to transfer handwritten to printed font and use a classifier on that. </p> <p>5th place one really felt magically creative to me. Because I too improved my consonant 3 and 6 by using zonemix. But the multilabel refactoring of the problem is so much more natural. </p> <p>Here I tried to improve the data with rule based thinking, I want to be more creative like those solutions and others. </p>
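The write-up above mentions custom contour-based bounding-box logic that ignores stray lines and ink spots near the image borders. This is a rough OpenCV sketch of that idea, not the author's code; the binarization threshold and minimum-area value are assumptions.

```python
import cv2
import numpy as np

def grapheme_bbox(img, min_area=50):
    """Rough contour-based bounding box that skips tiny ink spots and contours
    touching the image border, in the spirit of the cropping logic described above.

    img: 2-D uint8 array with light background and darker ink.
    """
    binary = cv2.threshold(255 - img, 30, 255, cv2.THRESH_BINARY)[1]
    # OpenCV >= 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if cv2.contourArea(c) < min_area:
            continue                               # drop small ink spots
        if x == 0 or y == 0 or x + w == img.shape[1] or y + h == img.shape[0]:
            continue                               # drop contours touching the border
        boxes.append((x, y, x + w, y + h))
    if not boxes:
        return 0, 0, img.shape[1], img.shape[0]    # fall back to the full image
    xs0, ys0, xs1, ys1 = zip(*boxes)
    return min(xs0), min(ys0), max(xs1), max(ys1)
```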
Bengali.AI Handwritten Grapheme Classification
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry, that is grapheme, not data. I already modified it. So we predict whether the grapheme class of the test data is included in the train graphemes or not.</p> <p>Here I write a solution summary; feel free to comment if you have questions.</p> <p>Congrats to everyone on the excellent results.</p> <p>Here is our solution summary.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1620223%2Fb2143c50d9c92271e7b871912dd171d7%2Fbengali_pipeline%20.jpg?generation=1584447993610250&amp;alt=media" alt=""></p> <h1>Solution summary</h1> <h2>Dataset</h2> <ul> <li><strong>pretrain</strong>: triple the identities via hflip and vflip</li> <li>image resolution: 137 x 236</li> </ul> <h2>Model</h2> <p><strong>for seen grapheme and unseen grapheme</strong> * (phalanx) encoder -&gt; gempool -&gt; batch_norm -&gt; fc * (earhian) encoder -&gt; avgpool -&gt; batch_norm -&gt; dropout -&gt; fc</p> <p><strong>arcface</strong> * encoder -&gt; avgpool -&gt; conv1d -&gt; bn * s: 32 (train), 1.0 (test) * m: 0.5</p> <p><strong>stacking</strong> * conv -&gt; relu -&gt; dropout -&gt; fc</p> <h2>Augmentation</h2> <ul> <li>cutmix</li> <li>shift, scale, rotate, shear</li> </ul> <h2>Training</h2> <ul> <li><strong>seen grapheme</strong>: train the model for seen graphemes with the pretrain dataset, then finetune it with the original dataset</li> <li><strong>arcface and unseen grapheme</strong>: train the model pretrained for seen graphemes with the original dataset</li> <li>replace softmax with <a href="https://arxiv.org/abs/1911.10688">pc-softmax</a></li> <li>loss function: negative log likelihood</li> <li>SGD with CosineAnnealing</li> <li>Stochastic Weight Averaging</li> </ul> <h2>Inference</h2> <p><strong>arcface</strong> * use cosine similarity between train and test embedding features * threshold: smallest cosine similarity between train and validation embedding features</p> <ol> <li>use cosine similarity between the train and test features. For the threshold, we use the smallest cosine similarity between the train and validation features.</li> </ol> <p>2., 3. you're right</p>
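A small sketch of the ArcFace-based seen/unseen decision described under "Inference" above: compare each test embedding's best cosine similarity against the train embeddings, with the threshold taken as the smallest such similarity observed for the validation set. The array names are assumptions.

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def is_seen_grapheme(test_emb, train_emb, val_emb):
    """Return a boolean mask: True means the test image is treated as a seen grapheme.

    Threshold = smallest best cosine similarity between validation and train embeddings,
    as stated in the write-up.
    """
    train_n, val_n, test_n = map(l2_normalize, (train_emb, val_emb, test_emb))
    threshold = (val_n @ train_n.T).max(axis=1).min()   # worst-case similarity of seen data
    best_sim = (test_n @ train_n.T).max(axis=1)
    return best_sim >= threshold
```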
Bengali.AI Handwritten Grapheme Classification
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We appreciate the efforts Bengali.AI and Kaggle spent on this competition, and congrats to all the winners. We are very impressed by your interesting findings, novol methods, deep understanding of the problem, the dataset, as well as the evaluation metric.</p> <p>Our solution is relatively straightforward and bears similarities with some of the top solutions.</p> <p>Let's call the 1295 graphemes in the train set ID (in-dictionary), and the unknown graphemes in the test set OOD (out-of-dictionary). The first step was training Arcface models and computing the feature centers of each of the 1295 graphemes. We can then tell if a test image is ID or OOD based on its smallest feature distance to every grapheme centers. In our 4th place submission, we set the classification threshold to 0.15 (cosine distance), which was estimated locally. After this ID/OOD classification, we applied the Arcface models to the ID set only and determined the component classes by the grapheme classes. For the OOD set, we trained another group of 1-head and 3-head component models and applied them to the OOD set only.</p> <p><strong>More of the Arcface models:</strong> network structures: inception_resnet_v2 and seresnext101 input image preprocessing: plain resizing input image size: 320x320 loss: Arcface target: 1295 grapheme classification total training epochs: 90 augmentations: cutout only</p> <p><strong>More of the OOD models:</strong> network structures: inceptions, resnets, densnets, efficientnets ... ... ... input image preprocessing: <a href="https://www.kaggle.com/iafoss/image-preprocessing-128x128">https://www.kaggle.com/iafoss/image-preprocessing-128x128</a> input image size: various sizes from 256x256 to 416x416 loss: CE target: 3-head for vowel_diacritic and consonant_diacritic, 1-head for grapheme_root total training epochs: 10 (more training harmed the OOD performance) augmentations: mixup only</p> <p>Thank you. There was nothing special of our 3-head model at training stage, but during inference, we did not use the predictions of the "grapheme_root" head. We built dedicated classifiers (168-class classification, only 1-head) for grapheme_root predictions. Our ID (Arcface) models predicted grapheme classes (1295-class), and we could directly lookup the three components from train.csv.</p> <p>For local OOD experiments, we split the 60 rarest graphemes out of the 1295 as validation set. For final LB submission, we simply trained the OOD models on all the train data. The 3-head corresponded to grapheme_root, vowel_diacritic, consonant_diacritic</p> <p>We only trained on known graphemes.</p>
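A minimal sketch of the ID/OOD routing described above: compute the cosine distance from each test embedding to the 1295 per-grapheme ArcFace feature centers and compare the smallest distance to the 0.15 threshold. The `centers` array is an assumed precomputed lookup; samples routed to OOD would go to the separate 1-head/3-head component models.

```python
import numpy as np

def route_id_ood(test_emb, centers, threshold=0.15):
    """True -> in-dictionary (ID) branch, False -> out-of-dictionary (OOD) branch.

    test_emb: (N, d) test embeddings from the ArcFace model.
    centers:  (1295, d) per-grapheme feature centers computed on the train set.
    """
    t = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    min_dist = 1.0 - (t @ c.T).max(axis=1)   # cosine distance to the closest center
    return min_dist <= threshold
```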
Bengali.AI Handwritten Grapheme Classification
55th place solution - A frugal approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Kaggle and Bengali.ai for hosting this contest. Congratulations to all the winners and all the participants who kept the discussion alive and encouraging throughout the contest with few special shoutouts to <a href="/haqishen">@haqishen</a>, <a href="/pestipeti">@pestipeti</a>, <a href="/hengck23">@hengck23</a>, <a href="/ildoonet">@ildoonet</a>, <a href="/bibek777">@bibek777</a>, <a href="/machinelp">@machinelp</a> for really interesting discussions inundating with ideas. Special thanks to <a href="/iafoss">@iafoss</a> for his starter notebook and <a href="/drhabib">@drhabib</a> for posting detailed code of previous competition solutions, which I relied upon. </p> <p>This is my first competition medal on Kaggle in 2 years. The public leaderboard standing is from one single model. Due to lack of hardware resources, most of my code was trained on Kaggle Kernels (The reason why I call this a frugal approach ;p). </p> <h2>Preprocessing and Augmentations</h2> <p>I resized the images to <strong>240x240x1</strong> and zero-padded them. Simple augmentations such *as crop_resize, image_wrapping, rotations, and image_lightening* were used. </p> <h2>Validation</h2> <p>I experimented with <strong>5 fold CV</strong> and for submission notebook, I performed a <strong>80-20</strong> split of the dataset. </p> <h2>Backbone and architecture</h2> <p>I performed most of the ideas shared on discussion forums keeping EfficientNet-B1 as the model backbone, which I took from EfficientNet Pytorch, and carried out certain modifications inspired from <a href="/iafoss">@iafoss</a>'s starter kernel. EfficientNet-B3 and above, being computationally expensive for Kaggle kernel, could not be trained properly in the limited kernel runtime of 32000s.</p> <h2>What didn't work</h2> <p>Although MixUp and Cutmix augmentation seemed really promising, they provided no effective improvement for me. They require higher epochs for training which was not at all feasible in my case.</p> <p>Based on some previous contest discussion advice, it was helpful to maintain experiment log files. I naively created the experiment log, but, it turned out to be confusing rather than helping when selecting the final solution for submission. </p> <p>Lack of hardware was a major turndown for me at the beginning of the contest and no improvement in the public lb was discouraging. The time that was wasted without any submission could have been used crucially. I timed every single epoch and then divided 32000s from that in order to train my model for longer and longer time. I used half precision and had to replace <em>Mish activation</em> in the code with <em>ReLU</em> for faster computation (It gave me almost one extra epoch). 
</p> <h3>What I learned during the competition ( note to self )</h3> <ul> <li>Prepare and maintain a proper experiment log file throughout the contest (Any help with this would be appreciated)</li> <li>Discussion forums are really interesting and full of ideas to experiment with but proper experiment documentation is also necessary.</li> <li>Proper selection of solution for submission is extremely important, this helped me through LB shakeup.</li> </ul> <p>Here are my two selected submissions stats:</p> <p><strong>Model backbone: EfficientNet-B1 with mixup</strong> total epochs: 50 Learning rate: from 1e-6 to 2e-2 Val score: 0.9779 Public LB score: 0.9638 Private LB score: 0.9360</p> <p><strong>Model backbone: EfficientNet-B1</strong> total epochs: 18 Learning rate: from 1e-4 to 1e-2 CV score: 0.9717 LB score: 0.9620 Private LB score: 0.9385</p>
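The "frugal" budgeting above (timing one epoch and dividing the 32000 s kernel limit by it to see how long the model can train) could look roughly like this; the safety margin and the callable name are assumptions.

```python
import time

KERNEL_BUDGET_S = 32000        # Kaggle kernel run-time limit mentioned above
SAFETY_MARGIN_S = 1800         # assumption: leave time for saving weights / inference

def extra_epochs_that_fit(train_one_epoch):
    """Time one epoch, then return how many more epochs fit inside the budget."""
    start = time.time()
    train_one_epoch()                          # hypothetical callable running one epoch
    epoch_s = time.time() - start
    return int((KERNEL_BUDGET_S - SAFETY_MARGIN_S - epoch_s) // epoch_s)
```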
Bengali.AI Handwritten Grapheme Classification
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks at Bengali.ai and kaggle for hosting this interesting competition as it was more to it than simple ensembling of computer vision models. Thanks to the authors and contributors of pytorch, pytorch-lightning, apex and pytorchcv for making my life easier.</p> <p><strong>Acknowledgements &amp; notes on GM</strong></p> <p>I am humbled to finally become competition grandmaster. I learned a tremendous amount of tricks in the recent two years and it would not have been able if it wasn’t for all this generous sharing of top solutions and extremely talented teammates I was lucky to have along the way. </p> <h3>Short Summary</h3> <p>My solution is a simple bag (different seeds) of the same model seresnext50 with custom head. Major boost in LB score came from redesign of consonant diacritic target. Minor LB improvement from postprocessing tweaks such as thresholding and finding closest train examples. </p> <h3>Preprocessing</h3> <p>I did not resize the image, the only thing I did was normalize each image by its mean and std, since I experienced a good regularization from that in previous competitions.</p> <h3>Architecture &amp; Training</h3> <p><strong>Backbone</strong> I used a plain seresnext50 but adjusted the very first layer to replace resizing the image and account for single channel. I did that by changing input units and reducing stride from (2,2) to (1,2).</p> <p>``` from pytorchcv.model_provider import get_model as ptcv_get_model</p> <p>backbone = ptcv_get_model('seresnext50_32x4d', pretrained=True) backbone = backbone.features backbone.init_block.conv = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(1, 2)) ```</p> <p><strong>neck</strong></p> <p>same as in maciejsypetkowski solution <a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/110543">https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/110543</a> </p> <p><strong>heads</strong></p> <ul> <li>3 heads, each for consonant, vowel and root and </li> <li>auxiliary head for grapheme with arccos loss</li> </ul> <p>So in total my used architecture looks like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1424766%2F3e968cbc82806b9211bf7609b662a18a%2Fbengali%20arch.001.jpeg?generation=1584456769762003&amp;alt=media" alt=""></p> <p><strong>Training</strong></p> <ul> <li>train on all data</li> <li>4 cycle cosine annealing with augmentation increase <ul><li>cycle 1 &amp; 2 cutmix</li> <li>cycle 3 &amp; 4 cutmix, cutout, scale, translate, rotate</li></ul></li> <li>Adam </li> </ul> <p><strong>Losses</strong></p> <ul> <li>root loss: CrossEntropy</li> <li>consonant loss: Multi label Binary Crossentropy</li> <li>vowel loss : CrossEntropy</li> <li><p>grapheme loss: ArcCos + CrossEntropy</p></li> <li><p>total loss = root loss + consonant loss + vowel loss + grapheme loss</p></li> </ul> <h3>The Magic (jôfôla)</h3> <p>Now to the key aspect of my solution. I realised all models perform poorly on consonant diacritic 3 (I abbreviate with cons3 in the following) and especially confuse it with cons4. 
And with poorly I mean game changing poorly: 89% recall compared to 99% for the other consonant classes. At first I just fought the symptoms (instead of the root cause) as several other teams did, e.g. by transferring low confident predicts from high frequent class cons4 to low frequent class cons3 and hence leveraging the definition of macro recall metric and that gave a good LB boost (0.9900 -&gt; 0.9915). That might have been enough for gold but I wanted to improve my chances. So I wanted to understand why the model did so poor on this one and read through discussions, wikipedia and other domain knowledge related things. Luckily, I found 2 things: - Wikipedia: As the last member of a conjunct, য jô appears as a wavy vertical line (called যফলা jôfôla) to the right of the previous member: ক্য "kyô" খ্য "khyô" গ্য "gyô" ঘ্য "ghyô" etc. In some fonts, certain conjuncts with যফলা jôfôla appear using special fused forms: দ্য "dyô" ন্য "nyô" শ্য "shyô" ষ্য "ṣyô" স্য "syô" হ্য "hyô".</p> <ul> <li>The consonant diacritics are itself composed from lower level components <ul><li>0: []</li> <li>1: ['ঁ']</li> <li>2: ['র', '্']</li> <li>3: ['র', '্', 'য']</li> <li>4: ['্', 'য']</li> <li>5:['্', 'র']</li> <li>6: ['্', 'র', '্', 'য']</li></ul></li> </ul> <p>So I realized that not only 3,4 and 6 have a jô component which might have the special jôfôla form but also that the special forms are in train for <strong>only</strong> cons4 but <strong>not</strong> for cons3, and thats why the models are so bad on cons3. My solution was the following. I recoded the cons classes into a multilabel classification problem using the following function:</p> <p>``` cons_components = ['ঁ', 'য', 'র','্']</p> <p>def is_sub(sub, lst): ln = len(sub) return any(lst[i: i + ln] == sub for i in range(len(lst) - ln + 1))</p> <p>def label2label_v2(label): res = np.zeros((6,),dtype=int) components = cons2components[label] for i,item in enumerate(cons_components): if item in components: res[i] = 1 if is_sub(['্', 'য'], components): res[4] = 1 if is_sub(['্', 'র'], components): res[5] = 1 return res</p> <p>```</p> <p>That function re-codes the 7 consonant class labels into multilabel 6 dimensional vectors</p> <p><code> 0 -&gt; [0, 0, 0, 0, 0, 0] 1 -&gt; [1, 0, 0, 0, 0, 0] 2 -&gt; [0, 0, 1, 1, 0, 0] 3 -&gt; [0, 1, 1, 1, 1, 0] 4 -&gt; [0, 1, 0, 1, 1, 0] 5 -&gt; [0, 0, 1, 1, 0, 1] 6 -&gt; [0, 1, 1, 1, 1, 1] </code></p> <p>That schema not only enabled to learn special forms of cons3 from cons4 but also enables to tune via thresholding (i.e. when to set a continuous prediction to 0 or 1) how the model separates the 6 classes. E.g. by setting a very low threshold on the 2nd logit more predictions would be set to 1 and hence more predictions would move from cons4 to cons3 as these two classes only differ in logit2. Purely changing the target for consonant diacritic in this way improved Public LB from 0.990 to 0.993 !</p> <h3>Postprocessing</h3> <ul> <li>I adjusted threshold for binarizing consonant diacritic. Best was a threshold of 0.1 for all logits classes. I tried not dare to temper which each logit threshold individually as I was afraid of overfitting. Result on Public LB: 9930 -&gt; 9937 </li> <li>I used cosine similarity to find closest train sample for root and vowel when top1 probability &lt; 90% and use its label. 
bestfitting did the same in the 1st place solution of Human Protein competition) as can be found under <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109</a> Result on Public LB: 9937 -&gt; 9942 </li> </ul> <h3>Choosing subs</h3> <p>I tempered around with binarization and metric learning thresholds and purely checked LB hence I was aware of a high risk to potentially overfit to LB. So I chose my best sub and a failsafe sub that I felt was the least tempered with but still doing ok. (Both were gold on private at the end)</p> <p>Cheers. </p> <p>I think the young imposter in this <a href="https://www.youtube.com/watch?v=Q0_Xajic_9U&amp;feature=youtu.be">video</a> gives some tips. Try to learn long term and as broad as possible (NLP, computer vision, tabular data) you never know when you could use a trick from a different field.</p> <blockquote> <p>Did you set the threshold search local CV?</p> </blockquote> <p>Yes, but I also probed leaderboard a bit for that. </p> <blockquote> <p>and if we get vectors other than those seven, what should we do?</p> </blockquote> <p>luckily there were not many of that cases. (Which was also a good check if the model is working properly) You could take the closest of the 7 using some distance metric (e.g. cosine distance). Here it made more sense to directly label as C=3, due to the design of the competition metric</p> <p>Every time you use the Public Leaderboard to evaluate if something is working or not, you fit on it and that's risky, since the behavior might not hold on private. So as the second submission, I tried to chose a less fitted to leaderboard one.</p> <p>1) normalize every single image based on its mean and std, which I thought makes sense, since you couldn't peak into the test images to norm over all samples</p> <p>2) Several heads or multi losses normally work well for me. I normally start with very simple weights. and then tune them a bit. I use a simple scaling factor to get the losses to same scale, when using different losses. </p> <p>3) Sometimes you can leverage from multi output. For example average the predictions, but here I did though using the grapheme prediction might hurt the models performance on unseen graphemes, so I did not try. </p> <p>Private score (roughly) 0.92 -&gt; 0.95, so a huge improvement</p>
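<p>To illustrate the decoding side of the multilabel consonant re-coding described above, here is a rough sketch (not the author's exact code) of thresholding the 6 sigmoid outputs at 0.1 and mapping the resulting bit pattern back to one of the 7 original classes, with unmatched patterns sent to cons3 as discussed in the replies.</p>
<pre><code>import numpy as np

CODES = {
    (0, 0, 0, 0, 0, 0): 0,
    (1, 0, 0, 0, 0, 0): 1,
    (0, 0, 1, 1, 0, 0): 2,
    (0, 1, 1, 1, 1, 0): 3,
    (0, 1, 0, 1, 1, 0): 4,
    (0, 0, 1, 1, 0, 1): 5,
    (0, 1, 1, 1, 1, 1): 6,
}

def decode_consonant(probs, threshold=0.1):
    """probs: length-6 array of sigmoid outputs for the 6 components."""
    bits = tuple(int(p > threshold) for p in probs)
    # exact match against the 7 known patterns; anything unseen goes to cons3,
    # which the author says was the sensible default under this metric
    return CODES.get(bits, 3)

# example: a modest probability on logit 2 moves the prediction from cons4 to cons3
print(decode_consonant(np.array([0.01, 0.15, 0.12, 0.95, 0.90, 0.02])))  # -> 3
</code></pre>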
Bengali.AI Handwritten Grapheme Classification
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations, too. <a href="/roguekk007">@roguekk007</a> ! We used Nelder-Mead optimizer to do this post-processing. Nelder-Mead takes initial values as an arg. And optimal results, <code>coefficients</code>, will depend on this initial values to some extent ( will depend on situations, iteration times, tolerance, ... ). As I wrote, we used inverse values of counts as initial values and got consistent results (coefficients) in all experiments. </p> <p>I think our post-processing is almost same as Chris's one. </p> <p>Good point, thanks <a href="/murphy89">@murphy89</a> . I think my explanation is a little short. <br> This architecture was motivated by topic 2, <code>Co-occurence</code>.</p> <p>I wrote, <br> &gt;build a model which consider both co-occurence and non-co-occurence of R, V, C to be robust with unseen data in SE-ResNet50 based model. This model has dual paths after SE-ResNet block. One is straightforward path from GeM2D to each top fc layers (R, V and C). And the other is through 2 fc layers (please see above figure).</p> <p>R, V and C share the same weight in 2 fc layers (1280 fc -&gt; 512 fc). I used this architecture to catch co-occurence, because I thought sharing same weights means considering the relationship of R, V and C: co-occurence. <br> Co-occurence of R, V and C is important. But if we make models learn this relationship extremely, models will become unpredictable to unseen data (unseen combination of R, V and C).</p> <p>Hope this is what you want to know. </p> <p><a href="/roguekk007">@roguekk007</a> </p> <blockquote> <p>I am wondering why would any optimization algorithm be needed?</p> </blockquote> <p>Because the distribution of each class (R + V + C = 186 classes) is not uniform. I mean counts of some classes are higher than others as bellow figure. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F479538%2F1a6151694102081712301b8bf3bf49b7%2Ffig.PNG?generation=1584520815962422&amp;alt=media" alt=""></p> <p>In this case, evaluation metric, Recall ( <strong>Macro Averaged Recall</strong> ) is not fair for all classes. Just adjusting probability higher for low counted classes and lower for higher counted classes makes Recall better. <br> I think this is more elaborately explaned in <a href="/cdeotte">@cdeotte</a> 's <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">thread</a>. </p> <p><a href="/corochann">@corochann</a> Thanks! I also learned from your kernel posted at the early stage in this competition. Looking forward to seeing you in another competition.</p> <p>At first, congratulations to the winner <a href="/linshokaku">@linshokaku</a> and all participants who finished this competition. <br> We did not imagine such a big shake ( We estimated that there would not be so much unseen data in private test dataset ). </p> <p>Anyway thank you Bengali.AI and kaggle for organizing this competition, also to my teammate <a href="/nejumi">@nejumi</a> !</p> <hr> <p>Below figure shows an overview of our solution. I think our solution is very simple and nothing special. 
<br> ( BTW 1st place solution is very unique and impressive for me ;-) ) Here I will add naive explanations to some points worth to be noted. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F479538%2F7b541b6993070fbf37181aa631843011%2FBengali_AI.png?generation=1584423861453528&amp;alt=media" alt=""></p> <p>NOTE: If you want to see a higher resolution image, please refer this <a href="https://speakerdeck.com/hoxomaxwell/kaggle-bengali-dot-ai-6-th-place-solution">link</a>.</p> <ol> <li>Intensive augmentations We used CutMix, MixUp, Cutout, width/height shift, rotate, Erosion, <a href="https://www.kaggle.com/haqishen/gridmask">GridMask</a> ( credit to <a href="/haqishen">@haqishen</a> ), Zoom... <br> CutMix and Cutout worked well for both of us, but in my case, height shift and rotation did bad, maybe due to 137 x 236 size (some graphemes become outside image region :-( ). <br> <br></li> <li>Co-occurence There are limited combinations between R, V and C in train dataset. But as a host said, in test dataset, some unseen combinations will come. So I did a simple trick; build a model which consider both co-occurence and non-co-occurence of R, V, C to be robust with unseen data in SE-ResNet50 based model. This model has dual paths after SE-ResNet block. One is straightforward path from GeM2D to each top fc layers (R, V and C). And the other is through 2 fc layers (please see above figure). <br></li> <li>Multi stage learning ( Xentropy, Reduced Focal Loss (credit to <a href="/phalanx">@phalanx</a> ) ) In the middle point of this competition, <a href="/phalanx">@phalanx</a> advised us about OHEM in this nice <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/123198#734490">thread</a>. <br> Indeed, our cv score (recall) became better with the same technique(Reduced Focal Loss). <br></li> <li>Different image size ensemble In some CV competitions, different size image ensemble boosted scores. We thought this will be same in this competition. This ensemble pushes our place from 12th to 6th. <br></li> <li>Post Processing (big impact for us) We adjusted our prediction to improve evaluation metric in this competition (Recall). Optimization was performed with all classes (168 + 11 + 7). And we used (1/each class counts) as initial values for optimization. The optimal coefficients are almost close to initial values. We were very nervous to use this post processing, but as a result this pushes us to gold range.</li> </ol>
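<p>As an illustration of the post-processing step (point 5), here is a rough sketch of how per-class coefficients could be tuned with Nelder-Mead, initialized at the inverse class counts as described above. For simplicity it optimizes one target head at a time, and all variable names are assumptions rather than the team's code.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import recall_score

def tune_coefficients(val_probs, val_labels, class_counts):
    """val_probs: (n_samples, n_classes) softmax outputs for one target head,
    val_labels: (n_samples,) ground truth, class_counts: (n_classes,) train counts."""
    def neg_macro_recall(coef):
        preds = np.argmax(val_probs * coef[None, :], axis=1)
        return -recall_score(val_labels, preds, average="macro")

    x0 = 1.0 / np.asarray(class_counts, dtype=float)   # initial values = inverse counts
    res = minimize(neg_macro_recall, x0, method="Nelder-Mead",
                   options={"maxiter": 2000})
    return res.x

# coef = tune_coefficients(val_probs, val_labels, class_counts)
# test_pred = np.argmax(test_probs * coef[None, :], axis=1)
</code></pre>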
Bengali.AI Handwritten Grapheme Classification
7th Place Data-based Generalization Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We keep the split consistent in the two stages to prevent potential leaking</p> <p><a href="/kurianbenoy">@kurianbenoy</a> Sorry our code is quite scattered :) My teammates should be able to handle this. But you should be able to achieve comparable performance with any normal high-scoring pipeline (.9970+CV locally) + simple adding of the synthetic data. You can find the data link herehttps://www.kaggle.com/roguekk007/bengaliai-synthetic-magic/kernels. It's quite clean. I suggest you to simply add this to your best CV pipeline and experiment😃 </p> <p>Happy to share it😊 </p> <p>Your kernel helped me a lot during establishing baseline. Sorry about the shakeup</p> <p><a href="/weimin">@weimin</a> Answering (2) first because it helps with (1) 2) We can decompose each grapheme into a sequence of unicode character simply by calling character[i] after reading the character from .csv (or the dictionary). This sequence is generally (95%+ of the time) a combination of the grapheme's RCV in one order or another. I created a comprehensive list of 168x11x7x6 possible combinations this way and did simple matching. Some of the graphemes (~5%) are more complicated, there might be repeated characters in the sequence, so for graphemes that failed stage1 decomposition I simply did matching based on unique elements (set(seq)==set(element of comprehensive list)). Some items in the dictionary also have extraordinarily long sequences; I guess this is because they are compound characters; I simply assumed they will not appear in the testset and did not choose any grapheme with its sequence length &gt;10.</p> <p>1) I tried to decompose every graheme in the dictionary, and simply ignored graphemes that cannot be labeled through the method in (2) and ignored too-long graphemes. Because we had so short time left the method in (2) is only preliminary (has 2 failure-cases which I had to hardcode for graphemes in train). I'm sure that can be improved.</p> <p>Hope this helps</p> <p><a href="/murphy89">@murphy89</a> Thank you. we are using simple stratified KFold split for synthetic dataset. We want the model to see as many graphemes as possible</p> <p>Happy that the shakeup has been kind to us :) </p> <p>Many thanks to Kaggle and Bengali.AI for hosting such an interesting competition. It is my first all-in kaggle competition, and needless to say none of this would have been possible without our awesome team Igor, Habib, Rinat, and Youhan. <a href="/cateek">@cateek</a> <a href="/drhabib">@drhabib</a> <a href="/trytolose">@trytolose</a> <a href="/youhanlee">@youhanlee</a> </p> <h3>Analysis: This competition has a distinctly stratified LB:</h3> <ul> <li>With a good pipeline, single model with baseline augmentations LB ~.97. </li> <li><strong>Removing crops, training on full-resolution, adding cutmix / cutout, and training enough</strong> should get model up to LB .985+, and with some tweaking ~.989+. 
These are well-covered in the discussions posts.</li> <li>Our best single model is from Rinat, PNASNet-5-Large from <a href="https://github.com/Cadene/pretrained-models.pytorch">Cadene's repository</a> which scores LB 0.9900, CV .9985. I believe most of the participants LB .9850-.9910 are using ensemble of models scoring around this range.</li> </ul> <p>At this stage (.9900-.9905) we found it very hard to further improve our LB. As <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/134601">this post</a> points out, higher CV does not necessary mean better LB anymore, so we were stuck for a while (a long while...).</p> <h3>Improving Generalization</h3> <p>With the CV &amp; LB relationship it is easy to see that the problem is that our model <strong>could not generalize to unseen graphemes</strong> (by grapheme I mean triplet combination of grapheme root, vowel and consonant diacritics). There are only ~1290 graphemes in training set, while there are 12936 possible combinations. Our models might be biased to output seen triplets; for whatever reason, as Qishen Ha's wonderful <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/134434">post</a> pointed out, our models perform badly on these graphemes.</p> <p>In hindsight, it's no magic that the most convenient way to improve generalization is to <strong>add more data</strong>. When investigating the grapheme representations, I found that <strong>graphemes are actually encoded as sequence of unicode characters</strong>. The roots and diacritics also have their corresponding sequences. The most magical part is that <strong>the unicode sequence for the grapheme is a combination of the sequences of its roots</strong>. Best exemplified using this image, where the four rows are <em>grapheme, grapheme root, vowel, and conso</em>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1943421%2F82988244ae7fb6d4ca47f6c6094a3116%2Fimage_480.png?generation=1584399876994128&amp;alt=media" alt=""></p> <p>This observation works (with some minor exceptions) on all graphemes in the training set, and using a very basic algorithm I was able to generate correct labels for all 1295 graphemes in training set except for 2. Following Guanshuo Xu's post pointing to <a href="https://github.com/MinhasKamal/BengaliDictionary">this repository</a>, we are able to access a comprehensive list of grapheme combinations. Using the decomposition algorithm we selected ~3000 graphemes that could be broken down into the given roots and diacritics. Our LB boost on the final day is from better generalization on these graphemes, which partially overlaps with train &amp; test set.</p> <p>With these graphemes, we rendered them using various fonts and obtained a cleanly-labeled synthetic dataset with ~47K images spanning these characters which can be found <a href="https://www.kaggle.com/roguekk007/bengaliai-synthetic-magic">here</a>, all the ingredients for creating this synthetic dataset has been publicly available. Our hope was that by seeing synthetic images, our models should be able to at least learn the topological (if not stylistic) features of the grapheme. And it turned out they do. 
Here is a sample of synthetic image in our dataset.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1943421%2Ff582156ca8f174802dbbc28a18fa1562%2F1_EkSharifa.png?generation=1584402247696903&amp;alt=media" alt=""></p> <p>At this point we have only 24 hours until the end of the competition so just finetuned our previous models with the synthetic data added to both original training &amp; validation, thus resulting in the exciting quantum jump on the last day😊 </p> <h3>Some Details</h3> <p>My teammates have wonderful pipelines, here are some approaches which have turned out useful: - Train on 128x128 data for ~100 epochs, then finetune on 224x224 data. It definitely yields faster training and maybe better generalization (used in Rinat's pipeline) - Anneal augmentations except for cutmix gradually to 0 at late stages of training (used in Habib's and Igor's pipeline) - A wonderful new architecture called MixNet, we found that it nicely complements PNAS5 for ensembling and performs pretty well. - <strong>Loss:</strong> Plain cross entropy. Tried arface (did not work), focal (did not work), normalized softmax+label smoothing (looked promising but ditched along with my pipeline) - <strong>Scheduler:</strong> ReduceLRonPleateau for Rinat's pipeline, and hand-monitored dropLR for Habib's and Igor's pipelines.</p> <h3>Finally</h3> <p>It has always been my dream to do a gold-medal solution write-up before 17th birthday😃 . In the end, more than happy to see hundreds of experiments and weeks of stress translate successfully into solid rank. With more time than 24 hours, we would have: 1. Searched for a more comprehensive list of graphemes 2. Created more synthetic data with more fonts 3. Somehow make synthetic data more similar to handwritten data; we are using dimming max_val to 230 and gaussian-blur, but can definitely be improved by maybe CycleGAN 4. Make flipping work as pointed out <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/126761">here</a>. It's really ironic I did not make my own idea work when Qishen Ha did; guess this is the difference🤓 5. Try more model-regularization methods, ShakeDrop was on the list and looked promising but was preceded by other priorities.</p> <p>Overall, this has a very exciting and memorable puzzle-solving competition. Again kudos to the team and: Love Kaggling</p>
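<p>To make the unicode decomposition idea concrete, here is an illustrative sketch with hypothetical helper names (not the team's code). It builds every root+vowel+consonant concatenation in all six orders, matching the 168x11x7x6 candidate list mentioned above, and falls back to matching on unique characters for the harder graphemes.</p>
<pre><code>from itertools import permutations

def build_lookup(root_seq, vowel_seq, cons_seq):
    """root_seq / vowel_seq / cons_seq: dicts {component_id: unicode string}.
    Returns {grapheme string: (root_id, vowel_id, cons_id)}."""
    lookup = {}
    for r, rs in root_seq.items():
        for v, vs in vowel_seq.items():
            for c, cs in cons_seq.items():
                # the components can appear in different orders, hence the
                # 168 x 11 x 7 x 6 candidate list mentioned above
                for parts in permutations([rs, vs, cs]):
                    lookup.setdefault("".join(parts), (r, v, c))
    return lookup

def decompose(grapheme, lookup):
    if grapheme in lookup:                    # stage 1: exact sequence match
        return lookup[grapheme]
    for key, label in lookup.items():         # stage 2: match on unique characters
        if set(key) == set(grapheme):
            return label
    return None                               # too long / not decomposable -> skip
</code></pre>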
Bengali.AI Handwritten Grapheme Classification
8th Place Solution /w Code (minimal ver.)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your question and here's my answer.</p> <blockquote> <p>Why are you using logits_metric for the prediction?</p> </blockquote> <p>It performs a little bit better than the softmax head in my validation set, like 0.00002 better.</p> <blockquote> <p>Is it because arcface can perform better because the loss is used with 1295 graphemes, while binary classification would be harder to have "seen vs unseen"?</p> </blockquote> <p>Yes you're right. As for the threshold selection, it's based on validation score and LB score.</p> <blockquote> <p>why do you still fit to Linear(1295) in the Unseen model, if you are trying to predict unseen graphemes?</p> </blockquote> <p>The Linear(1295) is not parallel with Linear(168) in unseen model, but is after it. It helps the model to converge faster.</p> <p>Thanks! If the top players are showing more elegant solution than me, I won't be disappointed. And actually I'm not ;)</p> <p>I choose seen_th by validation and LB, for the public kernel, I tried 0.7, 0.75, 0.8, 0.85, 0.825. You can see it in the older version.</p> <p>Yes, 0.7 performs the best in public LB and I guess there's more unseen in private so I tuned it to 0.75 before the end. I got my best private LB score by the final submission but obviously 0.75 is still less aggressive.</p> <p>Thanks and congratulation to you, too! It's the 3rd players got the best score without external data, not me ;)</p> <p>Hi, everyone</p> <p>First of all, I want to thank Kaggle and the hosts for hosting the competition.</p> <p>Congratulation to all winners!</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F448347%2Fd37b2cfc0fd3524b5be0ba10e5d8b649%2F1.png?generation=1584409865722874&amp;alt=media" alt=""></p> <p>An illustration of my pipeline. It contains 2 models, Seen Model and Unseen Model.</p> <h1>Summary</h1> <ul> <li><p>NO EXTERNAL DATA</p></li> <li><p>I used 2-stage prediction in my pipeline, it contains 2 models as one set (Seen Model and Unseen Model) as shown in the figure above.</p></li> <li>I used Arcface to distinguish unseen graphemes. Just like to distinguish unseen faces ;)</li> <li>If an image is detected to be seen, I use the output of Seen Model as prediction directly. Else if the image is detected to be unseen, I’ll pass it to Unseen Model to get unseen prediction.</li> <li>The different points between Seen Model and Unseen Model are as follow: <ul><li>Seen Model only uses very few augmentation to make sure it can ‘overfit’ to seen graphemes, while Unseen Model uses heavy augmentations to make it generalize to unseen ones.</li> <li>Seen Model has 5-head outputs, including Arcface output. 
While Unseen Model has normal 4-head outputs.</li> <li>Beside the number of output heads, the architecture of the top is a little bit different as well.</li> <li>I decompose the predicted grapheme to 3 components when the prediction is made by Seen Model, while I use predicted 3 components directly when the prediction is made by Unseen Model.</li></ul></li> </ul> <h1>Code</h1> <p>I’ve published a minimal version of my training &amp; testing pipeline on Kaggle kernel as follow:</p> <ul> <li>Step 1. <a href="https://www.kaggle.com/haqishen/bengali-train-seen-model">Bengali Train Seen Model</a> (trained ~5h on kernel)</li> <li>Step 2. <a href="https://www.kaggle.com/haqishen/bengali-train-unseen-model">Bengali Train Unseen Model</a> (trained ~8.5h on kernel)</li> <li>Step 3. <a href="https://www.kaggle.com/haqishen/bengali-predict-with-seen-unseen-models">Bengali Predict with Seen &amp; Unseen Models</a> (submission ~20min)</li> </ul> <p>The minimal version with two efficientnet-b1 which are training on 128x128 for 30 epochs can give you public LB 0.985+ or private LB 0.935+ If you find my notebooks helpful, please upvote them! </p> <hr> <p>To get over 0.993+ or 0.950+ on public or private LB, what you need to do is just simply: * Change the backbone to efficientnet-b5 / b6 / b7 / b8 * Train on 224x224 * Train for 60~90 epochs * Emsenble more Seen &amp; Unseen Model</p> <p>If you have any questions please feel free to leave a message to me. Thanks!</p> <p>Thanks. Augmentation is not the key for this competition. I don't suggest you pay too much attention on it. I use cutout, gridmask and cutmix for Seen Model, and bunch of other usual augmentations for Unseen Model. Please refer to my training kernel for that.</p> <p>I updated it in the <code>Summary</code> part, please have a look at it ;)</p> <p>I use Arcface to find outliers when doing prediction.</p> <p>Thank you guys for the kind words! I've update the <code>Summary</code> part.</p> <p>If all test data is seen graphemes, decompose from predicted grapheme will give better score, and decompose from arcface prediction is a little bit better than decompose from softmax.</p> <p>You can decompose component from graphemes and then calculate the score on it...</p> <p><code>square image</code> is giving higher local score in my case. But not necessarily better for all CNN.</p> <p>I'm sorry but to be honest, without a local GPU it's almost impossible to get a gold medal in CV competition nowadays....</p> <p>Well, maybe you can refer to Arcface paper or some blogs. I highly recommend it.</p> <p>I can set 2x batchsize by using apex. It's better for batch norm.</p>
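<p>To make the seen/unseen routing concrete, here is a minimal sketch of the two-stage prediction. The model call signatures, the grapheme-to-components lookup, and the exact head used for the confidence score are my assumptions, not the author's code; only the threshold values are taken from the discussion above.</p>
<pre><code>import torch

SEEN_TH = 0.75   # 0.7 was best on public LB; 0.75 was used for the final sub

@torch.no_grad()
def predict_one(image, seen_model, unseen_model, grapheme_to_rvc):
    # assumed: seen_model returns grapheme scores from its ArcFace/metric head,
    # unseen_model returns (root, vowel, consonant) logits
    grapheme_probs = seen_model(image).softmax(dim=-1)
    conf, grapheme_id = grapheme_probs.max(dim=-1)

    if conf.item() >= SEEN_TH:
        # "seen": take the whole-grapheme prediction and decompose it
        return grapheme_to_rvc[grapheme_id.item()]

    # "unseen": trust the heavily augmented component model instead
    root, vowel, cons = unseen_model(image)
    return root.argmax(-1).item(), vowel.argmax(-1).item(), cons.argmax(-1).item()
</code></pre>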
Bengali.AI Handwritten Grapheme Classification
9th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. I thought that as well orignally, but experiments showed differently :)</p> <p>I tried 3 - 1*grapheme + 0.1 * three tasks, 1 * grapheme + 1 * three tasks, 0.1 * grapheme + 1 * three tasks, and found the first way works best </p> <p>yes, like PL. since I have inference for 'grapheme' as well, I will do argmax to get the hard label for 'grapheme' for all external dataset. Then after I combined with original training data to retrain the model, I will do balanced sampling during training based on the 'grapheme' for all data. Let me know if this is not clear</p> <p><strong>Model</strong> My model backbone is simply based on seresnext50, using CV of randomly split 4 folds for training, no stratification. However, i have two extra loss heads which I found quite helpful to regularize the model - 1292-grapheme head CE loss and a Arcface loss of the grapheme head. </p> <p>However, the 1292 head is on top of the combined 3 tasks heads with binary mapping - </p> <p><code>self.logit_to_1292 = nn.Linear(168+11+7, 1292, bias=None)</code></p> <p><code>out_1292 = self.logit_to_1292(out_3tasks)</code></p> <p>where I initialize the <code>self.logit_to_1292</code> layer with a frozen binary weight that has dimension of (168+11+7, 1292), where it correctly maps each three combination of (grapheme_root, vowel_diacritic, consonant_diacritic) to its corresponding grapheme. I don't train this layer, however.</p> <p>During back-propagation, I also used a very low weighting for the 3 tasks head and arcface head (0.02), but have a very high weight for grapheme head (1), which is quite counter-intuitive. </p> <p><strong>Augmentation</strong></p> <p>I used 75% of CutMix, sample-wise. Also I gradually add in grid mask as training goes on, but I used 'grid-mix' instead - so instead of removing those grids, I randomly shuffle them within each image, so the per image stats won't change comparing to if you crop out those grids from image. I found this slightly more helpful. </p> <p>For sample-wise cut mix (this is the most helpful data aug) - each image will randomly draw a binary cutmix mask during pytorch's data loading function, and I only shuffle and pair up the images within each batch during training, using this pre-computed mask. </p> <p>I also add in 5% of augmentation proposed by XingJian Lyu. </p> <p>Overall cutmix is the most useful part. </p> <p>*<em>Self-training with external dataset *</em> I would say this is the part that helps my model much in generalizing to the test set. I only discovered the external dataset of ekush disclosed on the external data disclosure thread on the last week of the competition. </p> <p>Basically I downloaded and resize those 300K unlabeled images, and used my best trained 4 folds model to generate soft targets for the grapheme (1292) and 3 tasks heads (168+11+7). Then combined with original training data, I re-train my above model in 4 folds, with soft targets as labels. In each fold, I combine the entire 300K external data plus the training folds to train each model, and do this 4 times. 
</p> <p>This approach is similar to the idea of noisy students self-training paper shared earlier on in the forum, where I generated soft targets without noise, and retrain the model with noise. The student and teacher models are exactly same as above. </p> <p>The key thing is how you do sampling with those unlabeled dataset - the one that works best is first to generate hard labels for graphemes, and then combined with our train dataset, to use Heng's balanced sampler to do sampling based on all graphemes. Other sampling strategies simply don't work. </p> <p>This finally pushes my model from 994 to 996 on Local, and on LB to improved from 985 to 990, where ensembling of the 4 folds further boosts it to my current score of 916 (all based on public score) </p> <p><strong>Things didn't work for me</strong></p> <p>OHEM Efficientnet (slightly worse than seresnext50) senet152 dropblock attentiondrop shake-drop intensive-augmentation or autoaugment using flip or 180 rotate as addition identities (strategies that worked best for humpback competition)</p> <p>I used a simplified way to sample the cutmix box: </p> <p><code>shorter = min(img_shape[0], img_shape[1])</code> <code>cut_size = np.random.randint(int(shorter*0.1), int(shorter*0.8))</code> <code>bbx1, bby1, bbx2, bby2 = rand_bbox_center_square(img_shape, cut_size)</code></p> <p>I found the hyper params for cut mix are not sensitive. beta distribution is not needed, just cut the square box uniformly random</p>
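<p>The frozen binary mapping layer described above, from the 168+11+7 component logits to the 1292 grapheme logits, could be built roughly as follows. The grapheme-to-components list is an assumed input and the helper name is hypothetical, not the author's exact code.</p>
<pre><code>import torch
import torch.nn as nn

def build_logit_to_1292(grapheme_rvc, n_root=168, n_vowel=11, n_cons=7):
    """grapheme_rvc: list of (root_id, vowel_id, cons_id), one entry per grapheme."""
    weight = torch.zeros(len(grapheme_rvc), n_root + n_vowel + n_cons)
    for g, (r, v, c) in enumerate(grapheme_rvc):
        weight[g, r] = 1.0                       # pick the grapheme's root logit
        weight[g, n_root + v] = 1.0              # ... its vowel logit
        weight[g, n_root + n_vowel + c] = 1.0    # ... and its consonant logit

    layer = nn.Linear(n_root + n_vowel + n_cons, len(grapheme_rvc), bias=False)
    layer.weight.data.copy_(weight)              # nn.Linear stores weight as (out, in)
    layer.weight.requires_grad_(False)           # frozen, never trained
    return layer

# out_1292 = layer(out_3tasks)   # (batch, 186) component logits -> (batch, 1292)
</code></pre>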
Bengali.AI Handwritten Grapheme Classification
Gold medal - 12 place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you</p> <p>Thank you</p> <p>Hi <a href="/cnzengshiyuan">@cnzengshiyuan</a> I was looking for a revenge to myself for that enormous drop in private leaderboard on 2019 DSB. I found my peace now ! :D</p> <p>Thank you <a href="/yuanlin08">@yuanlin08</a> </p> <p><a href="/tonychenxyz">@tonychenxyz</a> The mean of all 5 densenet201 folds was 0.9390 (private score). I was courious about efficientnets, but I could not get the gpu's to test them also, I am sure many guys had success with them</p> <p>First I want to thank the organizers for an awesome competition. It was really challenging My team got a good result, we have our first gold medal and most important than that, there were a lot of useful things to learn</p> <p>And a quick overview of our solution</p> <p>We used 3 different types of architectures </p> <p><strong>First Architecture Details:</strong> - 5 folds created with MultilabelStratifiedKFold (mixed up by mean for final prediction) - Image size: 128x128 - Augmentation: Cutmix (0.4) / Mixup (0.4), choosing by random one of them - Label Smoothing - Adam (lr=0.001, ReduceLROnPlateau(factor=0.8, patience=5)) - Trained for 100 epochs - One head with 186 results than split the results into 3 softmax</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4005865%2Fff5d567efb67719ae9c74fd1c7204fdc%2F1.png?generation=1584410546592180&amp;alt=media" alt=""></p> <p><strong>Second Architecture Details:</strong> - 5 folds created with MultilabelStratifiedKFold (mixed up by mean for final prediction) - Image size: original image - Augmentation: Cutmix (choose random from 0.3, 0.4 ,0.5, 0.6, 0.7) + Cutout(max_holes=6, max_height=12, max_width=12, p=0.5 ) - Label Smoothing - Adam (lr=0.001, ReduceLROnPlateau(factor=0.8, patience=5)) - Trained for 150 epochs <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4005865%2F39752ea74f035ab413744a7dada0717f%2F2.png?generation=1584410030146832&amp;alt=media" alt=""></p> <p><strong>Third Architecture Details</strong> - 2 folds created with MultilabelStratifiedKFold (mixed up by mean for final prediction) - Image size: 300x300 - Augmentation: Cutmix (choose random from 0.3, 0.4 ,0.5, 0.6, 0.7) + Cutout(max_holes=6, max_height=12, max_width=12, p=0.5 ) - Label Smoothing - Adam (lr=0.001, ReduceLROnPlateau(factor=0.8, patience=5)) - Trained for 150 epochs <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4005865%2Fbc442d6921d47e44b5bd97fb2a7efc46%2F3.png?generation=1584410773226026&amp;alt=media" alt=""></p> <p>The final submission was the mean of all the folds from the 3 architectures (12 folds) each with the same weight The difference from public to private was 0.9784 -&gt; 0.9507</p> <p><strong>Conclusions</strong> In the end, I can say that our solution did not use any unusual ace in the sleeve, not any unusual trick, just respecting the good practices in computer vision.</p> <p>Thank you <a href="/greatgamedota">@greatgamedota</a> </p> <p>Thank you <a href="/quandapro">@quandapro</a> . 
To be honest, in the last days I had kind of lost hope of getting a good result, but I stuck to the planned methodology and continued.</p>
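<p>The "one head with 186 results, then split into 3 softmax" idea from the first architecture above can be sketched as follows; the backbone and feature dimension are placeholders, not the team's exact code.</p>
<pre><code>import torch
import torch.nn as nn

class SingleHeadModel(nn.Module):
    def __init__(self, backbone, feat_dim, n_root=168, n_vowel=11, n_cons=7):
        super().__init__()
        self.backbone = backbone                      # any pooled feature extractor
        self.fc = nn.Linear(feat_dim, n_root + n_vowel + n_cons)
        self.splits = [n_root, n_vowel, n_cons]

    def forward(self, x):
        logits = self.fc(self.backbone(x))            # (batch, 186)
        root, vowel, cons = torch.split(logits, self.splits, dim=1)
        # each chunk gets its own softmax / cross-entropy during training
        return root, vowel, cons
</code></pre>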
Bengali.AI Handwritten Grapheme Classification
Kaggle Competition Experience(Team: Learning at snail pace) | Rank 1473
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yea, I think the formula is just start participating in Kaggle</p> <p>Thanks to Kaggle and Bengali AI for organising this competition. I would like to congratulate all the winners of this competition.</p> <p><code> Public LB: 0.9749(Rank 297) Private LB: 1473 </code></p> <p>I joined this competition when it was started, yet only got actively involved in this competition after <a href="/seesee">@seesee</a> released his public notebooks on using TPUs. At that time, I was participating in Kaggle’s flower recognition competition(Playground) and was able to understand what @see--- Notebook did.</p> <p>After a bit of fine-tuning, with hyperparameters(changing to EfficientNet B4) and training for 30 epochs, I was able to get a score of 0.9729. I was all excited because it was the first time I got a higher score than the best available public kernels available(in a live competition) at that time + I didn’t make many changes to original kernel.</p> <p>Last 7 days of the competition was full of excitement for me(as my college was shut down due to Covid-19 situation in Kerala, India). After getting this initial result, I read through almost all the posts in discussions.I tried a variety of ideas and organised them as github issues(suggestion from my teammate). I tried to implement augmentations like Mixup, Cutmix, Gridmask etc(Thanks to <a href="/cdeotte">@cdeotte</a>, <a href="/xiejialun">@xiejialun</a> Notebook from Flower classification with TPUs notebooks). I tried training on all the architectures of EfficientNet from B3 to B7. I started retraining my models with weights from already available model, yet for me always retraining on weights made my model perform worst.</p> <p>For me this competition was like <a href="/init27">@init27</a> saying, kaggle competition felt like a 100 Mile sprint where you are competing against people on Supercars (GrandMasters with a LOT of experience) while I was running barefoot.</p> <p>I tried changing with other architectures like Densenet 169, 121. Yet single models based on this approach was not so effective. After doing all these experiments for the last 4-6 days, I was not able to improve my model, any further. It gave me a feel when all the Kaggle grandmasters and masters were able to implement ideas and do things quickly, I was not able to perform so well. I saw a lot of failed ideas, and even after reading lots of papers I was not able to transform certain augmentations into Tensorflow for BengaliAI competition from Flower Classification with TPUs competition.</p> <p>On the final day of competition, I trained my model with EfficientNetB4 for 30 epechs, with step learning rate and regularisation. And I was able to obtain my highest score of 0.9749 in public LB(which was able to have a better score than the best public kernel available then). I tried ensembling weights with Densenet, but it didn’t work out any good. </p> <p>For me, this competition was a huge learning experience for me and I was able to spend about 100+ hours for this competition and learned a lot of new things from experienced folks here. 
I would like to thank <a href="/hengck23">@hengck23</a> for encouraging to share the solution. </p> <p>Obviously, after the competition, I got a lot of new insights which I am trying to ponder and experiment more in the coming days(both in Tensorflow and Pytorch).</p> <p>If you interested to look my code do check my kernels:</p> <p><a href="https://www.kaggle.com/kurianbenoy/efficientnetb4-parameter-tweaks">Training Notebook</a></p> <p><a href="https://www.kaggle.com/kurianbenoy/final-submit-inference">Inference Notebook</a></p>
Bengali.AI Handwritten Grapheme Classification
My Solution Writeup + Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was an awesome competition! Congrats to the winners! Here is my final writeup for this comp: <a href="https://github.com/GreatGameDota/Bengali-Character-Classification">https://github.com/GreatGameDota/Bengali-Character-Classification</a> A shame I didn't pick the right submission but I'll take what a learned and apply it to the next competition! Happy modeling!</p> <p>I wish I had the code I used to share. What I remembered is close but a lot of it I'm still unsure, especially the amount of epochs which is just a guess.</p>
Bengali.AI Handwritten Grapheme Classification
My overview solution 0.9361 with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all 🤠 </p> <p>I have published my solution that I didn't choose as a final score 😏 in this <a href="https://github.com/tanyapohn/kaggle-bengali-2020">github</a> where I explained most of my experiments.</p> <p>In this code, you can play around with your idea as well since I wrote it as simple as I can.</p> <p>Also, this <a href="https://www.kaggle.com/moximo13/spatial-transform-network-bengali?scriptVersionId=30662848">kernel</a> is a submission forked from my private one (uncleaned code).</p> <p>The reason I give up on this experiment is that I couldn't boost my accuracy on LB anymore (pretty bad decision) Hope this help beginners who just start their path in Machine Learning.</p> <p>Any comments or suggestions with my code and experiments are appreciated.</p> <p>Thank you! 😊 </p>
Bengali.AI Handwritten Grapheme Classification
Our simple approach 😮 and Gold Solutions Summary 💪
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello, Kaggle community 🖖</p> <p><a href="https://github.com/Mishunyayev-Nikita/Kaggle-Bengali.AI-Handwritten-Grapheme-Classification-2020-solution">Here</a> you can find the code with our solution for this competition with huge shake-up 😲 and a list of cool approaches from the winners that helped them win 💯 </p> <p>Thanks to the whole Kaggle community and good luck!</p>
Bengali.AI Handwritten Grapheme Classification
Potential 6th place solution with a single model ! [all about layers]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello! After quite a rough LB shakeup, we ended up 317th on the private LB after being 79th on public. However, scrolling down through our submissions, we found one low-scoring public kernel which could have given us ~70th place on private, with a public LB of 0.9356. On top of that, using the post-processing Chris described <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">here</a>, we managed to hit a private LB score of 0.9557 with a public LB of 0.9703, which would potentially give us 6th place as the private leaderboard stands right now.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3459992%2F95ceecb84fb83408fa0a27fc8dad1c60%2Foof.PNG?generation=1584697688199528&amp;alt=media" alt=""></p> <p>This was really unexpected for us, because it was a fairly low public LB score and we had been stuck there for about a month. </p> <h2>The Solution -</h2> <p>Briefly, the solution is really simple.</p> <ul> <li>We use a single model, a SEResNeXt50 from pretrainedmodels, with 3 different output heads, one for each class, as used by iafoss in <a href="https://www.kaggle.com/iafoss/grapheme-fast-ai-starter-lb-0-964">this</a> notebook.</li> <li>Data used was 128x128 images, cropped using <a href="https://www.kaggle.com/iafoss/image-preprocessing-128x128">this</a> notebook.</li> <li>For training, we used basic Shift, Scale, Rotate, and Mixup only. Training was done for ONLY 30 epochs with a OneCycle LR policy and the Over9000 optimizer.</li> </ul> <p>With this, we get a public LB of ~0.966 and a private LB of ~0.92. Now, this is where the magic comes. 
With the following changes, we make a straight jump to a private LB of 0.9557 and a public LB of 0.9703, which is the main part of our solution.</p> <ul> <li>Change every AdaptiveAvgPool2D layer to a GeM layer</li> <li>Change every AdaptiveConcatPool2D layer to a GeM layer</li> <li>Removed Dropout and Weight Decay (refer to <a href="https://arxiv.org/abs/1912.11370">this paper</a> )</li> <li>Used Group Normalization instead of Batch Normalization</li> <li>Added weight standardization to all vanilla Conv2D layers</li> </ul> <p>Code for the same:</p> Generalized Mean Pooling (GeM) - <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.parameter import Parameter


def gem(x, p=3, eps=1e-6):
    return F.avg_pool2d(x.clamp(min=eps).pow(p), (x.size(-2), x.size(-1))).pow(1. / p)


class GeM(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super(GeM, self).__init__()
        self.p = Parameter(torch.ones(1) * p)   # learnable pooling exponent
        self.eps = eps

    def forward(self, x):
        return gem(x, p=self.p, eps=self.eps)

    def __repr__(self):
        return self.__class__.__name__ + '(' + 'p=' + '{:.4f}'.format(self.p.data.tolist()[0]) + ', ' + 'eps=' + str(self.eps) + ')'
</code></pre> Conv2D with Weight Standardization - <pre><code>class Conv2d(nn.Conv2d):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True):
        super(Conv2d, self).__init__(in_channels, out_channels, kernel_size,
                                     stride, padding, dilation, groups, bias)

    def forward(self, x):
        # standardize the kernel weights before applying the convolution
        weight = self.weight
        weight_mean = weight.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        weight = weight - weight_mean
        std = weight.view(weight.size(0), -1).std(dim=1).view(-1, 1, 1, 1) + 1e-5
        weight = weight / std.expand_as(weight)
        return F.conv2d(x, weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
</code></pre> Code to convert the model - <pre><code>def convert_to_gem(model):
    # recursively replace every AdaptiveAvgPool2d with a GeM layer
    for child_name, child in model.named_children():
        if isinstance(child, nn.AdaptiveAvgPool2d):
            setattr(model, child_name, GeM())
        else:
            convert_to_gem(child)


def convert_to_conv2d(model):
    # recursively replace every vanilla Conv2d (except the heads) with the
    # weight-standardized Conv2d above, keeping the original hyperparameters
    for child_name, child in model.named_children():
        if child_name not in ['fc1', 'fc2']:
            if isinstance(child, nn.Conv2d):
                setattr(model, child_name, Conv2d(in_channels=child.in_channels,
                                                  out_channels=child.out_channels,
                                                  kernel_size=child.kernel_size,
                                                  stride=child.stride,
                                                  padding=child.padding,
                                                  dilation=child.dilation,
                                                  groups=child.groups))
            else:
                convert_to_conv2d(child)


def convert_to_groupnorm(model):
    # recursively replace every BatchNorm2d with GroupNorm (32 groups)
    for child_name, child in model.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(model, child_name, nn.GroupNorm(num_groups=32, num_channels=child.num_features))
        else:
            convert_to_groupnorm(child)
</code></pre> <p>If you have any ideas or suggestions, please let us know! We also have a feeling that using non-cropped images and some other augmentations would have given us even better results. Thanks for reading! </p> <p>EDIT - Here are the notebooks for this -</p> <p>Inference - <a href="https://www.kaggle.com/p4rallax/private-0-9557/">https://www.kaggle.com/p4rallax/private-0-9557/</a></p> <p>Training - <a href="https://www.kaggle.com/virajbagal/potential-6th-place-lb-magic-layers-pp">https://www.kaggle.com/virajbagal/potential-6th-place-lb-magic-layers-pp</a></p>
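<p>As a usage note for the conversion helpers above, a minimal sketch assuming those functions are in scope and that the backbone comes from the pretrainedmodels package named in the write-up:</p>
<pre><code>import pretrainedmodels
import torch.nn as nn

model = pretrainedmodels.se_resnext50_32x4d(pretrained="imagenet")
convert_to_gem(model)         # AdaptiveAvgPool2d -> GeM
convert_to_conv2d(model)      # vanilla Conv2d -> weight-standardized Conv2d
convert_to_groupnorm(model)   # BatchNorm2d -> GroupNorm with 32 groups

# sanity check: no BatchNorm2d layers should remain after conversion
assert not any(isinstance(m, nn.BatchNorm2d) for m in model.modules())
</code></pre>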
Bengali.AI Handwritten Grapheme Classification
Solution 38, some lessons learned
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Typo, fixed ;)</p> <p>Using the post processing from Chris Deotte's writeup we get private 0.9672 and public 0.9950. Post processing was definitely one way to go!</p> <p>First of all, let me thank my team mate Marios (Kazanova). Without him I'm not sure I would have a medal at all here. Second, I want to thank all those who shared so much. This was a goldmine for people with little experience in computer vision like me. I wish I had paid more attention on some of it, for instance discussion about unseen graphemes in test, or predicting graphemes rather than components as a secondary target. Last, but not least, many thanks to Bangali AI sponsor and Kaggle for providing a challenging image classification problems with rather small size dataset. Hardware (access to a number of recent GPU) wasn't the main deciding factor here. </p> <p>I am a bit disappointed to have dropped out of gold after being up to top 7th on public LB of course. At the same time, my first sub is only from 13 days before competition end. And I could team with Marios who had started earlier. Still, issue with little time is that you have to be right in selecting what to try and what you decide to no try. And given training computer vision models takes time, you easily lose a day or two on ideas that don't work. I won't enter computer visions less than 2 weeks before end again. First lesson learned.</p> <p>We didn't looked at which targets where hard to predict. In general, I didn't look at data enough. Second lesson.</p> <p>I mentioned some of what we didn't try unfortunately, here is what we tried, with more or less success.</p> <p>Before teaming I worked on 64x64 images, and a bit on 128x128 images with seresnext50. Small image sizes leads to high epoch throughput and allow for lots of experiments. I didn't work on image augmentation and focused on learning pytorch, understanding losses and image preprocessing. When I teamed, Marios had a efficientnet-b4 model with a good augmentation pipeline. He used cutout, grid mask, mixup, albumentation, and found that cutmix wasn't effective. He also used ohem loss, and cosine lr scheduler. We didn't revisit this except for ohem loss after we teamed.</p> <p>After teaming we explored ways to add diversity, and ways to combine as many models as possible in the scoring kernel. In hindsight it also helped overfit a bit more to the public test data. Our CV LB score gap was rather stable, which also explains why we did't explore more the possible public/private distribution discrepancy. Third lesson learned, pay more attention to this.</p> <p>We used an average of 14 efficientnet-b4 models predictions in the end. Our scoring kernel could handle up to 18 models within 2 hours. For each model we averaged weights of several checkpoints. 
Models differ by:</p> <ul> <li>input data: original size, or crop-resize with the same image ratio</li> <li>fold or full train data</li> <li>loss: weighted loss for R, V, C, or weighted loss per class within each of R, V, C</li> <li>OHEM or unmodified cross entropy loss</li> <li>use of external data or not</li> <li>postprocessing </li> </ul> <p>We looked at postprocessing the predictions in the last few hours before the end. It was clearly promising, and one sub did improve a bit over our best blend.</p> <p>We only looked at external data, BanglaLekha-Isolated, two days before the end of the competition. We predicted targets using our best blend, then took the majority class and checked the mapping by visual inspection between the BanglaLekha paper and the class_map csv. Our mapping was wrong in a few cases. Then we assigned the fixed label for the R target and the 0 target for V and C. CV and LB scores were not really better, but it added diversity to the blend. Here again, we had to crop-resize with the same image ratio.</p> <p>All in all, I learned an amazing amount in a short time. This is where Kaggle is great: you can get a glimpse of state-of-the-art practice in a very short time. Again, this is only because of sharing from the community members. I'm now looking forward to the next computer vision competition, to be honest!</p> <p>I hope I haven't missed too much, especially on what Marios did before we teamed. I'll let him expand/correct me if need be.</p> <p>My hunch is that models predicting the grapheme do not suffer from class imbalance as much as models predicting components do. This is from reading writeups, not from proper experiments, hence it may not be right.</p>
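<p>The checkpoint weight averaging mentioned above ("for each model we averaged weights of several checkpoints") can be done along these lines; this is a generic sketch with placeholder file names, not the team's code.</p>
<pre><code>import torch

def average_checkpoints(paths):
    """Average the state_dicts saved at `paths` (placeholder file names)."""
    states = [torch.load(p, map_location="cpu") for p in paths]
    avg = {}
    for key, value in states[0].items():
        if value.is_floating_point():
            avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
        else:
            avg[key] = value      # e.g. BatchNorm's num_batches_tracked: keep as-is
    return avg

# model.load_state_dict(average_checkpoints(["ckpt_e90.pth", "ckpt_e95.pth", "ckpt_e100.pth"]))
</code></pre>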
Bengali.AI Handwritten Grapheme Classification
[RECAP] 41st place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yeah, it helped, but not too much.</p> <p>Hi!</p> <p>1). Originally I used (2, 1, 1) as in official metric. I had had a low score for <code>grapheme_root</code> and I decided to increase its contribution to the loss value, so the network could be punished more for missclassifying <code>grapheme_root</code>. There was an <a href="/iafoss">@iafoss</a> kernel where he was using (0.7 0.1 0.2). This is how I picked (7, 1, 2)</p> <p>2). As for finetuning: without it my models score is in range (0.9820, 0.9825) CV. Finetuning added about 0.003 - 0.004, which yielded (0.9856 - 0.9867) CV range for single fold models. </p> <p>Regarding augmentations on finetuning stage you can do whatever you want, I tried some geometric ones, but it didn't work, so I decided to completely get rid of them.</p> <p>Hello everyone!</p> <h1>Story</h1> <p>I've joined this competition in the first days of January, with the purpose of learning computer vision from both theoretical and practical sides. I thought that it would be a good idea to finally learn how to write a dl pipeline using config files. I also wanted to dive deeper into Catalyst framework, so Catalyst Config API seemed to fit perfectly.</p> <p>For the first two weeks, I couldn't get a score higher than 0.955 whilst there were 0.96+ kernels. I'd thought that picking a dataset with cropped images is a good idea since I didn't want the model to memorize the position of the character to not overfit on public lb.</p> <p>Following pieces of advice on forum, I started training with mixup and cutmix. At that moment there was no Cutmix implementation in Catalyst and Mixup implementation didn't support multi-output. So, I adopted code from forum and wrote MixupCutmixCallback, which later resulted in PR with Cutmix implementation to Catalyst library. It was my first time contributing something to open-source deep learning project.</p> <p>Mixup + Cutmix had given some improvement and I finally crossed 0.955 border. The next step was to go beyond high scoring public kernels (the best one had a score of 0.966 at that time). I tried different architectures (primarily sticked to Efficient Net series), optimizers, schedulers, lrs, losses, augmentations, but 0.966 was still far from me. I got very close to it after I added finetuning on 224x224 images (the first stage was on 128x128 cropped images). </p> <p>On the very next day after I got close to 0.966 kernel a new 0.97 kernel came out, so I had to do the climb all over again and at that time I'd tried tweaking almost every possible thing in my pipeline. I <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/131358">also</a> felt like I was doing some basic thing very wrong, but couldn't identify which exactly. I thought the problem was in misunderstanding some concept in Catalyst, so I decided to rewrite my whole pipeline to Pytorch. This is definitely not something you want to do taking into account there were 3.5 weeks before the end of the competition. Before I started rewriting my pipeline to Pytorch, I decided to train on uncropped images with Catalyst. 
On the next day I'd finished my Pytorch pipeline and decided to look at the results of training with uncropped images: it was the first time my CV crossed 0.97, resulting in 0.968 LB. Of course, I decided to stay with Catalyst😄 . I took a deeper model and finally beat the 0.97 kernel. </p> <p>Again, I was out of ideas and again I went on the forum. I skimmed through the main topics and even went through all the posts written by top-scorers individually. I decided to implement the tricks from <a href="https://arxiv.org/abs/1912.11370">this</a> paper posted by <a href="/drhabib">@drhabib</a> , but none of them seem to work. </p> <p>I started reading other papers, which had been posted on the forum and decided to improve my augmentations. I couldn't fit the geometric ones and thought to tweak <code>alpha</code> in Mixup and Cutmix. I decided to go aggressive with Mixup and set <code>alpha = 4</code>, which I think is pretty reasonable as there are unseen graphemes in the test set.</p> <p>In the last week, I'd switched to original images, trained 5-fold B3 and added 3-epoch finetuning with no augmentations, which yielded 0.985. I had a pretty small CV/LB gap and decided to choose my highest scoring model.</p> <h1>Configuration</h1> <ul> <li>Efnet B3 with weights from Noisy Student </li> <li>One model with three heads</li> <li>opt_level O2</li> <li>137x236 images</li> <li>CrossEntropyLoss. I guess I could spend some more time tweaking OHEM, it should give a better result.</li> <li>Mixup (alpha = 4) + Cutmix (alpha = 1), each with probability of 0.5</li> <li>loss weights: 7, 1, 2 respectively</li> <li>100 epochs</li> <li>AdamW optimizer</li> <li>OneCycleSchduler</li> <li>finetuning 3 epochs with no augs also on 137x236</li> </ul> <p>You can find my complete config files in the link to my repo provided below.</p> <h1>Forum</h1> <p>Thanks, to the people who shared actively on the forum. Namely: <a href="/drhabib">@drhabib</a> , <a href="/pestipeti">@pestipeti</a> , <a href="/hengck23">@hengck23</a> , <a href="/machinelp">@machinelp</a> , <a href="/haqishen">@haqishen</a> . You are not just giving hints, but implicitly encourage others to participate in discussions and share as well, as a form of expressing gratitude for what we have learned from your posts and kernels (poor wording, but I hope you get the idea😅 ).</p> <p>The other important thing related to the forum is I think in this competition many people trapped themselves into hyper-parameters tweaking (myself included). Most posts for this comp were about architecture, number of epochs, augmentations, and even optimizers. I guess some of us can relate that they were jumping back and forth between parameter options after some high-ranked users posted some information about his/her pipeline. </p> <p>A very few people were discussing validation and strategies to deal with unseen graphemes. </p> <p>The challenge here is the more information we have the harder to find the truth. So, it is very crucial to focus on things that matter. </p> <h1>Takeaways</h1> <ul> <li>Lab book to track your experiments</li> <li>Cam activations to better understand what is going on with you model thx <a href="/cdeotte">@cdeotte</a> </li> <li>Better error-analysis in general</li> <li>Data preprocessing (resize, file extension, etc) which allow you to conduct more experiments</li> <li>Use forum wisely</li> <li>Don't be afraid to ask questions on forum. 
The worst question is the question that hasn't been asked.</li> </ul> <p>And congratulations to everyone who took part in this competition regardless of their score!</p> <p>You can find my code here:</p> <p><a href="https://github.com/LightnessOfBeing/kaggle-bengali-classification">https://github.com/LightnessOfBeing/kaggle-bengali-classification</a></p>
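<p>As a rough illustration of the weighted three-head loss listed in the configuration above (7 : 1 : 2 over the three outputs), here is a minimal PyTorch sketch. It is not the author's code; the head ordering and tensor shapes are assumptions.</p> <pre><code>import torch.nn as nn

# Minimal sketch: three CrossEntropy losses combined with 7:1:2 weights.
ce = nn.CrossEntropyLoss()
loss_weights = (7.0, 1.0, 2.0)  # grapheme root, vowel diacritic, consonant diacritic (assumed order)

def combined_loss(outputs, targets):
    # outputs: tuple of three logit tensors, targets: tuple of three label tensors
    losses = [ce(o, t) for o, t in zip(outputs, targets)]
    return sum(w * l for w, l in zip(loss_weights, losses))
</code></pre>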
Bengali.AI Speech Recognition
119 rank solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Kaggle for organising this event. My solution is nothing fancy because I joined late and couldn't give much time to it (sadly).</p> <p>Model:<br> I have used this notebook for my solution - <a href="https://www.kaggle.com/code/goelyash/bengali-ai-wave-2-vec-2" target="_blank">https://www.kaggle.com/code/goelyash/bengali-ai-wave-2-vec-2</a><br> I changed some parameters in that notebook and added the decoding parameters mentioned in this notebook --&gt; <a href="https://www.kaggle.com/code/dangnguyen97/lb-0-443-change-the-decoding-parameters-vote-up" target="_blank">https://www.kaggle.com/code/dangnguyen97/lb-0-443-change-the-decoding-parameters-vote-up</a></p> <p>I wish I could have played more with the parameters, because I learned that just by changing the decoding parameters we could have got good results (a rough sketch of what that looks like is below).</p> <p>Things that didn't work for me:<br> I tried to train the model from scratch but couldn't train for more than 40K steps, and the result from the 40K checkpoint was around 0.440.<br> I also tried to create my own 5-gram model but it didn't work that well for me (don't know why).</p> <p>Congratulations to those who got medals and a good ranking in this comp😁🥳</p>
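<p>For readers wondering what "changing the decoding parameters" means in practice: with a wav2vec2 CTC model decoded through <code>Wav2Vec2ProcessorWithLM</code>, the main knobs are the beam width and the shallow-fusion weights. A minimal sketch (the model path and the numeric values are illustrative, not the ones from the referenced notebooks):</p> <pre><code>from transformers import Wav2Vec2ProcessorWithLM

# processor directory must contain the KenLM + unigrams; path is a placeholder
processor = Wav2Vec2ProcessorWithLM.from_pretrained("path/to/wav2vec2-with-lm")

# logits: numpy array of shape (time, vocab) produced by the acoustic model
result = processor.decode(
    logits,
    beam_width=512,  # wider beam is slower but usually lowers WER
    alpha=0.35,      # language-model weight in shallow fusion
    beta=1.5,        # word-insertion bonus
)
print(result.text)
</code></pre>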
Bengali.AI Speech Recognition
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Of course. Also, sorry for misusing the term 'CV', it just a OOD validation set I hold out 🤔</p> <p>It is based on this <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/425932" target="_blank">post</a>. The author annotates the OOD examples, and after correcting some mistakes, I calculate my model OOD wer rate against these ground truths. The improvements in public LB is positively correlated with OOD wer rate. </p> <p>I interpret that as a sign that public LB will be correlated with private LB. Thus, there isn't much shake up/down if the margin is relatively large.</p> <p>It's just a hypothesis :)</p> <p>Hi, all the fellow Kagglers! </p> <p>I'd like to first give a massive gratitude to Kaggle, and Bengali AI. This competition is my first competition and I have learnt a lot in the process. Besides, I want to thank lots of fellow competitors for providing inspirational insights. Thanks to authors of reposLastly, thanks to my team mates, particularly <a href="https://www.kaggle.com/rkxuan" target="_blank">@rkxuan</a> for spotting the repo that forms the basis of the punctuation model. </p> <hr> <h2>Model Architecture:</h2> <ul> <li><strong>ASR model:</strong> Wav2vec2 CTC model</li> <li><strong>Ngram:</strong> Kenlm</li> <li><strong>Punctuation Model:</strong> xlm-roberta-large</li> </ul> <h2>What Worked:</h2> <ul> <li>Fine-tuning on <a href="https://www.kaggle.com/umongsain" target="_blank">@umongsain</a> filtered Common Voice dataset. Public LB: <code>0.439 -&gt; 0.428</code></li> <li>Fine-tuning on Openslr 37 resulted in: Public LB <code>0.428 -&gt; 0.414</code></li> <li>Incorporating Oscar corpus into ngram</li> <li>Data augmentation with <a href="https://github.com/asteroid-team/torch-audiomentations" target="_blank">audiomentations</a>.</li> <li>xlm-roberta-large configuration from this <a href="https://github.com/xashru/punctuation-restoration" target="_blank">repo</a>.</li> <li>Optuna search for decoding hyperparameters, as demonstrated in this <a href="https://www.kaggle.com/code/royalacecat/lb-0-442-the-best-decoding-parameters" target="_blank">notebook</a>.</li> </ul> <h2>Challenges:</h2> <h3>Datasets:</h3> <ul> <li><strong>Openslr 53:</strong> Voluminous and might've led to overfitting during training. It being crowdsourced could be a factor, especially when compared to the more refined Openslr 37.</li> <li><strong>Fleurs:</strong> Plagued with inconsistent quality. A consistent sharp noise mars the dataset.</li> <li><strong>Competition ds:</strong> Exhibits quality diversity.</li> </ul> <h3>Punctuation Restoration:</h3> <ul> <li>Difficulties in restoring five punctuations: <strong>!</strong>, <strong>,</strong>, <strong>?</strong>, <strong>।</strong>, and <strong>-</strong>.<ul> <li>Notably, hyphens appear to be tricky. Maybe isolating it for training could help.</li></ul></li> </ul> <h3>Audio Augmentation:</h3> <ul> <li>Might've overdone with BGMs. Modulating pitch could potentially be more effective.</li> </ul> <h3>Speech Enhancement:</h3> <ul> <li>Both FAIR Denoiser and Nvidia CleanUnet fell short of expectations. 
It's perplexing, but perhaps they inadvertently degraded human voice quality.</li> </ul> <h2>Pro-Tips:</h2> <ul> <li>Prefer ARPA over BIN. Kenlm seems to lose unigram post-conversion, but this trick can amplify the score by <code>0.007</code>. Mind the 13GB RAM constraint on Kaggle. I eventually settled with a trimmed 4gram, approximately 12GB.</li> </ul> <h2>Observations:</h2> <ul> <li>Local CV, grounded in annotated OOD examples, aligns well with the public LB. This might explain the negligible shake-up in the end.</li> <li>Oscar primarily features formal <strong>articles</strong>. So, the ngram, despite its magnitude, might overlook the niche vocabularies in OOD test sets, especially with the high OOV observed.</li> </ul> <hr> <p>Feel free to share your thoughts and experiences!</p> <p>I have also realized that my notebook only runs for approximately 6 hours, which means I may get some easy money by simply increasing <strong>beam width</strong> as long as it fits into the time limit! That may be missing element that makes me lost the gold medal 🙃</p> <p>Stay tune to my reflection on the competition! I will post it soon!</p> <p>Hahah, can't be too explicit about the tricks I use🤣. I think I ngram is more about data rather than the architecture, 'cause ngram is a statistical model. But, 40 giga! wow you are really stretching it to the limit!</p> <p>Also, rly thank you a lot for almost posting a comment in all of my posts🫡. We may be able to see each other soon!</p> <p>if I had remembered to do that, I may be placed 11th and get the gold medal🥲. Lots of sadness as I realize that. This is truly a simple method to boost up the rank at the cost of time… The beam_width in my final submission is 1024…</p> <p>Yes, let's say we have an audio with a length of 35 seconds:</p> <p>On the first forward pass of the model, we will have:</p> <ul> <li>The first 30 seconds of audio</li> <li>START 0.0 will be our tokens</li> </ul> <p>Then we have three possible cases based on the response of the model:</p> <ul> <li><p>First case: START 0.0 sentence…<br> In this case, the model has used all of the 448 tokens it can output and has not found an end to the sentence. For the second iteration, I will advance the audio 15 seconds, and the next initial sequence will be: START 0.0 half sentence.</p></li> <li><p>Second case: START 0.0 sentence 18.0<br> In this case, the model has found an end to the sentence, and I will just advance the audio based on that end timestamp. The next tokens will be PREV half sentence START 0.0.</p></li> <li><p>Third case: START 0.0 sentence 5.0 sentence…<br> In this case, we have found a timestamp, but the sequence is not finished, so I just advance the audio to the timestamp in the middle, 5 seconds, and continue from there. The next tokens will be PREV some part of the previous sentence START 0.0 half sentence (max context is 224 tokens).</p></li> </ul> <p>I run this in a loop until we have visited all 35 seconds of audio. I recommend running the inference code.</p> <h1>11th Solution</h1> <p>The solution consisted of a single whisper medium model using a beam decoder with a size of 4.</p> <h6>Training</h6> <ul> <li>Like other solutions, the most important step is to use cleaned data. 
I use following rules based on metadata shared by host:</li> </ul> <pre><code>cond0 = train_df[ ( (train_df.ykg_wer &lt; ) | (train_df.ggl_wer &lt; ) ) &amp; ( (train_df.total_wer_by_client_ykg &lt; ) | (train_df.total_wer_by_client_ggl &lt; ) ) &amp; (train_df.mos_pred &gt; ) ] </code></pre> <ul> <li><p>To speed up training, about 80% of the dataset is presented to the model in combinations of two audios, showing only single audios to prevent hallucinations. This also helps to reduce the impact of possible bad annotations.</p></li> <li><p>SpecAugment, SpecAugment++, CutOut were used.</p></li> </ul> <h6>Inference</h6> <ul> <li><p>For inference, the most important thing is to handle correctly audios longer than 30 secs and sentences longer than 448 tokens. This made an improvement on LB from 0.43 to 0.38.</p></li> <li><p>Inference code: <a href="https://www.kaggle.com/code/themadrambito/11th-place-whisper-inference" target="_blank">https://www.kaggle.com/code/themadrambito/11th-place-whisper-inference</a></p></li> </ul>
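<p>A minimal sketch of the "present the model with combinations of two audios" training trick described above. The field names and the 30-second cap (Whisper's input window) are assumptions, not the author's exact code.</p> <pre><code>import numpy as np

SR = 16000
MAX_SECONDS = 30  # Whisper consumes at most 30 s of audio per example

def combine_pair(sample_a, sample_b):
    """Concatenate two short samples into one longer training example."""
    audio = np.concatenate([sample_a["audio"], sample_b["audio"]])
    if len(audio) &gt; MAX_SECONDS * SR:
        return None  # skip pairs that would overflow the input window
    text = sample_a["sentence"] + " " + sample_b["sentence"]
    return {"audio": audio, "sentence": text}
</code></pre>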
Bengali.AI Speech Recognition
14th Place Solution for the Bengali.AI Speech Recognition Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and Kaggle staff for holding the competition, and congratulations to the winners!</p> <h1>Context</h1> <p>Business context: <a href="https://www.kaggle.com/competitions/bengaliai-speech/overview" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/bengaliai-speech/data" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/data</a></p> <h1>Overview of the Approach</h1> <p>My approach addressed two main challenges:</p> <p><strong>Challenge:</strong></p> <ol> <li>The need for robust speech recognition capable of handling diverse speakers.</li> <li>The requirement to restore punctuation in transcriptions.</li> </ol> <p><strong>Approach:</strong></p> <ol> <li>Fine-tuning the <code>indicwav2vec_v1_bengali</code> model using competition data.</li> <li>Leveraging the Punctuation Restoration tool from <a href="https://github.com/xashru/punctuation-restoration" target="_blank">https://github.com/xashru/punctuation-restoration</a>.</li> </ol> <p>This approach led to a Public Leaderboard score of 0.38.</p> <p>I began with the foundation provided by <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a>’s <a href="https://www.kaggle.com/code/ttahara/bengali-sr-public-wav2vec2-0-w-lm-baseline" target="_blank">notebook</a>.</p> <h1>Details of the submission</h1> <p><strong>Diverse Speaker Recognition:</strong><br> The test audio data comes mostly from YouTube, which means that the speakers' identities are often unknown. To create a versatile model, I trained it on diverse audio data.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8577629%2F04724b5e0fab1e4b550271153025dcd3%2Fdomain.png?generation=1697591004781091&amp;alt=media" alt=""><br> (dataset paper: <a href="https://arxiv.org/abs/2305.09688" target="_blank">https://arxiv.org/abs/2305.09688</a> C.1.1. Data Scraping Roadmap &amp; Prerequisites)</p> <p><strong>Punctuation Restoration:</strong><br> It's known that the labels are normalized, which implies that punctuation is preserved, as mentioned <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/432305#2400110" target="_blank">here</a>.<br> Predicting punctuation during transcription is difficult, and leaving it out would result in word errors.</p> <p>By restoring punctuation, Word Error Rate (WER) can be reduced:</p> <ul> <li>(label) hello. how are you?</li> <li>(predict) hello how are you<br> → wer: 0.5</li> <li>(restore) hello. how are you.<br> → wer: 0.25</li> </ul> <p>While the public notebook appends periods at the end of sentences, the test data has an average of 34.42 words per sample and a Macro Train/Validation set with averages of 8.42/9.21. 
This suggests that multiple sentences may exist in one audio file, making it necessary to restore punctuation at points other than sentence endings.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8577629%2F1aa92dc382aaefb5ee7797fdc42a5c94%2Fwps.png?generation=1697591051306745&amp;alt=media" alt=""><br> (dataset paper: <a href="https://arxiv.org/abs/2305.09688" target="_blank">https://arxiv.org/abs/2305.09688</a> Table 1: OOD-Speech Dataset Statistics)</p> <h2>Models</h2> <ol> <li>Wav2vec2CTC model<ul> <li>Fine-tuned Wav2vec2 <code>ai4bharat/indicwav2vec_v1_bengali</code></li></ul></li> <li>Language model<ul> <li>KenLM <code>arijitx/wav2vec2-xls-r-300m-bengali</code></li></ul></li> <li>Punctuation Restore model<ul> <li>XLM-RoBERTa-large from <a href="https://github.com/xashru/punctuation-restoration" target="_blank">https://github.com/xashru/punctuation-restoration</a></li></ul></li> </ol> <h2>Training</h2> <p><a href="https://github.com/Neilsaw/kaggle_Bengali.AI_ASR_16th_solution" target="_blank">Github</a></p> <p>Fine-tuning Wav2vec2CTC with Transformers involved choosing datasets based on Yellowking's WER, CER, and MOS_PRED metrics, as outlined in this <a href="https://www.kaggle.com/code/imtiazprio/listen-to-training-samples-data-quality-eda" target="_blank">notebook</a>. Two dataset splits were used (a pandas sketch of this filtering is given at the end of this post):</p> <ol> <li><strong>Easy Data:</strong> audio samples where inference was straightforward.<ul> <li>YKG WER &lt; 0.1</li></ul></li> <li><strong>Hard Data:</strong> audio samples where the character content was correct but the WER was high.<ul> <li>0.3 &lt; WER &lt; 1.5</li> <li>CER &lt; 0.15</li> <li>MOS_PRED &gt; 3</li></ul></li> </ol> <p>The first dataset helped the model adapt to a variety of voices, while the second dataset allowed it to handle audio with higher WER.</p> <p>Additionally, the inclusion of white noise during training led to a slight improvement in the Leaderboard score by 0.001.</p> <h2>Validate</h2> <p>I only used the LB for validation.<br> I couldn't rely on local cross-validation because the domain shift between the training data and the test data was too significant.</p> <h2>Inference</h2> <p><a href="https://www.kaggle.com/code/neilus/16th-solution/notebook" target="_blank">inference notebook</a></p> <p>Improvements over the public notebook:</p> <ul> <li><p>Punctuation restoration (-0.023)</p> <ul> <li>to use <a href="https://github.com/xashru/punctuation-restoration" target="_blank">xashru/punctuation-restoration</a>, we need to switch to transformers==2.11.0.</li> <li>so after LM inference, pip install transformers==2.11.0 and run punctuation-restoration from the command line so that the imported packages are reset.</li></ul></li> <li><p>using unigrams.txt for KenLM ( -0.005)</p></li> </ul> <pre><code># lm_dir and the file names are placeholders (unigrams.txt + KenLM arpa)
with open(lm_dir / "unigrams.txt", encoding="utf-8") as f:
    unigram_list = [t.strip() for t in f.read().strip().split("\n")]

decoder = pyctcdecode.build_ctcdecoder(
    list(sorted_vocab_dict.keys()),
    str(lm_dir / "language_model.arpa"),
    unigram_list,
)
</code></pre> <ul> <li>beam width 1500 (-0.002 ~ -0.001)</li> </ul> <h2>Results</h2> <table> <thead> <tr> <th></th> <th>Public</th> <th>Improvement</th> <th></th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>0.471</td> <td></td> <td></td> <td>0.564</td> </tr> <tr> <td>Training (Easy Data)</td> <td>0.425</td> <td>-0.046</td> <td></td> <td>0.508</td> </tr> <tr> <td>Beam width 1500</td> <td>0.423</td> <td>-0.002</td> <td></td> <td>0.506</td> </tr> <tr> <td>Punctuation restoration</td> <td>0.400</td> <td>-0.023</td> <td></td> <td>0.488</td> </tr> <tr> <td>Using unigrams.txt</td> 
<td>0.395</td> <td>-0.005</td> <td></td> <td>0.48</td> </tr> <tr> <td>Training (Hard Data)</td> <td>0.380</td> <td>-0.015</td> <td></td> <td>0.458</td> </tr> </tbody> </table> <h2>Things didn't work for me</h2> <ul> <li>NER (Named Entity Recognition)<ul> <li>For restore "-" to NER, but using NER for ASR output sentences occur a lot of False detection and LB down.</li></ul></li> <li>create own LM<ul> <li>since i only used MaCro Train data, may be too small sentence.</li></ul></li> <li>fine tuned punctuation model<ul> <li>same reason LM. only used MaCro Train data.</li></ul></li> <li>denoiser (<a href="https://github.com/facebookresearch/denoiser" target="_blank">https://github.com/facebookresearch/denoiser</a>)<ul> <li>LB and CV down. so I didn`t use.</li></ul></li> <li>add various noise while training<ul> <li>only White noise was work.</li></ul></li> </ul> <h1>Conclusion</h1> <p>At the beginning of the competition, I tried adding noise to adapt to different domains, but it didn't improve the LB. From this, I thought that there were other challenges to address besides noise.</p> <p>This competition challenged participants to achieve generalization and deal with label noise (punctuation). It required addressing the question of how much generalization is necessary and how to handle punctuation noise effectively. </p> <p>I am grateful for the opportunity to learn from this competition.</p> <h1>Source</h1> <ul> <li><a href="https://www.kaggle.com/code/ttahara/bengali-sr-public-wav2vec2-0-w-lm-baseline" target="_blank">https://www.kaggle.com/code/ttahara/bengali-sr-public-wav2vec2-0-w-lm-baseline</a></li> <li><a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/432305#2400110" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/discussion/432305#2400110</a></li> <li><a href="https://github.com/xashru/punctuation-restoration" target="_blank">https://github.com/xashru/punctuation-restoration</a></li> </ul> <p>Hasn't. using pre-trained model from github repo.</p> <p>I tried fine-tune with MacRo Train sentence.<br> but LB became worse (0.400 -&gt; 0.405) so I gave up.</p> <p>Thank you!👍</p>
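<p>A minimal pandas sketch of the Easy/Hard split described in the training section above. The thresholds come from the write-up, but the metadata file path and column names (<code>ykg_wer</code>, <code>cer</code>, <code>mos_pred</code>) are assumptions.</p> <pre><code>import pandas as pd

meta = pd.read_csv("train_metadata.csv")  # hypothetical path to the shared metadata

# Easy data: samples where inference was straightforward
easy = meta[meta["ykg_wer"] &lt; 0.1]

# Hard data: character content mostly correct but WER high
hard = meta[
    (meta["ykg_wer"] &gt; 0.3) &amp; (meta["ykg_wer"] &lt; 1.5)
    &amp; (meta["cer"] &lt; 0.15)
    &amp; (meta["mos_pred"] &gt; 3)
]
</code></pre>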
Bengali.AI Speech Recognition
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>you can use any VAD library, but i have used webrtcvad.</p> <p>STT Model:</p> <ul> <li>OpenAI whisper-medium</li> <li>Huggingface trainer</li> <li>Trained on 8x 48GB RTX A6000</li> <li>bs=8 and lr=1e-5</li> <li>Train steps 50k</li> <li>Spectrogram dithering</li> <li>Spectrogram time and frequency masking</li> <li>Resampling 16khz-&gt;8khz-&gt;16khz as augmentation</li> <li>Inference with max_length=260, num_beams=4 and chunk_length_s=20.1s</li> <li>Libsonic based speed/pitch augmentation</li> <li>Datasets: OpenSLR 37, OpenSLR 53, MadASR, Shrutilipi, Macro, Kathbath, GoogleTTS generated audios and pseudo labeled YouTube videos</li> </ul> <p>Punctuation Model:</p> <ul> <li>AutoModelForTokenClassification google/muril-base-cased</li> <li>Huggingface trainer</li> <li>Labels: period, comma and question mark</li> <li>bs=64, lr=2e-4 and max_seq_length=512</li> <li>Ensemble of 4 models (using 6, 8, 11 and 12 layers of google/muril-base-cased)</li> <li>Normalized IndicCorp v2 Bangla dataset</li> </ul> <p>In my daily job, I do speech speech recognition for low resource central Asian languages. From my experience, OpenAI Whisper works really well for OOD audios and can even transribe song lyrics. The downside is it is very sensitive to the annotation noise. So fixing the annotation noise, is the most crucial part of this competition.</p> <p>Because the competition dataset was not validated, the initial model was trained on OpenSLR datasets. We normalized the texts and filtered out texts containing Bengali digits. All punctuation was also removed. Additionally, we sampled 420k texts from the IndicCorp and synthesized audios using GoogleTTS, which were then used as training datasets.</p> <p>Following the training of an initial Whisper-medium model on OpenSLR and GoogleTTS, we conducted inference on MadASR, Shrutilipi, Macro, and Kathbath. We included audios with a WER of less than 15% in the next training phase. After three rounds of training, the model achieved an 8% WER on the Macro validation dataset and a public leaderboard score of approximately 0.380.</p> <p>Since most of the training set audios were short, we merged some short audios to create around 70k longer audios. Subsequently, we achieved a public leaderboard score of approximately 0.370.</p> <p>Whisper with the original tokenizer was slow on Bengali audios. Therefore, we trained a Whisper tokenizer with a 12k vocabulary on Bengali texts. With this tokenizer, we were able to perform inference with a num_beam value of up to 8 and a chunk_length_s of 20.1 seconds in less than 7 hours.</p> <p>In the next step, we applied pseudo labeling to some YouTube videos, which enabled us to achieve a public leaderboard score of approximately 0.360. 
When we combined the predictions of four punctuation models, our public leaderboard score improved to around 0.325.</p> <p>By adding more pseudo-labeled YouTube videos, our public leaderboard score further improved to 0.312 (private LB 0.372)</p> <p>Model weights and inference notebook: <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/447970" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/discussion/447970</a><br> cleaned/long/pseudo data: <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/448110" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/discussion/448110</a></p> <p>I have updated the solution summary; the important part is that you have to remove wrong annotations from the dataset.</p> <p>We don't do any preprocessing except converting to 16kHz. During training, audios are read from disk and random speed/pitch augmentations are applied before computing features.</p> <p>Browse YouTube and identify Bangla channels like news, cartoons, poetry etc. Then download all videos from the channels and do pseudo labeling. Nothing fancy. I have released the data here: <a href="https://www.kaggle.com/datasets/tugstugi/bengali-asr-data" target="_blank">https://www.kaggle.com/datasets/tugstugi/bengali-asr-data</a> The audio file name starts with the channel name + YouTube id.</p> <p>Long YouTube audios are split using VAD. Reject segments that are too short or too long and, say, use only segments between 5 and 22 seconds (a rough sketch is below).</p>
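<p>A minimal sketch of the VAD-based splitting described above, using <code>webrtcvad</code> as mentioned earlier in the thread. The frame size, aggressiveness and the naive grouping (no merging across short pauses) are assumptions, not the author's pipeline.</p> <pre><code>import numpy as np
import webrtcvad

SR = 16000          # webrtcvad supports 8/16/32/48 kHz, 16-bit mono PCM
FRAME_MS = 30       # frames must be 10, 20 or 30 ms long
FRAME_LEN = SR * FRAME_MS // 1000

def split_with_vad(pcm16: np.ndarray, aggressiveness: int = 3):
    """Return speech segments between 5 and 22 seconds (int16 samples in, segments out)."""
    vad = webrtcvad.Vad(aggressiveness)
    n_frames = len(pcm16) // FRAME_LEN
    flags = [vad.is_speech(pcm16[i * FRAME_LEN:(i + 1) * FRAME_LEN].tobytes(), SR)
             for i in range(n_frames)]

    segments, start = [], None
    for i, is_speech in enumerate(flags + [False]):  # sentinel flushes the last run
        if is_speech and start is None:
            start = i
        elif not is_speech and start is not None:
            seg = pcm16[start * FRAME_LEN:i * FRAME_LEN]
            if 5 * SR &lt;= len(seg) &lt;= 22 * SR:
                segments.append(seg)
            start = None
    return segments
</code></pre>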
Bengali.AI Speech Recognition
20th Rank Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our final solution is a single Wave2Vec model along with an N-Gram model.</p> <h1>Step 1:</h1> <p>Trained a model for 5 epochs on </p> <ol> <li>OpenSLR37</li> <li>OpenSLR53</li> <li>Fleurs</li> <li>Common voice</li> </ol> <p>Validation:<br> Kaggle Validation data</p> <h1>Step 2:</h1> <p>Estimate the WER for the Kaggle training data with the model from Step 1.<br> Remove samples which have WER greater than 0.5.<br> The resulting cleaned version of the training data had 50-60% of the overall training data.</p> <h1>Step 3:</h1> <p>Trained a model for 20 epochs on </p> <ol> <li>OpenSLR37</li> <li>OpenSLR53</li> <li>Fleurs</li> <li>Common voice</li> <li>Cleaned version of Training data</li> </ol> <h1>Step 4:</h1> <p>Build an N-Gram model with the Common Voice dataset (see the sketch at the end of this post).</p> <p>Our best Wave2Vec model scored 0.407 on the LB.<br> Our best Wave2Vec model + N-Gram model scored 0.396 on the public LB.</p> <p><strong>Things that worked:</strong></p> <ol> <li>Caching of the datasets reduced the overall training time. But it didn't allow us to add any augmentations on the fly.</li> <li>Having more warmup steps ensured that the model didn't throw NaNs. We used a warmup of 0.25.</li> <li>MSD as the final layer instead of a Linear layer</li> </ol> <p><strong>Things that didn't work:</strong></p> <ol> <li>We tried adding a few more finetuning steps, but mostly the model started to overfit.</li> </ol> <p>We will add a few more points about our pipeline in the next few hours.</p>
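<p>For Step 4, building the N-gram usually comes down to writing the normalized sentences to a text file (one per line) and running KenLM's <code>lmplz</code>. A minimal sketch (the paths, binary location and n-gram order are illustrative; the write-up does not state which order was used):</p> <pre><code>import subprocess

# corpus.txt: one normalized Bengali sentence per line (e.g. from Common Voice)
with open("corpus.txt") as src, open("5gram.arpa", "w") as out:
    subprocess.run(
        ["kenlm/build/bin/lmplz", "-o", "5"],  # lmplz reads stdin, writes ARPA to stdout
        stdin=src, stdout=out, check=True,
    )
</code></pre>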
Bengali.AI Speech Recognition
23nd Solution - My first competition medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to express my gratitude to the organizers of this competition. As a relative newcomer to Automatic Speech Recognition (ASR), I'm thrilled to have secured a medal in this event. I am immensely satisfied with the outcome, and I extend my thanks to both the competition participants and its hosts.</p> <p><strong>The solution consists of 2 components:</strong></p> <ol> <li>ASR model</li> <li>Ngram KenLM model</li> </ol> <p><strong>1. ASR model</strong><br> Initially, I experimented with numerous Hugging Face (HF) models, training them on approximately 20,000 data samples. The models I tested included:</p> <ul> <li>LegolasTheElf/Wav2Vec2_XLSR_Bengali_1b</li> <li>arijitx/wav2vec2-large-xlsr-bengali</li> <li>arijitx/wav2vec2-xls-r-300m-bengali</li> <li>tanmoyio/wav2vec2-large-xlsr-bengali</li> <li>ai4bharat/indicwav2vec_v1_bengali</li> <li>Umong/wav2vec2-large-mms-1b-bengali</li> <li>kabir5297/Wav2Vec2-90k-Bengali</li> <li>tanmoyio/wav2vec2-large-xlsr-bengali</li> <li>bayartsogt/bengali-2023-0016</li> <li>shahruk10/wav2vec2-xls-r-300m-bengali-commonvoice</li> <li>jonatasgrosman/wav2vec2-large-xlsr-53-english</li> <li>wav2vec2-xls-r-2b - training from scratch</li> <li>wav2vec2-xls-r-1b - training from scratch</li> </ul> <p>Eventually, I settled on the "shahruk10/wav2vec2-xls-r-300m-bengali-commonvoice" model. You can find all the fine-tuned models in my HF repository <a href="url" target="_blank">https://huggingface.co/Aspik101</a><br> The final fine-tuned model can be accessed: <a href="url" target="_blank">https://huggingface.co/Aspik101/shahruk10_checkpoint-360_2444</a></p> <p>The final model underwent a two-step training process. In the first step, the model was trained on filtered data, where the Word Error Rate (WER) was less than 80, sourced from the yellowking model, MADASR dataset, and google/fleurs. This training was executed based on a streaming mode and took approximately 20 hours.</p> <pre><code>trainer = Trainer( model=model, data_collator=daata_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=IterableWrapper(train), eval_dataset=IterableWrapper(val), tokenizer=processor.feature_extractor) </code></pre> <p>Loading data via streaming on Hugging Face can be useful when processing large data sets or when you want to feed data to a model while it is running rather than loading the entire data set into memory at once.</p> <p>At this stage, the best result I managed to get was around 0.424 on the leaderboard</p> <p>In the next step, the model was trained using data from the competition but from the <strong>examples folder!</strong> Thanks to this, I managed to get results of <strong>0.404.</strong> with KenLM model.</p> <p><strong>2. 
Ngram KenLM model</strong><br> A 5-gram KenLM model trained on the competition dataset plus the external Bengali corpus IndicCorp V1+V2.</p> <p><strong>What didn't work</strong></p> <ul> <li>Punctuation model <a href="url" target="_blank">https://huggingface.co/1-800-BAD-CODE/xlm-roberta_punctuation_fullstop_truecase</a> On the test data set, my calculations showed that I should gain about 0.02 WER; unfortunately, it did not work on the final data set.</li> <li>Attempting to use a large language model to correct typos, such as Qwen 7b, from <a href="url" target="_blank">https://huggingface.co/Qwen/Qwen-7B-Chat</a> showed potential but did not yield successful results in my case.</li> <li>I experimented with the "indicparser" as follows:</li> </ul> <pre><code>from indicparser import graphemeParser
gp = graphemeParser()
</code></pre> <ul> <li>Exploring the stacking of logits from various wav2vec models.</li> <li>Generating transcriptions using the Conformer model available at <a href="url" target="_blank">https://huggingface.co/bengaliAI/BanglaConformer</a>, while considering only punctuation and combining it with the final wav2vec model.</li> <li>And many other ideas that I have already forgotten.</li> </ul> <p>In conclusion, I want to reiterate my appreciation to the competition organizers and participants. This journey has been an incredible learning experience for me, and I'm excited to continue exploring the world of ASR.</p>
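<p>The streaming-mode training mentioned earlier relies on the <code>datasets</code> streaming loader, which pulls samples lazily instead of downloading the whole corpus first. A minimal sketch (the dataset id/config are illustrative, and authentication may be required for Common Voice):</p> <pre><code>from datasets import load_dataset
from itertools import islice

stream = load_dataset(
    "mozilla-foundation/common_voice_13_0", "bn",
    split="train", streaming=True,
)
stream = stream.shuffle(seed=42, buffer_size=1000)  # shuffle within a rolling buffer

for sample in islice(stream, 3):  # peek at a few examples
    print(sample["sentence"])
</code></pre>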
Bengali.AI Speech Recognition
24th Place Solution for the Bengali.AI Speech Recognition Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, here is our main inference code:</p> <pre><code>with torch.no_grad():
    for batch in tqdm(test_loader):
        x = batch["input_values"]          # batch key name may differ
        x = x.to(device, non_blocking=True)
        with torch.cuda.amp.autocast():
            # w1/w2/w3 are placeholder ensemble weights (actual values not shown)
            y = model_1(x).logits*w1 + model_2(x).logits*w2 + model_3(x).logits*w3
        y = y.detach().cpu().numpy()
        for l in y:
            sentence = processor_with_lm.decode(l, **best_params).text
            pred_sentence_list.append(sentence)
</code></pre> <p>However, this is the wrong way to do the ensemble, because the logits of the wav2vec2 models are not aligned. There are other ways to do this, like (<a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/448074#2487921" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/discussion/448074#2487921</a>) this solution.</p> <h2>First of all, we want to thank the organizers who held this wonderful competition. This is my first Kaggle competition, and I feel satisfied with the final outcome. Thanks to my teammate <a href="https://www.kaggle.com/nisshokuitsuki" target="_blank">@nisshokuitsuki</a> , who worked with me during this three-month journey. Thanks to everyone who contributed to the discussions and the notebooks; your work gave us a lot of inspiration. </h2> <h1>Model</h1> <p>The model we used is</p> <pre><code> </code></pre> <p>We finetuned it further on this model <a href="https://www.kaggle.com/datasets/nischaydnk/bengali-wav2vec2-finetuned" target="_blank">bengali_wav2vec2_finetuned (kaggle.com)</a><br> Since the data quality of the competition dataset is low, we used the train split of <a href="https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized?select=train.tsv" target="_blank">Common Voice 13 | Bengali (Normalized) (kaggle.com)</a> to train. We trained for about 10 epochs on the 20000 samples in the dataset, and split off about 700 samples for validation. <br> The data here is normalized and has punctuation removed.<br> Here are the training arguments:</p> <pre><code>training_args = TrainingArguments(
    group_by_length=...,      # argument values not shown here
    weight_decay=...,
    num_train_epochs=...,
    fp16=...,
    learning_rate=...,
    warmup_steps=...,
)
</code></pre> <p>And we used a cosine learning-rate schedule.</p> <p>The best model got a local WER of 0.15 and improved the public score from <em>0.445-&gt;0.434</em></p> <p>However, my teammate also trained a model with the same data for just 60 steps (different args), and got the same score (even 0.001 better on the private LB). How interesting and confusing.</p> <h1>Language Model</h1> <p>We trained a 6-gram with <a href="https://www.kaggle.com/datasets/mbmmurad/lm-no-punc" target="_blank">lm_no_punc (kaggle.com)</a>. <br> Note that there is an error in the LM provided in <a href="https://www.kaggle.com/datasets/sameen53/yellowking-dlsprint-model" target="_blank">YellowKing_DLSprint_Model (kaggle.com)</a>. </p> <pre><code>There is no terminator &lt;/s&gt; in the arpa file, so we need to add &lt;/s&gt; ourselves.
</code></pre> <p>Thanks to this notebook <a href="https://www.kaggle.com/code/umongsain/build-an-n-gram-with-kenlm-macro" target="_blank">Build an n-gram with KenLM | MaCro | Kaggle</a>, we were able to fix this. <br> Adding the LM improved the LB from <em>0.445-&gt;0.422</em><br> And building a 6-gram with <a href="https://www.kaggle.com/datasets/mbmmurad/lm-no-punc" target="_blank">lm_no_punc (kaggle.com)</a> improved it by about <em>0.001</em></p> <h1>Punctuation Restoration</h1> <p>Punctuation really matters. Thanks to this post <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/432305" target="_blank">Bengali.AI Speech Recognition | Kaggle</a>, we were able to realize this. And a response under this post showed us a way to restore the punctuation:<br> <a href="https://github.com/xashru/punctuation-restoration" target="_blank">xashru/punctuation-restoration: Punctuation Restoration using Transformer Models for High-and Low-Resource Languages (github.com)</a> First we trained a model with the dataset provided in this repository. It can restore 3 punctuation marks: </p> <pre><code>{: , : , : } </code></pre> <p>This improved the LB from <em>0.422-&gt;0.400</em><br> Combining with the finetuned model, we have <em>0.400-&gt;0.397</em><br> Afterwards, we thought that 3 punctuation marks might not be enough. So we made a dataset with <a href="https://huggingface.co/datasets/oscar" target="_blank">oscar · Datasets at Hugging Face</a>, filtering for data that contain only Bengali words. We chose 7<br> punctuation marks: </p> <pre><code>{: , : , : , : , : , : , : } </code></pre> <p>We trained for 6 epochs with the default params.<br> And we have <em>0.393-&gt;0.387</em></p> <h1>Model ensemble</h1> <p>We simply ensembled our models like this: </p> <pre><code> y = model_1(x).logits*w1 + model_2(x).logits*w2 + model_3(x).logits*w3  # weights not shown
</code></pre> <p>Wait, this works???<br> Yes, though the predictions may not be aligned, since the three models are trained on the same datasets the misalignment problem is partially mitigated. <br> This improved our performance by about <em>0.001</em></p> <h1>Decoder params selection</h1> <p>There are three main params for the decoder:</p> <pre><code>alpha: weight of the language model during shallow fusion
beta: weight of the length score adjustment during scoring
beam_width: how many candidate output sequences are retained at each step
</code></pre> <p>To find the best params, we used Optuna <a href="https://www.kaggle.com/code/snnclsr/0-444-optimize-decoding-parameters-with-optuna" target="_blank">[0.444] Optimize Decoding Parameters with Optuna | Kaggle</a> to search for them (a sketch of such a search is given at the end of this post). We searched for the params using the example data in the dataset, which are OOD data; this brought us a better LB score.<br> The final decoder params are:</p> <pre><code>{'alpha': , 'beta': , 'beam_width': } </code></pre> <h1>What didn't work for us</h1> <ul> <li>Data augmentation. We added background noise downloaded from <a href="https://pixabay.com/sound-effects/search/noise/" target="_blank">https://pixabay.com/sound-effects/search/noise/</a> and also pitch shift, time stretch, etc. But the LB got worse (<em>0.397-&gt;0.415</em>). Every data augmentation experiment takes too much time and I felt too tired to do more experiments. Maybe I could write some code to do it automatically.</li> <li>Denoise models. We tried three denoising models: <br> UVR: the notebook ran out of time<br> <a href="https://github.com/facebookresearch/denoiser" target="_blank">facebookresearch/denoiser</a>: decreased the score by about 0.01<br> <a href="https://github.com/NVIDIA/CleanUNet/blob/main/exp/DNS-large-high/checkpoint/pretrained.pkl" target="_blank">CleanUNet</a>: decreased the score by about 0.005<br> We thought that denoising harms the features and makes some short syllables unrecognizable.</li> <li>Training a bigger LM with more data. We used 15GB of normalized Bengali data to build a KenLM, but the score got worse. We still haven't found the cause of the problem. </li> </ul>
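<p>A minimal sketch of the Optuna search over the decoder params mentioned above. The search ranges and the <code>decode_validation</code> helper (which should decode the held-out OOD examples with the given params and return the WER) are illustrative assumptions.</p> <pre><code>import optuna

def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 2.0)
    beta = trial.suggest_float("beta", 0.0, 4.0)
    beam_width = trial.suggest_int("beam_width", 64, 1024, step=64)
    # decode_validation: hypothetical helper returning WER on the OOD examples
    return decode_validation(alpha=alpha, beta=beta, beam_width=beam_width)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
</code></pre>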
Bengali.AI Speech Recognition
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you very much and congrats on your team's strong finish too!</p> <p>That's a good question. I gave dot and hyphen for ASR model to learn because they are the most problematic punctuations in the annotations. Since their absence break a word into subwords, with WER metric the score detoriates signifincantly compared to punctuations only attached to one word. And looking at validation result, the model was actually able to learn some of the patterns. But you need to combine training with external data that have dot annotations (ex: MADASR), since competition data have a lot of hyphens but very few abbreviation dots.</p> <p>Many thanks to Kaggle and Bengali.AI for hosting such an interesting competition. As I was about to start my PhD research on speech processing and I knew nothing about the field, the competition came as the perfect opportunity for me to learn. In the end it was a highly rewarding experience.</p> <p>The solution consists of 3 components:</p> <ul> <li>ASR model</li> <li>Language model</li> <li>Punctuation model</li> </ul> <p><strong>1. ASR model</strong></p> <p>I used <code>ai4bharat/indicwav2vec_v1_bengali</code> as pretrained model.</p> <p><strong>Datasets:</strong></p> <ul> <li>Speech data: Competition data, Shrutilipi, MADASR, ULCA (for ULCA data most of the links are dead but some are still downloadable, only a few thousands samples though)</li> <li>Noise data: music data from MUSAN and noise data from DNS Challenge 2020</li> <li>All data are normalized and punctuation marks are removed except for dot (<code>.</code>) and hyphen (<code>-</code>)</li> </ul> <p><strong>Augmentation:</strong></p> <ul> <li>use augmentation from <code>audiomentations</code>, apply heavy augmentation for read speech (comp data, MADASR) and lighter augmentation for spontaneous speech (Shrutilipi, ULCA)<br> Example of read speech augmentation:</li> </ul> <pre><code>augments = Compose([ TimeStretch(=0.8, =2.0, =0.5, =), RoomSimulator(=0.3), OneOf([ AddBackgroundNoise( sounds_path=[ , ], =5.0, =30.0, =PolarityInversion(), =1.0 ), AddBackgroundNoise( sounds_path=[ ], =5.0, =30.0, =PolarityInversion(), =1.0 ), AddGaussianNoise(=0.005, =0.015, =1.0), ], =0.7), Gain(=-6, =6, =0.2), ]) </code></pre> <p>For spontaneous speech augment probabilities are smaller and <code>TimeStretch</code> rate much less extreme.</p> <ul> <li>concat augment: randomly concatenate short samples together to make length distribution of training set closer to OOD test set.</li> <li>SpecAugment: mask_time_prob = 0.1, mask_feature_prob = 0.05.</li> </ul> <p><strong>Training:</strong></p> <ul> <li>First fit on all training data, then remove about 10% with highest WER after fitted from train set.</li> <li>Don't freeze feature encoder.</li> <li>Use cosine schedule with warmups and restarts: 1st cycle 5 epochs peak lr 4e-5, 2nd cycle 3 epochs peak lr 3e-5, third cycle 3 epochs peak lr 2e-5.</li> </ul> <p><strong>Inference:</strong><br> Use <code>AutomaticSpeechRecognitionPipeline</code> from <code>transformers</code> to apply inference with chunking and stride:</p> <pre><code> = pipe(w, chunk_length_s=, stride_length_s=(, ))[] 
</code></pre> <p><strong>2. Language model</strong></p> <p>6-gram kenlm model trained on multiple external Bengali corpus:</p> <ul> <li>IndicCorp V1+V2.</li> <li>Bharat Parallel Corpus Collection.</li> <li>Samanantar.</li> <li><a href="https://www.kaggle.com/datasets/truthr/free-bengali-poetry" target="_blank">Bengali poetry dataset</a>.</li> <li><a href="https://data.statmt.org/news-crawl/" target="_blank">WMT News Crawl</a>.</li> <li>Hate speech corpus from <a href="https://github.com/rezacsedu/Classification_Benchmarks_Benglai_NLP" target="_blank">https://github.com/rezacsedu/Classification_Benchmarks_Benglai_NLP</a>.</li> </ul> <p><strong>3. Punctuation model</strong></p> <p>Train token classification model to add the following punctuation set: <code>।,?!</code></p> <ul> <li>use <code>ai4bharat/IndicBERTv2-MLM-Sam-TLM</code> as backbone</li> <li>add LSTM head</li> <li>train for 6 epochs, cosine schedule, lr 3e-5 on competition data + subset of IndicCorp</li> <li>mask 15% of the tokens during training as augmentation</li> <li>ensemble 3 folds of model trained on 3 different subsets of IndicCorp</li> <li>beam search decoding for inference.</li> </ul> <p>Thank you very much for reading and please let me know if you have any questions.</p> <p>Update: </p> <ul> <li>training code: <a href="https://github.com/quangdao206/Kaggle_Bengali_Speech_Recognition_2nd_Place_Solution" target="_blank">https://github.com/quangdao206/Kaggle_Bengali_Speech_Recognition_2nd_Place_Solution</a></li> <li>inference notebook: <a href="https://www.kaggle.com/code/qdv206/2nd-place-bengali-speech-infer/" target="_blank">https://www.kaggle.com/code/qdv206/2nd-place-bengali-speech-infer/</a></li> </ul> <p>Thanks a lot bro, huge congrats to you too!</p> <p>Thanks a lot <a href="https://www.kaggle.com/ahmedelfazouan" target="_blank">@ahmedelfazouan</a> . You are well on your way too! Soon the old Icecube team will be full gold 😆</p> <p>I used V100s to train my models. For ASR normally I only sampled 30% of the data for experimentations, but the final full train took ~5days. Training punctuation models are relatively quick though, only a few hours.</p> <p>Thanks <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> , congrats on breaking into global top10, what an impressive run!</p> <p>Sorry I couldn't provide you with exact numbers as I fixed my augmentations about a month ago. I will instead try to talk about my reasonings behind these augs. When I spent some time listening to different samples of read speech (train data) and spontaneous speech (test data), I noticed that spontaneous speech generally: </p> <ul> <li>is spoken with a much more diverse range of speed (can be much faster)</li> <li>contains various background noises</li> </ul> <p>So logically augmentations that simulate these situations should improve results. Then external data were added, some of which were spontaneous speech, so the same logic doesn't apply and naturally a separate scheme must be made for them.</p> <p>What I want to say is that most of the augmentations are there because logically it makes sense for them to be there and not the results of some grid search experiments. I try to take the same approach in CV comps too. Hopefully that can be interesting to you.</p>
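<p>For readers unfamiliar with the chunked inference mentioned above: the <code>transformers</code> ASR pipeline splits long audio into overlapping windows and stitches the CTC outputs back together. A minimal sketch (the model id is the pretrained checkpoint named in the post, and the chunk/stride values here are illustrative, not the exact ones used):</p> <pre><code>from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="ai4bharat/indicwav2vec_v1_bengali",
    device=0,
)

# w: 1-D float waveform sampled at 16 kHz
text = pipe(w, chunk_length_s=20, stride_length_s=(5, 5))["text"]
</code></pre>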
Bengali.AI Speech Recognition
33rd place silver medal solution - My first competition medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot! Best of luck to you too!</p> <p>True. I think the main reason behind it was the amount of data the top models were trained on. Almost all of them were trained on at least half a million audios, so the models are quite generalised and robust. </p> <p>Thanks for your kind words! Glad I could help.</p> <p>First, we want to thank Bengali.ai and Kaggle for hosting such an excellent competition on Bengali ASR and releasing a large-scale dataset for this domain. This competition was challenging, to say the least. But I enjoyed the last three months working on it ngl. It was a pretty good learning curve for me. I got the opportunity to contribute to the community and also engage with the community through this competition. I got 1 Gold, 2 silver and 8 bronze medals for my notebooks in this competition. So overall, a very busy and happy three months!</p> <h2>Summary :</h2> <p><strong>Acoustic Model</strong>: We fine-tuned the ai4bharat/indicwav2vec_v1_bengali model<br> <strong>Dataset</strong> : Competition Data + Openslr53+openslr37+ TTS dataset<br> In the first phase, we fine-tuned the model with the whole competition data. which was making things worse, dragging the LB performance below the best public NB ( 0.445)<br> Then we filtered out the bad-quality audio from the competition data using the training metadata provided by the host ( We took the audios with MOS&gt;=2). <br> We did a random split with both the train and validation data(since it was evident MaCro validation audio quality was better than train audio) and trained with 95% of the audios. Then added the other datasets. This improved the LB score to 0.430<br> Augmentation : reverberation, Speed perturbation,Volume perturbation (0.125x ~ 2.0x)<br> Adding background noise from the example audios(Improved performance slightly)</p> <p><strong>Language Model</strong>: We built a 5-gram LM using competition sentences + <a href="https://github.com/csebuetnlp/banglanmt" target="_blank">banglanmt</a> + IndicCorp_V2</p> <h2>What did not work:</h2> <ol> <li><strong>Punctuation model</strong> : I tried to add the <a href="https://github.com/xashru/punctuation-restoration" target="_blank">xashru-punctuation-restoration</a> model but it always gave CUDA OOM. Now it's bugging me to see others implemented it and got a huge boost with it. I also tried to fine-tune T5 model for this task but didn't see much improvement. So I gave up the idea.</li> <li><strong>Speech enhancement</strong> : </li> </ol> <ul> <li><a href="https://github.com/Rikorose/DeepFilterNet" target="_blank">DeepFIlterNet</a></li> <li><a href="https://huggingface.co/speechbrain/sepformer-wham-enhancement" target="_blank">Speechbrain wham</a></li> </ul> <p>I spent a significant amount of time on them. First I tried to add a speech enhancer model to my inference pipeline. It didn't improve the performance. Then I tried to train with enhanced audio, but it didn't help either. The noise cancellation performance by these models was quite good, but it also affected the pitch and distorted the speech a little bit. 
</p> <p><strong>What could have been done to improve further/ Next steps</strong> : </p> <ol> <li>Adding the punctuation model.</li> <li>Train the acoustic model with more data (MADASR,Shrutilipi)</li> <li>Add more texts to the LM</li> </ol> <p>Thanks a lot! I'm glad my NB was helpful to you. Congratulations to you too!</p> <p>Thanks! Congratulations to you too!</p>
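<p>The "adding background noise from the example audios" augmentation mentioned above can be done by mixing a noise clip into the clean clip at a target signal-to-noise ratio. A minimal numpy sketch (not the author's code; the SNR value is illustrative):</p> <pre><code>import numpy as np

def add_background_noise(clean, noise, snr_db=10.0):
    """Mix `noise` into `clean` (both 1-D float arrays) at the given SNR in dB."""
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]          # loop/trim noise to length
    clean_power = np.mean(clean ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise
</code></pre>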
Bengali.AI Speech Recognition
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>During inference, we sorted data based on audio_length in ascending order and set the dataloader's variable shuffle=False.</p> <p>First of all, we want to thank the competition organizers who hosted this fantastic competition and gave us an excellent opportunity to learn how to improve the ASR model against low-resource languages like Bengali and other competitors who generously shared their knowledge.<br> The following is a brief explanation of our solution. We will open-source more detailed codes later, but if you have questions, ask us freely.</p> <hr> <h1>Model Architecture</h1> <h3>・CTC</h3> <p>We fine-tuned "ai4bharat/indicwav2vec_v1_bengali" with competition data(CD).<br> We observed low-quality samples in CD, and mindlessly fine-tuning a model with all the CD deteriorated its performance. So first, we fine-tuned a model only with split=”valid” CD (this improved the model’s performance) and predicted with it against split=”train” CD. After that, we included a high-quality split=’train’ CD (WER&lt;0.75) to split=’valid’ CD and fine-tuned "ai4bharat/indicwav2vec_v1_bengali" from scratch. <br> This improved the public baseline to LB=0.405.</p> <h3>・kenlm</h3> <p>Because there are many out-of-vocabulary in OOD data, we thought training strong LM with large external text data is important. So we downloaded text data and script of ASR datasets(CD, indicCorp v2, common voice, fleurs, openslr, openslr37, and oscar) and trained 5-gram LM.<br> This LM improved LB score by about 0.01 compared with "arijitx/wav2vec2-xls-r-300m-bengali"</p> <h1>Data</h1> <h3>・audio data</h3> <p>We used only the CD. As mentioned in the Model Architecture section, we did something like an adversarial validation using a CTC model trained with split=’valid’ and used about 70% of the all CD.</p> <h3>・text data</h3> <p>We used text data and script of ASR datasets(CD, indicCorp v2, common voice, fleurs, openslr, openslr37, and oscar). As a preprocessing, we normalized the text with bnUnicodeNormalizer and removed some characters('[\,\?.!-\;:\"\।\—]').</p> <h1>Inference</h1> <h3>・sort data by audio length</h3> <p>Padding at the end of audios negatively affected CTC, so we sorted the data based on the audio length and dynamically padded each batch. This increased prediction speed and performance.</p> <h3>・Denoising with Demucs, a music source separation model.</h3> <p>We utilized Demucs to denoise audios. This improved the LB score by about 0.003.</p> <h3>・Judge if we use Demucs or not</h3> <p>Demucs sometimes make the audio worse, so we evaluated if the audio gets worse and we switched the audio used in a prediction. This improved the LB score by about 0.001.<br> The procedure is as follows.</p> <ol> <li>we made two predictions: prediction with Demucs and prediction without Demucs. To speed up this prediction, we made the LM parameter beam_width=10.</li> <li>we compared the number of tokens in two predictions. If the number of tokens in the prediction with Demucs is shorter than the other, we predicted without Demucs. 
Otherwise, we predicted with Demucs.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9268451%2Fc66948ac5c61830bd513e6af4564cfa4%2Fdenoising_solution.png?generation=1697586862740699&amp;alt=media" alt=""></li> </ol> <h1>Post Processing</h1> <h3>・punctuation model</h3> <p>We built models that predict punctuations that go into the spaces between words. These improved LB by more than 0.030.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9268451%2Fdc61bcc13bd81e9a0259a3df7b95ac6f%2Fesprit_solution.png?generation=1697586896853219&amp;alt=media" alt=""></p> <p>As a tip for training, the CV score was better improved by setting the loss weight of "PAD" to 0.0.<br> Backborn: xlm-roberta-large, xlm-roberta-base<br> Trainer: XLMRobertaForTokenClassification<br> Dataset: train.csv (given), indicCorp v2<br> Punctuations: [ ,।?-]</p> <hr> <p>・<a href="https://github.com/sagawatatsuya/BengaliAI_Speech_Recognition_3rd_solution/tree/main" target="_blank">CTC and LM train code</a><br> ・<a href="https://github.com/espritmirai/bengali-punctuation-model" target="_blank">punctuation model code</a><br> ・<a href="https://www.kaggle.com/code/takuji/3rd-place-solution?scriptVersionId=147177786" target="_blank">inference notebook</a></p> <p>Thank you!<br> We used the original train split data at 2nd step.</p> <p>Thank you.<br> I don't understand what 20GB limit is. The limit of data uploaded to kaggle 107GB. By deleting or making public your datasets, you can upload over 20GB files.</p> <p>Which model (CTC, LM, punctuation)?</p> <p>We trained the CTC model for 10 epochs with 671231 data(split='valid': 28855, cleaned split='train': 642376).</p>
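<p>A minimal sketch of the "sort by audio length, pad dynamically per batch" inference setup described above (field names and batch size are assumptions, not the team's exact code):</p> <pre><code>import torch
from torch.utils.data import DataLoader

def collate(batch):
    """Pad each batch only to the longest clip it contains."""
    lengths = [len(s["input_values"]) for s in batch]
    padded = torch.zeros(len(batch), max(lengths))
    for i, s in enumerate(batch):
        padded[i, : lengths[i]] = s["input_values"]
    return {"input_values": padded}

# samples: list of dicts with a 1-D float tensor under "input_values"
samples = sorted(samples, key=lambda s: len(s["input_values"]))   # ascending length
loader = DataLoader(samples, batch_size=8, shuffle=False, collate_fn=collate)
</code></pre>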
Bengali.AI Speech Recognition
40th Place Solution without External Dataset!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It improved 0.001 in LB &amp; PB. Thank you for your comment and congratulations on achieving 3rd place 🙌</p> <p>We achieved 40th place (Silver Medal). Congratulations to all of the team members from <strong>BengalX</strong>: <a href="https://www.kaggle.com/iftekharamin" target="_blank">@iftekharamin</a> , <a href="https://www.kaggle.com/mdfahimreshm" target="_blank">@mdfahimreshm</a> , <a href="https://www.kaggle.com/fahimshahriarkhan" target="_blank">@fahimshahriarkhan</a> 🎉</p> <p><em>I would like to acknowledge my Team Lead <a href="https://www.kaggle.com/iftekharamin" target="_blank">@iftekharamin</a> Bhaiya for giving me the opportunity to do this competition and for providing proper guidance for achieving a silver medal.</em></p> <p><strong>Dataset:</strong> We subset the dataset based on the train metadata published by <a href="https://www.kaggle.com/imtiazprio" target="_blank">@imtiazprio</a>. We used a condition on the train meta features (yellowking_preds &amp; google_preds) to filter a clean dataset: the two predictions had to be roughly 90% similar by WER (see the sketch at the end of this post). After this we further filtered to samples with mos_pred &gt; 2, which left around 100k+ datapoints.</p> <p><strong>Data Cleaning:</strong> We filtered out audios shorter than 1 second as outliers so they would not mislead the model.</p> <p><strong>Augmentations:</strong> We used audio augmentations, i.e., noise, background sound mixing, speed up/down, SpecAug, and resampling to different sampling rates. </p> <p><strong>STT Modeling:</strong> We used the Indic wav2vec2 pretrained model and fine-tuned it on the augmented subset. </p> <p><strong>Post-processing - LM Decode:</strong> We used arijitx's pretrained Indic KenLM.</p> <p><strong>Post-processing - Punctuation:</strong> We used the xashru/punctuation-restoration repo with an xlm-roberta-base model and fine-tuned it on this competition's dataset for punctuation restoration. We only considered 4 punctuation classes: {'O': 0, 'COMMA': 1, 'PERIOD': 2, 'QUESTION': 3} </p> <p><strong>Post-processing - Error Correction:</strong> We used this repo's solution for further error correction: <a href="https://github.com/Tawkat/Bengali-Spell-Checker-and-Auto-Correction-Suggestion-for-MS-Word" target="_blank">https://github.com/Tawkat/Bengali-Spell-Checker-and-Auto-Correction-Suggestion-for-MS-Word</a></p>
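<p>A minimal sketch of the agreement-based filtering described in the Dataset section above. The thresholds (roughly 90% similarity, mos_pred &gt; 2) follow the write-up, but the metadata path, column names and the exact similarity definition are assumptions.</p> <pre><code>import pandas as pd
import jiwer

meta = pd.read_csv("train_metadata.csv")  # hypothetical path to the shared metadata

def agreement(row):
    # similarity = 1 - WER between the two pseudo-transcripts, clipped to [0, 1]
    return max(0.0, 1.0 - jiwer.wer(row["yellowking_preds"], row["google_preds"]))

meta["agreement"] = meta.apply(agreement, axis=1)
clean = meta[(meta["agreement"] &gt;= 0.9) &amp; (meta["mos_pred"] &gt; 2)]
</code></pre>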
Bengali.AI Speech Recognition
44th Place Solution 🎉🎉🎉
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Before getting into the topic, I would like to congratulate all team members: <a href="https://www.kaggle.com/nanaxing" target="_blank">@nanaxing</a>, <a href="https://www.kaggle.com/focuswilliam" target="_blank">@focuswilliam</a>, <a href="https://www.kaggle.com/zhangjinru" target="_blank">@zhangjinru</a>, <a href="https://www.kaggle.com/marcocheung0124" target="_blank">@marcocheung0124</a>. The first three of them are undergraduates and they are all new to Kaggle. Congratulations again on their first medal and the success of our first audio competition🥈🥈🥈!</p> <p>I also acknowledge <a href="https://www.kaggle.com/takanashihumbert" target="_blank">@takanashihumbert</a> for his published training notebook <a href="url" target="_blank">https://www.kaggle.com/code/takanashihumbert/bengali-sr-wav2vec-v1-bengali-training/notebook</a>. Another acknowledgment goes to <a href="https://www.kaggle.com/mbmmurad" target="_blank">@mbmmurad</a> for his work introducing the audios of the dataset <a href="url" target="_blank">https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0</a> in <a href="url" target="_blank">https://www.kaggle.com/code/mbmmurad/dataset-overlaps-with-commonvoice-11-bn/notebook</a>.</p> <p><strong>Dataset</strong>: We directly used the data provided by the organizer and did not use any external data.</p> <p><strong>Training Environment</strong>: One Colab Pro+ account was utilized. Due to limitations of equipment and computing power, we were unable to train more models (like the punctuation model) or use the complete dataset. </p> <p><strong>Data Augmentation</strong>: Some augmentations like HighLowPass, Noise, and PitchShift were used to increase robustness.</p> <p><strong>Model Training</strong>: The pretrained model is from <a href="url" target="_blank">https://www.kaggle.com/datasets/takanashihumbert/bengali-ex002</a>. We put in one-tenth of the data for training each time (the feature encoder and feature extractor take turns being frozen). When freezing the feature extractor, the warmup lr is 2e-5; for the feature encoder, it is 6e-6. In our experiments, bs=1. After training on three-fifths of the data, the public score was 0.42 and the private score 0.503.</p> <p><strong>Decoding Parameters</strong>: We adjusted the decoding parameters and display them in the notebook <a href="url" target="_blank">https://www.kaggle.com/code/yuliknormanowen/bengali-sr-wav2vec-v1-bengali-inference-for-v4?scriptVersionId=146374376</a>. </p> <p>Thank you all so much for reading this. If you have any suggestions, we are happy to accept them. We would be very grateful if you could upvote this topic🥺🥺🥺🙏🙏</p>
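<p>As an illustration of the augmentations mentioned above (HighLowPass, Noise, PitchShift), a minimal audiomentations pipeline could look like the following; the exact classes, probabilities and parameter ranges are assumptions rather than the team's settings:</p> <pre><code>import numpy as np
from audiomentations import Compose, OneOf, AddGaussianNoise, PitchShift, HighPassFilter, LowPassFilter

augment = Compose([
    OneOf([HighPassFilter(p=1.0), LowPassFilter(p=1.0)], p=0.3),        # "HighLowPass"
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.3),  # Noise
    PitchShift(min_semitones=-2, max_semitones=2, p=0.3),               # PitchShift
])

wave = np.random.randn(16000 * 5).astype(np.float32)  # 5 s of dummy audio at 16 kHz
augmented = augment(samples=wave, sample_rate=16000)
</code></pre>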
Bengali.AI Speech Recognition
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to the organizers for this interesting competition. Speech recognition is a very interesting direction and I did learn a lot from the discussion and public code of the many contestants in this competition. The past three months have been stressful but rewarding. I will try to make my solution clear in my broken English</p> <h2>Summary</h2> <p>My solution is relatively simple, using a wav2vec2 1b model as the pretrain model and training a Wav2Vec2ForCTC model. During the post-processing stage, a 6-gram language model is trained using KenLM, followed by further post-processing of normalization and dari on the output results.</p> <h2>Wav2Vec2ForCTC Training</h2> <p>Specifically, I use <a href="https://huggingface.co/facebook/wav2vec2-xls-r-1b" target="_blank">facebook/wav2vec2-xls-r-1b</a> as the pretrain model. Training this model requires three stages, with different random seeds and consistent data augmentation and parameters in each stage:</p> <ul> <li>Optimizer: AdamW (weight_decay: 0.05; betas: (0.9, 0.999))</li> <li>Scheduler: Modified linear_warmup_cosine scheduler (init_lr: 1e-5; min_lr: 5e-6; warmup_start_lr: 1e-6; warmup_steps: 1000; max_epoch: 120; iters_per_epoch: 1000)</li> </ul> <pre><code> : (): self.optimizer = optimizer self.max_epoch = max_epoch self.min_lr = min_lr self.init_lr = init_lr self.warmup_steps = warmup_steps self.iters_per_epoch = iters_per_epoch self.warmup_start_lr = warmup_start_lr warmup_start_lr &gt;= init_lr self.max_iters = max_epoch * iters_per_epoch (): total_steps = cur_epoch * self.iters_per_epoch + cur_step total_steps &lt; self.warmup_steps: warmup_lr_schedule( step=cur_step, optimizer=self.optimizer, max_step=self.warmup_steps, init_lr=self.warmup_start_lr, max_lr=self.init_lr, ) total_steps &lt;= self.max_iters // : cosine_lr_schedule( epoch=total_steps, optimizer=self.optimizer, max_epoch=self.max_iters // , init_lr=self.init_lr, min_lr=self.min_lr, ) total_steps &lt;= self.max_iters // : cosine_lr_schedule( epoch=self.max_iters // , optimizer=self.optimizer, max_epoch=self.max_iters // , init_lr=self.init_lr, min_lr=self.min_lr, ) : cosine_lr_schedule( epoch=total_steps - self.max_iters // , optimizer=self.optimizer, max_epoch=self.max_iters // , init_lr=self.min_lr, min_lr=, ) (): lr = (init_lr - min_lr) * * ( + math.cos(math.pi * epoch / max_epoch) ) + min_lr param_group optimizer.param_groups: param_group[] = lr </code></pre> <ul> <li>base DataAugmentation (denode as <code>base_aug</code>): </li> </ul> <pre><code> (): trans = Compose( [ TimeStretch(min_rate=, max_rate=, p=, leave_length_unchanged=), Gain(min_gain_in_db=-, max_gain_in_db=, p=), PitchShift(min_semitones=-, max_semitones=, p=), OneOf( [ AddBackgroundNoise(sounds_path=musan_dir, min_snr_in_db=, max_snr_in_db=, noise_transform=PolarityInversion(), p=), AddGaussianNoise(min_amplitude=, max_amplitude=, p=), ] musan_dir [ AddGaussianNoise(min_amplitude=, max_amplitude=, p=), ], p=, ), ] ) trans </code></pre> <ul> <li><p>composite DataAugmentation (denode as <code>comp_aug</code>)。I use three comp_augs:</p> <ol> <li>split an audio wave evenly into 3 segments and perform base_aug on 
each segment;</li> <li>randomly select two speeches from the dataset, perform base_aug on each of them separately, and then concatenate them together;</li> <li>combine the above two data augmentation methods.</li></ol></li> <li><p>dataset<br> For Wav2Vec2ForCTC training, I didn't use external data, I filtered the competition dataset in the following steps:</p> <ol> <li>train a model based on <a href="https://huggingface.co/arijitx/wav2vec2-xls-r-300m-bengali" target="_blank">arijitx/wav2vec2-xls-r-300m-bengali</a> </li> <li>use the model above to inference the whole dataset, sort all sample scores from small to large and retain the top 70% of the data</li></ol></li> </ul> <h2>KenLM training</h2> <ul> <li><p>dataset:<br> I use IndicCorpv1 and IndicCorpv2 as corpus. After cleaning, the two corpus are combined without reduplicates.</p></li> <li><p>corpus cleaning:<br> I clean each sentence in the corpus using the below code:</p></li> </ul> <pre><code>chars_to_ignore = re.() long_space_to_ignore = re.() bnorm = Normalizer() (): text = re.sub(chars_to_ignore, , text) text = re.sub(long_space_to_ignore, , text).strip() text (): sentence = normalize(sentence) sentence = fix_text(sentence) words = sentence.split() : all_words = [bnorm(word)[] word words] all_words = [_ _ all_words _] (all_words) &lt; : .join(all_words).strip() TypeError: </code></pre> <ul> <li>6gram language model is trained.</li> </ul> <h2>Tried but not work</h2> <ol> <li>use external data, like openslr, shrutilipi</li> <li>use deepfilternet to denose the audio</li> <li>use larger model (wav2vec2-xls-r-2b)</li> <li>use whisper-small</li> <li>use more corpus to train an language model (BanglaLM)</li> <li>train a spelling error correction model</li> <li>train the fourth stage model</li> <li>…<br> There's a lot more, off the top of my head</li> </ol> <h2>Update</h2> <ul> <li>I just found the 7gram-language kenlm model is chosen to be the final submission version, not the 6gram version, even though their scores are the same.</li> <li>inference code is made public: <a href="https://www.kaggle.com/code/hanx2smile/4th-place-solution-inference-code" target="_blank">https://www.kaggle.com/code/hanx2smile/4th-place-solution-inference-code</a></li> <li>training code is made public: <a href="https://github.com/HanxSmile/lavis-kaggle" target="_blank">https://github.com/HanxSmile/lavis-kaggle</a></li> </ul> <p>Thank you for your reply! It looks like that a souce separation model is more suitable for this competition. Your solution is very inspiring, especially the punctuation model part, Congratulations!</p>
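<p>For readers unfamiliar with KenLM, the 6-gram language model step described above can be reproduced roughly as follows; the paths, the pruning choice and the use of subprocess are illustrative assumptions, not the author's exact commands:</p> <pre><code>import subprocess
import kenlm

corpus = "indiccorp_cleaned.txt"               # one cleaned sentence per line (hypothetical path)
arpa, binary = "bn_6gram.arpa", "bn_6gram.bin"

# Estimate the 6-gram LM with KenLM's lmplz, then convert it to a compact binary file.
# (Adding a --prune setting keeps the ARPA file small for higher orders.)
subprocess.run(f"lmplz -o 6 &lt; {corpus} &gt; {arpa}", shell=True, check=True)
subprocess.run(["build_binary", arpa, binary], check=True)

# Score candidate transcriptions: a higher (less negative) score means the
# sentence is more natural under the language model.
lm = kenlm.Model(binary)
print(lm.score("একটি উদাহরণ বাক্য", bos=True, eos=True))
</code></pre>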
Bengali.AI Speech Recognition
57th Place Solution for the Bengali.AI Speech Recognition Competition (Top 8%)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Bengali.AI &amp; Kaggle for organizing this competition. It was a very wonderful learning experience for my team <a href="https://www.kaggle.com/kamaruladha" target="_blank">@kamaruladha</a> <a href="https://www.kaggle.com/ariffnazhan" target="_blank">@ariffnazhan</a>. This is our first bronze medal and first-time dealing with audio data! </p> <p>Business context: <a href="http://www.kaggle.com/competitions/bengaliai-speech/overview" target="_blank">www.kaggle.com/competitions/bengaliai-speech/overview</a><br> Data context: <a href="http://www.kaggle.com/competitions/bengaliai-speech/data" target="_blank">www.kaggle.com/competitions/bengaliai-speech/data</a></p> <h1>Overview of the Approach</h1> <h2>ASR Model</h2> <p>Our most performing model is from finetuning <code>ai4bharat/indicwav2vec_v1_bengali</code> trained for 2 epochs with training data containing both train and valid split. </p> <h2>Dataset</h2> <p>We took 80% of training data and 96% of validation split using random sample for our training set. Dataset is also preprocessed first using bnmnormalizer. We noticed a difference in performance for preprocessed data compared to unprocessed one.</p> <h2>Dataset Cleaning</h2> <p>We conducted data cleaning on the audio transcription by:</p> <ul> <li>Removing all kinds of punctuation. Manually added quotations into the list.</li> </ul> <pre><code> base = .punctuation punct = base + punct &gt;&gt;&gt; !</code></pre> <ul> <li>Strip any whitespaces at the front and at the back of string.</li> <li>Normalize the transcriptions using bnunicodenormalizer.</li> </ul> <pre><code>!pip install bnunicodenormalizer bnunicodenormalizer import Normalizer bnorm=Normalizer() def norm(transcription): text_list = [] texts = transcription.() texts: = bnorm() ([]) &gt; : text_list.append([][][]) : text_list.append() normalized_transcription = .join(text_list) </code></pre> <p>In order for the normalizer to work on our data, we had to split it into tokens (words). This is due to the normalizer not accepting a full sentence and can only process single words. We also noticed that the process could take some time. Hence, we implemented multi-threading to speed up the entire normalization process. We applied the data cleaning process on the base dataset from Bengali.AI and also Google Fleurs.</p> <ul> <li>Calculated the length for each of the audio files.</li> </ul> <p>Based on our analysis, we noticed the audio files had varying lengths. This resulted in unexpected OOM during the validation. Thus, we filtered the audio files and only took 18~20 seconds. </p> <ul> <li>Mixing Training and Validation Data</li> </ul> <p>The data given by Bengali.AI was already separated into 2 (training and validation). We noticed that the validation data was somewhat better as compared to the training data. Thus, we mixed some of the validation data into the training set. </p> <ul> <li>[Untested Process]</li> </ul> <p>In total we had almost 1 million audio files. From our analysis and preliminary training evaluation, we noticed that the data consists of very bad audio quality. 
This affected the performance of the model directly and prevented the model from improving. Initially we thought of calculating the amplitude of the audio files. But amplitude does not correlate with background noise. What we thought of doing instead is to filter out the audio files that the model has trouble transcribing. </p> <p>To put it simply, after training the model on the cleaned data, we would use the best checkpoint to transcribe the training and validation data. The data that produces the worst WER (word error rate) would be excluded. This is to ensure that the model is only trained and validated on good-quality data. </p> <h2>Language Model</h2> <p>We attached a language model trained using the kenlm library on top of our wav2vec model predictions to improve our score, and used Word Error Rate (WER) and Character Error Rate (CER) as our evaluation metrics to benchmark the predictions. We trained the language model on the texts from the competition datasets (train and validation) and Google FLEURS Bengali.</p> <p>After rigorous evaluation, the 3-gram model showed the best performance compared to the 4-gram and 5-gram models.</p> <h2>Details of Submission</h2> <h3>Model Saturation</h3> <p>In the early stages of our training phase, we dumped the entire dataset onto the model. Not only did we face OOM problems, but we slowly noticed that the model became saturated. Due to not having an early stopping function and not stopping the training at certain points, the model performed even worse than those trained with only 10%-20% of the data. </p> <p>To prevent the model from becoming saturated, we had to feed the data in byte-sized pieces (pun intended). As it turns out, feeding a small partition of the data improved the model's performance quite drastically. From there, we tested the model using the test set. If the performance was increasing, we maintained the current data partition. If the performance was constant or decreasing, we would increase the data partition, i.e., add in more data. </p> <p>By using this strategy, we were able to prevent the model from becoming saturated. The only downside is that we had to keep track of the model's performance and constantly push out the checkpoints. </p> <p>All in all, implementing these strategies showed a significant improvement in the model's performance compared to not cleaning the data at all. In our previous projects, we would only conduct an extremely minor data cleaning process so as to preserve the structure of the data. We wanted the model to learn from both the clean and the unclean data in the hope of making it robust. Unfortunately, this was not the case. </p> <p>Not cleaning the data at all degrades the performance, and training the model for long periods of time results in model saturation.</p> <h3>What didn't work for our team</h3> <p>We experimented with a few other pretrained models, but their results were not as good as our final model's. We tried models from:</p> <ul> <li>facebook/mms-1b</li> <li>wav2vec2-large-xlsr-53</li> <li>whisper-small</li> <li>conformer-rnn-t</li> </ul> <p>We also noticed degrading performance when we trained the model longer; our validation score no longer correlated with the public LB. </p> <p>We experimented with adding Google FLEURS to our dataset for training, but the model trained on that data did not improve our submission score much.</p> <p>We tried increasing the wav2vec2 parameters mask_time_prob &amp; mask_feature_prob, but we didn't notice any improvement in performance.</p> <p>We learned that it is very important in this competition for the dataset to be properly cleaned and filtered. Filtering out poor-quality audio with high WER could have boosted the score further; we didn't manage to work more on preprocessing the dataset.</p> <h3>Sources</h3> <p>Pretrained model: <a href="https://huggingface.co/ai4bharat/indicwav2vec_v1_bengali" target="_blank">https://huggingface.co/ai4bharat/indicwav2vec_v1_bengali</a><br> Source Code: <a href="https://github.com/malaysia-ai/bengali.ai-stt-competition" target="_blank">https://github.com/malaysia-ai/bengali.ai-stt-competition</a><br> Inference Notebook: <a href="https://www.kaggle.com/code/aisyahhrazak/inference-wav2vec2-kenlm" target="_blank">https://www.kaggle.com/code/aisyahhrazak/inference-wav2vec2-kenlm</a></p>
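<p>Since word-level normalization with bnunicodenormalizer is central to this solution, here is a compact working version of that step; falling back to the original word when the normalizer returns nothing is an assumption:</p> <pre><code>from bnunicodenormalizer import Normalizer

bnorm = Normalizer()

def normalize_sentence(transcription):
    """Normalize a Bengali sentence one word at a time, since the normalizer only accepts single words."""
    out = []
    for word in transcription.split():
        result = bnorm(word)                   # dict with the normalization result for this word
        normalized = result.get("normalized")  # None if the word could not be normalized
        out.append(normalized if normalized else word)
    return " ".join(out).strip()

# The write-up notes this is slow over ~1M rows; mapping normalize_sentence with a
# concurrent.futures.ThreadPoolExecutor (as the team did) is one way to speed it up.
</code></pre>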
Bengali.AI Speech Recognition
5th place solution - ensembling works
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Inference code is cleaned up and published:<br> <a href="https://www.kaggle.com/code/benbla/5th-place-solution/notebook" target="_blank">https://www.kaggle.com/code/benbla/5th-place-solution/notebook</a></p> <p>It is important to normalize the texts the same way you normalize the training data for the audio model. </p> <p>Therefore you should preprocess and normalize the texts before training your KenLM model. If you use the <strong>bnunicodenormalizer</strong> for the training annotations, you should use it for the language model text corpus as well. </p> <p>I think the problem with the linked KenLM model is not related to the custom vocabulary, but rather that the texts for the model were normalized differently or not at all.</p> <p>You receive as raw output of a Wav2Vec2 model a matrix with the logits. The number of lines is proportional to the length of the audio file and the number of columns is proportional to the length of the vocabulary. If we assume greedy decoding as the simplest method, we use argmax to check for each line which character has the highest probability. This means that the output is not autoregressive at the character level. The CTC algorithm then handles pauses and consecutive letters and creates entire words. This means that in this case you have a string of words that can be further processed into sentences by adding punctuation marks.</p> <p>Therefore, there is no need to manually put the output together or to split the output again for further processing. I hope, it answered your question.</p> <p>Thank you for organizing this competition so well. Whenever questions arose, the hosts responded promptly and clearly. I really enjoyed working on this problem. Thank you very much <a href="https://www.kaggle.com/imtiazprio" target="_blank">@imtiazprio</a> </p> <p><strong>Modeling</strong><br> Like most of the participants, I started with the YellowKing pipeline. I changed the backbone to IndicWav2Vec and reinitialized the CTC layer. With this method and the public Dari/normalizing pipeline, it was already possible to achieve 0.48x scores without a Language Model. However, there was a big problem: <strong>the local WER score did not match the public lb anymore if the model was trained for too long</strong>. At a learning rate of 3e-5, local overfitting started at 40k steps (batch size 16). I suspect that the model started learning the relationship of erroneous audio/annotation pairs.</p> <p>Due to the noisy annotation, I excluded all samples with a mos-score of &gt;2.0 in the subsequent training stage, reaching a score of 0.472. After re-labeling with the new model, I removed all samples with a WER above 0.5 for my model and a WER above 0.5 for YellowKing. <strong>This resolved the local overfitting issue</strong>, ensuring that local improvements correlated with better public lb scores. I trained the model over 210k steps (bs 16) with a learning rate of 8e-5, achieving a score of 0.452 without a language model on top.</p> <p>Additionally, I trained a model with a <a href="https://huggingface.co/facebook/wav2vec2-xls-r-1b" target="_blank">larger backbone</a>. 
Unfortunately, due to lack of time, I could not fully exploit the potential, but after 135k steps I could already achieve a score of 0.454. The learning rate for the larger model had to be much smaller (1e-5).</p> <p><strong>Key Takeaways:</strong></p> <ul> <li>Quality filtering of data significantly enhanced training performance.</li> </ul> <p><strong>Ensembling</strong><br> I tried early to average the model logits of different finetuned models. However, it doesn´t work because the predictions may not be aligned. This is even the case for models of the same architecture. In <a href="https://arxiv.org/pdf/2206.05518.pdf" target="_blank">this paper</a> a procedure is described, where <strong>features of different ASR models are concatenated</strong>, followed by the addition of a transformer encoder and a CTC layer. The weights of the asr models are freezed and only the transformer encoder and the CTC layer are trained. I applied this approach to my finetuned models:</p> <ul> <li>Extract embeddings of the finetuned models</li> <li>Concatenate the features</li> <li>Concatenated features are passed into the Transformer encoder and then processed in the CTC layer</li> <li>A few steps (7k steps for batchsize 8) on the same training data were enough to increase the performance of the overall model from 0.355 to 0.344 (I justed tested it in the full pipeline, so with language model, punctuation and normalizing)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2959383%2Fadb7e22f32c07808c16ed5ece66f5032%2FBild1.png?generation=1697605346189304&amp;alt=media" alt=""></li> </ul> <p><strong>Key takeways:</strong></p> <ul> <li>Ensembling asr model outputs by concatenating the last hidden states and adding a transformer encoder with CTC layer improves performance a lot</li> </ul> <p><strong>Potentials:</strong></p> <ul> <li>I think this is where the <strong>greatest further potential lies</strong> in my solution. I didn't have enough time, but it would be interesting to see how Whisper or HUBERT in the ensemble would have improved the performance. I think because of the different architectures it should be even better because of the larger diversity in the ensemble.</li> <li>My finetuned model with the 1b backbone is undertrained. There is still potential for better performance if the model is trained for a longer period.</li> <li>I have hardly experimented with the parameters and training routine of the added transformer encoder. There is room for improvement as well.</li> </ul> <p><strong>Language Model</strong><br> I have thought a lot about the language model part. I had thought about interpreting the whole thing as a seq2seq problem and taking a pretrained transformer as the language model. In the end I just used a standard KenLM model. For this I used datasets which were shared in the forums (e.g. IndicCorp, SLR, Comp data).</p> <p>In preprocessing, I radically <strong>limited to the character set of the training data</strong>, as the<a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/433469#2410416" target="_blank"> hosts said </a> there are no new characters in the test data. I replaced numbers, other languages, and other characters with special tokens. This reduced the number of unigrams significantly which was a bottleneck in the Kaggle environment. I ended up with <strong>a 0 0 0 2 2 pruned 5-ngram</strong>. 
Adding more data didn´t increase the performance at a certain point.</p> <p>Much more important for the Kaggle workflow was the following fact: If you use the standard Wav2Vec2ProcessorWithLM of Huggingface and then remove the model, the <a href="https://github.com/kensho-technologies/pyctcdecode/pull/111" target="_blank">RAM is not released</a>. I therefore disassembled the pipeline and first computed all logits, stored them locally, and then read them back in for the decoding process. Afterwards I removed the language model cleanly with decoder.cleanup() before deleting the decoder. Without this measure, I would not have been able to use the ensemble model because of out of memory errors.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2959383%2Fd077b0fb9e9042395af9401e0b509c65%2FBild2.png?generation=1697605762499198&amp;alt=media" alt=""></p> <p><strong>Key Takeways:</strong></p> <ul> <li>The language model is important: precise preprocessing manages size while excessive data may not necessarily improve the LM's performance. Corrupted data can even degrade the performance of the language model.</li> </ul> <p><strong>Punctuation</strong><br> I tried different punctuation models and ensembles. In the end, I used a <a href="https://github.com/xashru/punctuation-restoration" target="_blank">XLM-Roberta-Large model</a>. It improved the score by ~0.02. I could improve the performance even more by forcing the model to output a | or ? after the last word (+0.002).</p> <p><strong>Code</strong><br> Cleaned up inference code can be found here:<br> <a href="https://www.kaggle.com/code/benbla/5th-place-solution/notebook" target="_blank">Kaggle Inference</a><br> Cleaned up training code can be found here:<br> <a href="https://github.com/bd317/BengaliAI_Speech_Recognition_5th_solution" target="_blank">Github training</a></p>
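<p>A bare-bones PyTorch sketch of the fusion ensemble described above (frozen wav2vec2 backbones, concatenated hidden states, a small transformer encoder and a fresh CTC head trained with nn.CTCLoss); the dimensions, layer counts and class names are assumptions, not the exact training code:</p> <pre><code>import torch
from torch import nn
from transformers import Wav2Vec2Model

class FusionCTC(nn.Module):
    def __init__(self, backbone_names, vocab_size, d_model=1024, nhead=8, num_layers=2):
        super().__init__()
        # Frozen fine-tuned acoustic models whose frame-level features get concatenated.
        self.backbones = nn.ModuleList(Wav2Vec2Model.from_pretrained(n) for n in backbone_names)
        for p in self.backbones.parameters():
            p.requires_grad = False
        feat_dim = sum(b.config.hidden_size for b in self.backbones)
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.ctc_head = nn.Linear(d_model, vocab_size)  # trained with nn.CTCLoss

    def forward(self, input_values):
        # Backbones of the same architecture emit the same number of frames,
        # so their hidden states can be concatenated along the feature dimension.
        feats = [b(input_values).last_hidden_state for b in self.backbones]
        x = torch.cat(feats, dim=-1)
        x = self.encoder(self.proj(x))
        return self.ctc_head(x).log_softmax(-1)
</code></pre>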
Bengali.AI Speech Recognition
66th Place Solution for the Bengali.AI Speech Recognition Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Business context: <a href="http://www.kaggle.com/competitions/bengaliai-speech/overview" target="_blank">www.kaggle.com/competitions/bengaliai-speech/overview</a><br> Data context: <a href="http://www.kaggle.com/competitions/bengaliai-speech/data" target="_blank">www.kaggle.com/competitions/bengaliai-speech/data</a></p> <h2>Overview of the Approach</h2> <p>Basically, we decided to use a very straightforward approach: fine-tuning only one model and using a language model afterward. We thought that the key to success in this competition is the data, as the private dataset contains different OOD samples. In our final submission we used the <code>ai4bharat/indicwav2vec_v1_bengali</code> model, which was fine-tuned for 3 epochs on the data. For the data we used: the SLR53 dataset, the SLR37 dataset, Common Voice 13 data, 0.3 of the train split (random) competition data, and 0.85 of the validation split competition data. We deleted Common Voice samples from the validation set; however, our CV score did not really correlate with the LB score. For the language model, we just used the one from <code>arijitx/wav2vec2-xls-r-300m-bengali</code>.</p> <h2>Details of the submission</h2> <p>In conclusion, we tried other pretrained checkpoints and other models, but they performed worse. We also tried some augmentations for volume and noise, but they did not work for us.</p> <p>It is important to clean the competition dataset, for example by pseudolabeling it. Otherwise you get poor correlation between CV and LB, as we did. It is also important to train the model on several datasets; the model then trains better. We suppose that this is due to "unlearning": if we fine-tune the model only on the competition data, the model starts forgetting previous knowledge, and for OOD data the model should remember many variations of speech. </p> <p>We also did not do the punctuation part due to the lack of time, but it is essential if you work with this type of problem.</p> <h2>Sources</h2> <p>Pretrained model: <a href="https://huggingface.co/ai4bharat/indicwav2vec_v1_bengali" target="_blank">https://huggingface.co/ai4bharat/indicwav2vec_v1_bengali</a><br> Dataset 1: <a href="https://www.openslr.org/53/" target="_blank">https://www.openslr.org/53/</a><br> Dataset 2: <a href="https://www.openslr.org/37/" target="_blank">https://www.openslr.org/37/</a><br> Dataset 3: <a href="https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized?select=train.tsv" target="_blank">https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized?select=train.tsv</a></p>
Bengali.AI Speech Recognition
67th Place Solution - Bronze - LB 0.515
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you to the organizers for arranging this exciting competition, and congratulations to the winners. I have learned a lot throughout this journey. The discussion section and public notebooks were quite helpful. I feel a bit regretful because I couldn't dedicate enough time due to my final exams.</p> <h2>My Approach</h2> <p>A few months back, I improved <a href="https://www.kaggle.com/code/nischaydnk/bengali-finetuning-baseline-wav2vec2-inference" target="_blank">the public notebook</a> by <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> just by training my own <a href="https://www.kaggle.com/code/umongsain/build-an-n-gram-with-kenlm-macro" target="_blank">n-gram language model</a>. It improved both my public LB (0.445 -&gt; 0.436) and private LB (0.534 -&gt; 0.515).</p> <h2>Preparing the Data</h2> <p>I normalized the training texts using <a href="https://github.com/mnansary/bnUnicodeNormalizer" target="_blank">bnunicodenormalizer</a> as well as the normalizer developed by <a href="https://github.com/csebuetnlp/normalizer" target="_blank">CSE BUET NLP</a>. Some data could not be normalized because of the confusion which contained the character " ঃ" (<a href="https://bn.wikipedia.org/wiki/%E0%A6%AC%E0%A6%BF%E0%A6%B8%E0%A6%B0%E0%A7%8D%E0%A6%97" target="_blank">বিসর্গ</a>) as a separate word. As it is mostly used in place of ":" (colon), I replaced them with colons instead.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4772132%2Fc86799e65036ebc9228f9f709a98a309%2FScreenshot%20from%202023-10-20%2020-15-23.png?generation=1697811401235581&amp;alt=media" alt=""></p> <h2>Training the N-gram Model</h2> <p>Before training the n-gram model, I removed the punctuation from the normalized text data because Wav2Vec2 does not predict any punctuation. I trained the n-gram model using <a href="https://github.com/kensho-technologies/pyctcdecode" target="_blank">pyctcdecode</a>.</p> <h2>Solution Code</h2> <ul> <li>Data preparation: <a href="https://www.kaggle.com/code/umongsain/macro-normalization" target="_blank">https://www.kaggle.com/code/umongsain/macro-normalization</a></li> <li>Training code: <a href="https://www.kaggle.com/code/umongsain/build-an-n-gram-with-kenlm-macro/notebook" target="_blank">https://www.kaggle.com/code/umongsain/build-an-n-gram-with-kenlm-macro/notebook</a></li> <li>Inference code: <a href="https://www.kaggle.com/code/umongsain/67th-place-solution-inference-notebook" target="_blank">https://www.kaggle.com/code/umongsain/67th-place-solution-inference-notebook</a></li> </ul> <h2>Things I Tried But Did Not Work</h2> <p>Later, I tried to fine-tune multiple XLS-R and MMS models from scratch. As I had limited resources, I focused on low-resource training, using only 30k samples from the Common Voice 13. Unfortunately, they could not beat my previous baseline. But what I found is that <a href="https://huggingface.co/facebook/mms-1b" target="_blank">MMS 1B</a> works quite well for low-resource languages. Sadly, it is available only for non-commercial uses. 
The checkpoints can be found here:</p> <ul> <li><a href="https://huggingface.co/Umong/wav2vec2-xls-r-300m-bengali" target="_blank">https://huggingface.co/Umong/wav2vec2-xls-r-300m-bengali</a></li> <li><a href="https://huggingface.co/Umong/wav2vec2-large-mms-1b-bengali" target="_blank">https://huggingface.co/Umong/wav2vec2-large-mms-1b-bengali</a></li> </ul> <h2>Things That I Could Have Tried</h2> <p>I had plans to use QLoRa to fine-tune on a few out-of-domain (OOD) hand-labeled samples for domain adaptation after fine-tuning on the crowd-sourced data. I wish I had a bit more free time.</p> <p>I am glad to have my 1st Bronze competition medal. Again, thanks to the organizers.</p>
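<p>To connect the pieces above (a fine-tuned wav2vec2 CTC model plus a KenLM n-gram decoded with pyctcdecode), the wiring looks roughly like this; paths are placeholders, and in practice the special tokens in the vocabulary (e.g. the pad/blank and the word delimiter "|") may need remapping as in the public notebooks:</p> <pre><code>import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from pyctcdecode import build_ctcdecoder

model_id = "ai4bharat/indicwav2vec_v1_bengali"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

# Tokens in id order, as pyctcdecode expects them.
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(labels, kenlm_model_path="bn_ngram.arpa")  # the trained n-gram

def transcribe(waveform_16k):
    inputs = processor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        log_probs = torch.log_softmax(model(inputs.input_values).logits[0], dim=-1)
    return decoder.decode(log_probs.numpy(), beam_width=256)
</code></pre>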
Bengali.AI Speech Recognition
699th Rank for Bengali.AI Speech Recognition Challenge!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ul> <li>Business context: <a href="url" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech</a></li> <li>Data context: <a href="url" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/data</a></li> </ul> <p>I'm relatively new to competitions and submissions, and also this is my first time working with Audio data. I had zero knowledge about ASR and Audio models. During this competition, I gained a lot of experience with ASR and Transformers and implementing and training models.</p> <p><strong>Model</strong> : OpenAI Whisper-small pre-trained model <br> <strong>Training step</strong> : trained up to step 200,000 <br> <strong>Learning rate</strong> :1e-5<br> <strong>Batch_size</strong> =12<br> <strong>Trained on</strong> : 12GB RTX 3080 Ti<br> <strong>Trainer</strong>: Hugging face trainer<br> <strong>External Dataset</strong>: MUSAN<br> <strong>Normalization</strong>: Normalize Training Data with librosa.util.normalize<br> <strong>Audio Augmentation</strong>:Add 4 types of Audio Augmentation randomly to the data during the preparation process for training</p> <ul> <li><strong>Pitch</strong>: Change the pitch of the audio clip with librosa.effects.pitch_shift </li> <li><strong>Speed</strong>: Change the speed of the audio clip with librosa.effects.time_stretch </li> <li><strong>Noise</strong>: Add randomly chosen background noise audio clips from the MUSAN dataset to the Training Data</li> <li><strong>Music</strong>: Add randomly chosen background music instruments audio clips from the MUSAN dataset to the Training Data</li> </ul> <p>The reason why I chose the Whisper model was because, in the paper of the dataset, they mentioned that Whisper had better results than any other model. <br> I have done some audio augmentation on training data but it did not help a lot in getting a better score, which means these audio augmentation steps did not close the gap between test data and train data. <br> After the competitions and reading the leaderboard solution, I noticed that I should have spent more time on cleaning data from noisy audio clips, poorly annotated data, and poor-quality audio clips and making the training data more generic towards the test dataset and its diverse domain.</p> <p>Thank you to everyone who takes the time to read this, and I really appreciate any suggestions you may have to help me enhance my machine learning skills.</p> <p>Inference notebook: <a href="url" target="_blank">https://www.kaggle.com/code/saeidehmousavi/bengali-competition-notebook</a><br> Model Training notebook:<a href="url" target="_blank">https://www.kaggle.com/code/saeidehmousavi/audio-augmentation-notebook</a></p>
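<p>A small sketch of the librosa-based augmentations listed above; the parameter ranges, the mixing level and the crude length matching are simplifying assumptions:</p> <pre><code>import random
import numpy as np
import librosa

def augment(wave, sr, musan_files):
    wave = librosa.util.normalize(wave)
    choice = random.choice(["pitch", "speed", "noise", "music"])
    if choice == "pitch":
        wave = librosa.effects.pitch_shift(wave, sr=sr, n_steps=random.uniform(-2, 2))
    elif choice == "speed":
        wave = librosa.effects.time_stretch(wave, rate=random.uniform(0.9, 1.1))
    else:
        # Mix in a randomly chosen MUSAN noise/music clip at a modest level.
        bg, _ = librosa.load(random.choice(musan_files), sr=sr)
        bg = np.resize(bg, wave.shape)
        wave = wave + 0.1 * librosa.util.normalize(bg)
    return wave
</code></pre>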
Bengali.AI Speech Recognition
81st Place Solution for Bengali.Ai Speech Recognition.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>1. Fine Tuning Wev2Vec2:</h2> <p>Training Notebook: <a href="https://www.kaggle.com/code/takanashihumbert/bengali-sr-wav2vec-v1-bengali-training" target="_blank">https://www.kaggle.com/code/takanashihumbert/bengali-sr-wav2vec-v1-bengali-training</a></p> <p>Dataset Used: <a href="https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized" target="_blank">https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized</a></p> <p>Hyperparameters:</p> <p>LR: 1 e-4<br> Epochs: 15<br> Logging step: 5000</p> <h2>2. Use Optuna to find the Best decoding Parameters.</h2> <p>Notebook: <a href="https://www.kaggle.com/code/royalacecat/lb-0-442-the-best-decoding-parameters" target="_blank">https://www.kaggle.com/code/royalacecat/lb-0-442-the-best-decoding-parameters</a></p> <p>best_params = {'alpha': 0.3802723523729998, 'beta': 0.053996879617918436, 'beam_width': 1024}</p> <h2>Score:</h2> <table> <thead> <tr> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>0.432</td> <td>0.518</td> </tr> <tr> <td>0.435</td> <td>0.518</td> </tr> </tbody> </table>
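<p>The Optuna search for decoding parameters can be reproduced roughly as follows, assuming a pyctcdecode decoder plus cached validation logits and reference texts (variable names are illustrative):</p> <pre><code>import optuna
from jiwer import wer

# `decoder` is a pyctcdecode BeamSearchDecoderCTC, `val_logits` a list of (time, vocab)
# log-probability arrays and `val_refs` the matching reference sentences.
def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    beta = trial.suggest_float("beta", 0.0, 1.0)
    beam_width = trial.suggest_categorical("beam_width", [128, 256, 512, 1024])
    decoder.reset_params(alpha=alpha, beta=beta)
    preds = [decoder.decode(lg, beam_width=beam_width) for lg in val_logits]
    return wer(val_refs, preds)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)  # e.g. {'alpha': 0.38, 'beta': 0.05, 'beam_width': 1024}
</code></pre>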
Bengali.AI Speech Recognition
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for the additional questions.</p> <p>Assume the following case for the CTC tokenizer vocabulary.<br> Have: "," and "?"<br> Does not have: "-" and "!"</p> <p>In this case, the possible words in the LM corpus are "house", "house," and "house?"</p> <p>I wasn't sure if wav2vec could handle the symbols well, but I thought they should be recovered from the information in the acoustic model. (e.g., inflection and breath between words). However, in the top solution, the punctuation model with only textual information seems to work well enough. I was a bit surprised.</p> <p>Thank you. Please feel free to ask additional questions if anything is unclear.</p> <p>1.No specific LMs have been created.<br> kenLM scores used in Wav2Vec2ProcessorWithLM were used.</p> <pre><code> kenlm.LanguageModel() model.score(pred_1) </code></pre> <p>My understanding is that the more natural the sentence, the better the score.</p> <p>2.The top four symbols in the competition data were included. <code>[',', '-', '?', '!']</code><br> 3-4.The LM corpus only retains symbols and vocabulary contained in the tokenizer vocab. At this time, I did not single out consecutive symbols (but I think I should have done so). I replaced it with one in the predict sentence. <br> 5.single A6000 or RTX 4090.</p> <p>Thank you for organizing this competition.<br> I am new to speech recognition and have learned a lot. Thanks to the competition participants and hosts.</p> <h1>Solution Overview</h1> <ul> <li>Dataset cleaning</li> <li>Punctuation</li> <li>Simple Ensemble (Rescoring)</li> </ul> <h1>Dataset</h1> <p>First, I applied a data filter. Since I'm not familiar with the audio domain, I used a simple filter.<br> One was the ratio between the transcribed text and the length of the audio data. I filtered the data based on a 20 characters per audio second. If it's higher than that, I considered the possibility that there are too many characters for the length of the audio data. It could be abnormally fast speech or corrupted audio data.<br> Another filter I used was based on the prediction results from the yellow_king model. I filtered out data with scores above a certain threshold (CER &gt; 0.6).<br> By training on clean data, I was able to start off with a good score.</p> <p>Because the competition data set was so large, training was repeated on a subset of it rather than using all of it. Once the model achieved some performance, the WER for the remaining data set was calculated and added to the data. This round was repeated several times.</p> <h2>external data</h2> <p>kathbath, openslr37, 53, ulca<br> These external data were also filtered</p> <h1>Model</h1> <p><code>IndicWav2Vec</code><br> This model was very powerful for the test set.<br> Since the previous competition report suggested that retraining a model once it had converged would not perform well, I used a pre-trained model instead of bengali's fine-tuned checkpoints (converted to HF model from faireseq checkpoints). 
Another reason was that I customized vocab (Mentioned in the section on punctuation.).</p> <h2>LM Model</h2> <p>5-gram Language Model (using all of Indiccorpus v1 and v2, BangaLM, and mC4). The text corpus totaled 45GB, with the 5gram.bin file being around 22GB in size.<br> Using pyctcdecoder, alpha, beta and beam_width were adjusted with example audio CV. This initially provided a 0.01 boost to the LB score, but the benefit decreased as the score improved, eventually settling at around 0.003.</p> <h1>Punctuation</h1> <p>punctuation is very important.<br> Instead of creating a punctuation model, I took the simple approach of including punctuation in the vocab and LM corpus. Even when including punctuation, wav2vec was largely unable to predict it. However, it was found that including them in the vocab allowed them to be considered as candidates during LM decoding. Simply by including punctuation in both the model and the LM corpus, it was possible to improve the score from 0.44 to 0.428.</p> <h1>Ensemble (Rescoring)</h1> <p>Due to the structure of the output, it was not possible to simple ensemble at the logit level. Therefore, a strategy was adopted to calculate the LM Score for decoded sentences using the LM model, and the sentence with the highest score was selected.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1197187%2F5e6ee0ffb1f1b28b93f1a744fe77c438%2Fbengali.png?generation=1697624102630269&amp;alt=media" alt=""></p> <p>Sentence used the output of three models with variations on training data and decoding parameters. Both public and private models improve by about 0.006 - 0.008.</p> <h1>Not working for me</h1> <ul> <li>xls-r-300m<ul> <li>Common Voice's CV is competitive with indicwav2vec, but performs very poorly for the test set. Not effective when added to an ensemble.</li></ul></li> <li>Speech Enhancement<ul> <li>It was found that the score was poor for example audio with reverb, so noise removal was performed using DeepFilterNet. There were cases where this boosted the LB score and cases where it didn't, so it was finally not used.</li></ul></li> <li>Adding Assamese<ul> <li>my understanding is that it shares characters with Bengali. Addition did not improve score.</li></ul></li> </ul>
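<p>The rescoring ensemble described above boils down to a few lines: decode with each model/decoding configuration, score every candidate sentence with the KenLM model, and keep the best one (the LM path and the placeholder predictions are illustrative):</p> <pre><code>import kenlm

lm = kenlm.LanguageModel("bn_5gram.bin")   # the same LM used for CTC decoding

def pick_best(candidates):
    """candidates: decoded sentences from the different models / decoding parameter sets."""
    return max(candidates, key=lm.score)   # higher KenLM log-score = more natural sentence

pred_1, pred_2, pred_3 = "...", "...", "..."   # outputs of the three ensemble members
final_prediction = pick_best([pred_1, pred_2, pred_3])
</code></pre>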
Bengali.AI Speech Recognition
90th Place Solution for the Bengali.AI Speech Recognition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank the hosts for the wonderful task! I feel so bad that I had only 3 days for this competition and didn't join earlier.</p> <p><strong>Context</strong></p> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/bengaliai-speech/overview" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/bengaliai-speech/data" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/data</a></li> </ul> <p><strong>Overview of the Approach</strong></p> <p>Like almost everyone else, I used a model pretrained on Bengali speech. The main parameters for fine-tuning are listed below</p> <pre><code>=, =, =, =, =, =, =, =, =, =e-, = </code></pre> <p>and the following configuration was used</p> <pre><code>model.freeze_feature_extractor()
model.freeze_feature_encoder()
</code></pre> <p><strong>Details of the submission</strong></p> <p>It took me 16h to fine-tune a model on 10% of the train data for 3 epochs on my GPU — that's where I felt the need for more time.</p> <p>I found that increasing <code>beam_width</code> in the decoder from the default to 2048 led to a 0.003 boost on both public and private (and inference time is ~8h, so we can't really increase it further).</p> <p><strong>Sources</strong></p> <p><a href="https://www.kaggle.com/code/takanashihumbert/bengali-sr-wav2vec-v1-bengali-training" target="_blank">https://www.kaggle.com/code/takanashihumbert/bengali-sr-wav2vec-v1-bengali-training</a> — notebook that I used for fine-tuning<br> <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/435300" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/discussion/435300</a> — datasets and model checkpoints for resource-efficient training</p> <p>Hm, that's very interesting, because this was my second submission and I got 0.521.</p> <p>Yeah. The one with the 0.439 public score.</p>
Bengali.AI Speech Recognition
92th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Bengali.AI &amp; Kaggle for organizing this competition. </p> <h1>Overview of the Approach</h1> <h2>ASR Model</h2> <ol> <li>Dataset: I only used a small subset of the competition dataset, filtering out audios whose duration is less than 1 sec as outliers, then removing all kinds of punctuation and normalizing the text.</li> <li>Augmentation: BackgroundNoise, Gain</li> <li>Pretrained model: arijitx-full-model/wav2vec2-xls-r-300m-bengali</li> </ol> <p>I didn't have the time or resources to train for more epochs (the model was only trained for 1 epoch). With more training, the results could be even better!</p> <h2>Language Model</h2> <p>Same as the public notebook: 5-gram LM (arijitx-full-model/wav2vec2-xls-r-300m-bengali)</p>
Bengali.AI Speech Recognition
A low ranked solution!!!😅
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Summary: I used the "ai4bharat/indicwav2vec_v1_bengali" acoustic model and worked with the competition dataset. Since I don't have any physical GPU, I relied on Kaggle's free GPU. Training the model with the entire competition dataset was impossible for me, so I decided to focus on the 29k validation samples that have more accurate labels and randomly selected an equal number of training samples. I still couldn't train with these 58k samples, so I further reduced them to 30k training samples by random selection and 2k validation samples. I also normalized the labels before training.</p> <p>I experimented with various open-source acoustic models and found that "ai4bharat/indicwav2vec_v1_bengali" performed the best. I fine-tuned this model with my modified dataset in Kaggle. I saved checkpoints after 12 hours of training, then reloaded the checkpoint in another kernel, and repeated this process three times (but you can't compare it with a continuous run, as the learning rate returns to its initial value). Unfortunately, due to hardware limitations, I couldn't make further progress without a personal offline GPU workstation. That's why I gave up on the idea of using augmentations, incorporating a punctuation model, and so on.</p> <p>I noticed that many sentences ended with "|", so I added "|" at the end of all labels. But then I realized that Bengali interrogative sentences generally start with words like {'কি', 'কী', 'কেন', 'কিভাবে','কবে','কখন'}, so I used the "?" sign instead of "|" after all the sentences that started with these question words. Additionally, I incorporated a 5-gram language model into my approach. So, that's it.</p>
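<p>The ending-punctuation heuristic described above can be written directly; the word list is taken from the write-up, and treating each prediction as a single sentence is a simplifying assumption:</p> <pre><code>QUESTION_STARTERS = {'কি', 'কী', 'কেন', 'কিভাবে', 'কবে', 'কখন'}

def add_ending_punctuation(prediction):
    words = prediction.strip().split()
    if not words:
        return prediction
    # Sentences starting with a question word get "?", everything else gets "|".
    ending = "?" if words[0] in QUESTION_STARTERS else "|"
    return " ".join(words) + ending
</code></pre>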
Bengali.AI Speech Recognition
My Fails (Solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a ton!</p> <p>1) I'm surprised none of the high-scoring teams converted audio to spectrograms and applied image algorithms to them. I spent quite some time trying this but nothing worked.</p> <p>2) Next I tried running noise reduction on the audios during inference. It doesn't add a lot of overhead and noise reduction algorithms work pretty well; it was just making my score worse, though. I still think this is a viable option and there should be some (even if slight) improvement, especially in OOD domains, if we find the sweet spot for noise reduction.</p> <p>3) What I tried next was implementing/using a linear probe instead of fine-tuning. Fine-tuning affects OOD generalization, and theoretically linear probes should perform better (my half-baked implementation couldn't, though). I was out of time at this point.</p> <p>Hopefully this helps someone somewhere.</p>
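<p>For anyone who wants to revisit the inference-time noise reduction from (2), the noisereduce package gives a quick baseline; the partial-reduction setting is only a guess at where the "sweet spot" might be:</p> <pre><code>import librosa
import noisereduce as nr

wave, sr = librosa.load("example.ogg", sr=16000)
# prop_decrease below 1.0 applies only partial noise reduction instead of the full amount.
cleaned = nr.reduce_noise(y=wave, sr=sr, prop_decrease=0.5)
</code></pre>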
Bengali.AI Speech Recognition
🥈24th in two weeks and `No space left on device`!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/ravi20076" target="_blank">@ravi20076</a> </p> <h2>First of all, THANK YOU ALL!</h2> <p>As a late joiner, It was so helpful to read insightful discussions of <a href="https://www.kaggle.com/imtiazprio" target="_blank">@imtiazprio</a> <a href="https://www.kaggle.com/reasat" target="_blank">@reasat</a> <a href="https://www.kaggle.com/tugstugi" target="_blank">@tugstugi</a> <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> <a href="https://www.kaggle.com/mbmmurad" target="_blank">@mbmmurad</a> and list goes on!</p> <h2>Approach</h2> <ul> <li>Code: <a href="https://github.com/bayartsogt-ya/bengali-speech-2023" target="_blank">https://github.com/bayartsogt-ya/bengali-speech-2023</a></li> <li>Inference: <a href="https://www.kaggle.com/code/bayartsogtya/submit-to-restore-punctuation/notebook" target="_blank">https://www.kaggle.com/code/bayartsogtya/submit-to-restore-punctuation/notebook</a></li> <li>Backbone Model: <a href="https://huggingface.co/facebook/wav2vec2-xls-r-300m" target="_blank"><code>facebook/wav2vec2-xls-r-300m</code></a></li> <li>LM: KenLM 5-gram (16G) trained on <a href="https://github.com/AI4Bharat/IndicBERT#indiccorp-v2" target="_blank">IndicCorpV2 corpus</a> and <a href="https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/tree/main" target="_blank">Bengali Hate Speech Dataset</a></li> <li>More Data: Competition data + MadASR2023 + OpenSLR53</li> <li>Data Augmentation: <code>audiomentations.AddBackgroundNoise</code> using subset of "Bollywood Music", "Applause" and "Theme Music" from <a href="https://research.google.com/audioset/dataset/index.html" target="_blank">AudioSet dataset</a></li> <li>Restore Punctuation <a href="https://github.com/xashru/punctuation-restoration" target="_blank">https://github.com/xashru/punctuation-restoration</a></li> </ul> <h2>Important lesson for future me!</h2> <ul> <li><strong><code>[No space left on device]</code></strong> Just write your own custom dataset class!!!<ul> <li>Look at <a href="https://github.com/bayartsogt-ya/bengali-speech-2023/blob/main/train2.py" target="_blank">https://github.com/bayartsogt-ya/bengali-speech-2023/blob/main/train2.py</a>.</li> <li>Just increase <code>dataloader_num_workers</code> if you have enough cores. Preparing input and use <code>datasets.Dataset.set_transform</code> is complicated and <strong>not</strong> efficient.</li> <li>Be simple! read it from a file system in <code>__getitem__</code> and apply whatever you want on the fly!</li></ul></li> <li><strong><code>[Quality vs Quantity]</code></strong> 0.475 on only validation VS 0.421 on train (filtered) validation madasr openslr53 😂<ul> <li>It is obvious that filtering on big datasets helps!</li></ul></li> <li><strong><code>[Manually Check Output]</code></strong> See where your model is making mistake on your validation data.<ul> <li>This helped me to see that punctuations (dari, comma, question mark, etc…) were counted as substitutions and deleted.</li></ul></li> <li><strong><code>[Stop procrastinating on small things]</code></strong> You could have checked different chunk_length_s way before deadline. 
But you did not here! -&gt; This is not calling out you did not try to train whisper!</li> </ul> <h2>Guilt of Overfitting to LB!</h2> <p>Because the test (out-of-distribution) data was so different from the train datasets, it was really about overfitting to the public leaderboard.</p> <pre><code>!kaggle competitions submissions -v bengaliai-speech &gt;&gt; ./bengali-speech-submissions.csv
df = pd.read_csv("./bengali-speech-submissions.csv")
np.corrcoef(df.publicScore, df.privateScore)[0, 1]
...
sns.lineplot(df, y=, x=, hue=)
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5055010%2Ff73eba4301b1d7ec468eb29fba3ddb02%2Fpublic_vs_private.png?generation=1698196214469291&amp;alt=media" alt=""></p> <h2>In the End</h2> <p>It is all about learning!<br> Even though it is always frustrating to feel you were so close, or that so much more could or should have been done, I appreciate this learning path and that's why I joined Kaggle in the first place! 🫡</p>
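<p>The "write your own custom dataset class" lesson boils down to something like the sketch below, which reads audio from the file system and applies augmentation on the fly inside __getitem__ (a stripped-down illustration, not the repository's exact train2.py; the column names and the collator are assumed):</p> <pre><code>import torch
import librosa
from torch.utils.data import Dataset, DataLoader

class BengaliSpeechDataset(Dataset):
    def __init__(self, df, processor, augment=None):
        self.df = df.reset_index(drop=True)   # expects "path" and "sentence" columns
        self.processor = processor
        self.augment = augment

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        wave, sr = librosa.load(row["path"], sr=16000)   # read from the file system on the fly
        if self.augment is not None:
            wave = self.augment(samples=wave, sample_rate=sr)
        inputs = self.processor(wave, sampling_rate=sr, return_tensors="pt")
        labels = self.processor.tokenizer(row["sentence"]).input_ids
        return {"input_values": inputs.input_values[0], "labels": torch.tensor(labels)}

# Heavy I/O and augmentation then run inside the dataloader workers, so just scale
# dataloader_num_workers with the available CPU cores:
# loader = DataLoader(dataset, batch_size=8, num_workers=8, collate_fn=ctc_collator)
</code></pre>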
BirdCLEF 2021 - Birdcall Identification
12th place in one shot
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>I am sorry to be the one preventing you from being solo gold!</p> </blockquote> <p>Oh, don't worry, you're not the only one :) And I'm glad I didn't push you out of solo gold, you earned it.</p> <p>This post omits some details, they will be given in a BirdCLEF working notes submission and linked here.</p> <h2>Background</h2> <p>As in the <a href="https://www.kaggle.com/c/birdsong-recognition" target="_blank">Cornell Birdcall Identification</a> challenge, I didn't find time to join early. Inspired by <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s late strong start into the <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/" target="_blank">Rainforest Connection Species Audio Detection</a>, I decided to try the same. My first models finished training 3 weeks before the deadline, but I wanted to add in some more ideas for my first submission, seeing that the public LB got better and better. As the days passed, I got curious if it would be possible to do a single-sub solo gold. In the end I missed it by one place and one submission, but it was still a fun exercise. Well done, everyone! And thanks a lot to the organizers for the challenge and the helpful <a href="https://www.kaggle.com/c/birdclef-2021/discussion/230000" target="_blank">getting started resource collection</a>!</p> <h2>Outline</h2> <p>My solution is based on <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183571" target="_blank">my work for the Cornell challenge</a>, given that the setup was almost the same: Train on weakly-labeled focal recordings from xeno-canto, predict on soundscapes in 5-second windows. The only difference was a higher number of annotated soundscapes available, which I used solely for model selection and tuning of thresholds.</p> <p>All my models are SED models trained on random 30-second crops, using foreground and background labels as binary targets (treating foreground and background the same). I used the ensemble of three <a href="https://github.com/qiuqiangkong/audioset_tagging_cnn" target="_blank">pretrained Cnn14 (PANNs)</a> as a basis, added some variants, and chose a final blend of 18 models. To predict on soundscapes, I compute the union of 30-second crop detections as the set of allowed species, then report the 5-second window detections with a low threshold.</p> <p>I spent some time trying to make use of the geocoordinates, but was not able to improve results on the training soundscapes and thus did not include it in my submission.</p> <h2>Base models</h2> <p>Models are the same as in <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183571" target="_blank">my Cornell challenge writeup</a>: A vanilla CNN, a small ResNet, and PANN's Cnn14. I've also experimented with PANN's ResNet38, which scored just a little below Cnn14 on the xeno-canto recordings, but performed really poor on the soundscapes. This is probably due to its excessively large receptive field (around 55 seconds if my receptive field calculation code is correct). 
<a href="https://www.kaggle.com/yaroshevskiy" target="_blank">@yaroshevskiy</a> found the same for <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183300" target="_blank">EffNet in the Cornell challenge</a>.</p> <h2>Augmentation</h2> <p>Training data was augmented with bird-free background noise from two public datasets, as explained in <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183571" target="_blank">my Cornell challenge writeup</a>.</p> <p>I experimented with pitch shifting again, again implemented by varying the mel filterbank, but this time separately per example, not per batch. I allowed pitch to vary by 5%.</p> <p>From <a href="https://www.kaggle.com/vlomme" target="_blank">@vlomme</a>'s <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183269" target="_blank">2nd place solution in the Cornell challenge</a>, I took the idea of warping the magnitudes in the mel spectrogram. PANN's Cnn14 frontend has a "log(1 + 10^a * x)" magnitude transformation in my implementation, with "a" initialized to 5 and trained by backpropagation. I changed "a" randomly by +/- 50% during training, shifting the result such that the maximum output value matches the unmodified "a" (@vlomme did not need to pay special attention because he normalized each excerpt by its maximum).</p> <p>Also from <a href="https://www.kaggle.com/vlomme" target="_blank">@vlomme</a>, I copied an augmentation lowering a random fraction of high frequencies (up to 50% of the spectrum by up to 50% magnitude, with a linear fade to low frequencies instead of a hard cut).</p> <h2>Variations</h2> <p>I varied the following aspects in models, augmentation and training:</p> <ul> <li>Architecture: Vanilla, small ResNet, PANN's Cnn14 and ResNet38</li> <li>For vanilla and small ResNet: Having the mel filterbank end at 10 kHz or 15 kHz (no consistent difference)</li> <li>For the PANN: Subtracting the median over time from the input or not (improved score on the xeno-canto recordings, with mixed results on soundscapes)</li> <li>For the PANN: Using magnitude warping augmentation or not</li> <li>For the PANN: Using frequency damping augmentation or not</li> <li>Having 1% of examples consist of background noise only, with no labeled birds, or not</li> <li>Training on stereo recordings with randomly downmixed channels instead of mono recordings (only for those recordings that were included in previous challenges, to avoid crawling xeno-canto), or not (did not make much of a difference)</li> <li>Setting background bird targets to 0.6 instead of 1.0, or not</li> <li>Making the log-mean-exp pooling sharpness trainable per class, or fixing it to 1.0</li> <li>Using 8-fold multi-sample dropout, or not (as done in the <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183339" target="_blank">4th place Cornell challenge solution</a>; generally seemed to improve results)</li> </ul> <p>In total, I had 27 models in the end.</p> <h2>Inference</h2> <p>The inference procedure was kept almost unchanged from <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183571" target="_blank">the Cornell challenge submission</a>:</p> <ol> <li>the set of species for a recording is established by predicting on 30-second windows (which matches the training crop length) with 50% overlap and a threshold of 0.5</li> <li>the species per 5-second window are established by predicting on that window with a threshold of 0.15 or 0.08, limited to the set of species established for the recording</li> </ol> 
<p>Ensembling is done by averaging the logits of the models after pooling (for each 30-second or 5-second window). Averaging the probabilities or majority voting for each 5-second window produced worse results.</p> <p>The threshold of 0.08 was hand-optimized for the final 18-model ensemble based on F1-score on the training soundscapes. For all experiments before, I used 0.15, hand-optimized in the same way for a single model.</p> <p>For the final 18-model ensemble, inference takes 6 seconds for a 10-minute file on a GTX 1080 Ti.</p> <h2>Model selection</h2> <p>The first five models I trained were the three PANN variants from <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183571" target="_blank">the Cornell challenge submission</a>, the best vanilla CNN and the best small ResNet. I trained an additional ResNet with proper Glorot initialization instead of the PyTorch default, and bagged the six models. Separately, they achieve F1 scores of 0.73, 0.71, 0.70, 0.70, 0.69, 0.67. Bagged, they get 0.760.</p> <p>On the final day, I had 27 models, the best of which got 0.75 on the training soundscapes. I tried to manually form an ensemble of five to six models, but in five attempts, only one scored a little better (0.763) than the old ensemble. I wrote a script that would start with 24 models (all but the ResNet38) and greedily try removing models to improve the ensemble, started it three times and went for a walk. It found an 18-model ensemble scoring 0.766. Optimizing the threshold for that model (from 0.15 to 0.08), it went up to 0.773.</p> <h2>Results</h2> <p>The 18-model ensemble I submitted scored 0.7595 on the public leaderboard, lower than I hoped. Afraid that I overfitted to the training soundscapes, after some hesitation, I gave up the single-sub idea and also submitted the old 6-model ensemble, which scored 0.7521.</p> <p>On the private leaderboard, the 18-model ensemble scored 0.6715, the 6-model ensemble did 0.6663.</p> <h2>What didn't work</h2> <p>I spent some time trying to make use of the geocoordinates of the training recordings and test soundscapes.</p> <ol> <li>I built species lists by checking which species occur in the xeno-canto training recordings in a radius of 60km of each site.<ol> <li>I tried using these lists to filter the predictions for the training soundscapes, with no improvements.</li> <li>I tried training separate 5-model ensembles for the two training soundscape sites, each limited to the expected species (using all recordings that have these species in the foreground or background, not only the recordings close to the site). This resulted in worse performance than ensembles trained on all species.</li></ol></li> <li>I tried training a separate model for the two training soundscape sites, weighting each xeno-canto recording in the loss function based on its distance to the site. Again, this resulted in worse performance, and was also not useful when added to an existing ensemble.</li> </ol> <p>On the final day, I experimented with a cheap form of test-time augmentation: scaling the input waveform. With the model I tried, this improved results a little (e.g., predicting on three waveforms scaled by 0.75, 1.0, 1.1). With different models or an ensemble, it had an adverse effect or no effect at all.</p> <h2>What would have worked a bit better</h2> <p>The inference procedure used the same kind of pooling (log-mean-exp) for the 30-second windows and the 5-second windows. 
When doing this writeup, I wondered if max-pooling would be better for the 5-second windows. With an adapted threshold (0.55 instead of 0.08), it indeed works better, improving training soundscape F1 score from 0.773 to 0.778. Rerunning the model selection script gave an 11-model ensemble scoring 0.785. This would have given 0.7709 public and 0.6761 private score, ranking 9th place.</p>
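<p>The greedy ensemble pruning mentioned in the model selection section can be sketched as follows (the <code>score_ensemble</code> callback, which would evaluate F1 on the training soundscapes, is a hypothetical placeholder):</p> <pre><code>def greedy_prune(models, score_ensemble):
    # start from the full pool and drop members while the validation score improves
    current = list(models)
    best = score_ensemble(current)
    improved = True
    while improved and len(current) &gt; 1:
        improved = False
        for m in list(current):
            candidate = [x for x in current if x is not m]
            score = score_ensemble(candidate)
            if score &gt; best:
                best, current, improved = score, candidate, True
                break  # greedily accept the first improving removal
    return current, best
</code></pre>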
BirdCLEF 2021 - Birdcall Identification
22nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations on all teams that have managed to secure top places and develop well performed solutions. It was a tough problem with main challenges the shift domain &amp; noisy/weak labelling as described by many others. What follows is not rocket science, mostly is based on ideas from previous audio competitions, however I thought that It might be useful to describe it. </p> <hr> <p><strong>TLDR</strong></p> <p><strong>Our team's submission is a weighted ensemble of 3 groups of models (CNNs + SED) with different backbones, training procedures, durations etc + PP (threshold optimisation)</strong> </p> <ul> <li><p>Read about our team's ensemble approach by <a href="https://www.kaggle.com/rohitsingh9990" target="_blank">@rohitsingh9990</a> </p></li> <li><p>Read about our <a href="https://www.kaggle.com/c/birdclef-2021/discussion/243349" target="_blank">9th Public solution outline &amp; PP</a> by <a href="https://www.kaggle.com/jaideepvalani" target="_blank">@jaideepvalani</a></p></li> </ul> <p><img src="https://i.ibb.co/kGsm45V/birdclef-inference-07818.png" alt=""></p> <hr> <p><strong>Modelling approach (my journey)</strong></p> <p>Before I start working on this problem I gave some time to read past year's top-10 solutions and view/analysed many many spectrograms. In overall, I performed more than 60 training experiments (single fold most of them) with different pipelines. Next, I describe briefly the main approaches</p> <p>1) CNNs trained with precomputed mel-specs (5, 7, 10 sec)</p> <p>2) CNNs with different pooling/heads a) attention; b) plain dense head; that takes the raw audio as input (10, 15 sec) and compute mel-spec transformations on the fly. This increased training time but gave more flexibility trying different params as the penultimate goal was to add some diversity to the ensemble.</p> <p>3) SED models (few experiments at start of comp. 
however none of them used in team's ensemble, <a href="https://www.kaggle.com/rsinda" target="_blank">@rsinda</a> had better results with sed models so I leave it for him to describe this part)</p> <p>For all three cases above numerous backbones tested but the following worked better for me:</p> <ul> <li>backbones: effnet-B0, regnetx, densenet121</li> </ul> <p>The basis for all training pipelines was the 2nd solution code from last year's comp shared generously by <a href="https://www.kaggle.com/vlomme" target="_blank">@vlomme</a> with many modifications along the road - I guess the most important one was to use the audio ratings as loss weights (sample wise) to account for the "noisy" low rating recordings.</p> <p>Mel-spectrograms: <code>n_fft=2048</code>, <code>n_mels=128</code>, with diff <code>hop_lengths = [375, 512, 878]</code> <br> 5, 7, 10 sec audio crops, started with 3 channels but towards the end I used single channel mels </p> <p>Optimizer/Loss: Adam / BCELogits &amp; FocallLoss (tried many others but mainly these two)</p> <p>Metrics to monitor: f1 at 03,05,07, max_f1, precision, recall (during training and on SC)</p> <p><strong>Training - other tips</strong></p> <ul> <li>use of secondary labels</li> <li>pink + white noise</li> <li>use Time-Freq masking</li> <li>Mixup / Mixor (for some of them)</li> <li>use Multi-sample dropout a.k.a. big dropout <a href="https://arxiv.org/abs/1905.09788" target="_blank">(see paper)</a> for better generalisation (tried on last days - haven't tested on LB alone but didn't see improvement using it in team's ensemble)</li> <li>used wandb for logging experiments &amp; excel sheet with more details</li> <li>check heatmaps with model predictions for each Train SC (eg see figure below)</li> </ul> <p><img src="https://i.ibb.co/WzSVZjf/fig2.png" alt="img"></p> <p><strong>Things that didn't work for me / didn't show improvement</strong> </p> <ul> <li>training a second stage model on hot-slices (selected by the best at the time 1st stage model)</li> <li>training with reduced bird classes ~300-330</li> <li>using deltas for 3-channel</li> <li>using diff random_power() - ie, change contrast</li> <li>finetuning checkpoints from previous year comp (trained on 264 classes) from <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> 3rd place solution</li> <li>tweaking <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> type model from last year comp <a href="https://www.kaggle.com/iafoss/cornell-birdcall" target="_blank">described here</a></li> </ul> <p><strong>Acknowledgments</strong></p> <p>Thanks to hosts and kaggle for organizing this competition &amp; every single kaggler that shared his notebooks/tips/concerns in public. </p> <p>Thanks to my teammates <a href="https://www.kaggle.com/rohitsingh9990" target="_blank">@rohitsingh9990</a>, <a href="https://www.kaggle.com/jaideepvalani" target="_blank">@jaideepvalani</a>, <a href="https://www.kaggle.com/rsinda" target="_blank">@rsinda</a>, <a href="https://www.kaggle.com/tomohiroh" target="_blank">@tomohiroh</a> and especially our team leader <a href="https://www.kaggle.com/rohitsingh9990" target="_blank">@rohitsingh9990</a> for his efforts to integrate/test all models to our final ensemble.</p> <p>You're welcome!! :)</p>
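<p>Coming back to the training details above: the "audio ratings as sample-wise loss weights" trick can be sketched as a per-sample weighted BCE (the normalisation by the maximum rating and the clamping are illustrative assumptions):</p> <pre><code>import torch.nn.functional as F

def rating_weighted_bce(logits, targets, ratings, max_rating=5.0):
    # per-sample weight in (0, 1]: low-rated (noisier) clips contribute less
    w = (ratings.clamp(min=1.0) / max_rating).view(-1, 1)
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * w).mean()
</code></pre>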
BirdCLEF 2021 - Birdcall Identification
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ol> <li><p>Padding with zeros worked pretty well, actually the whole goal of this setup is that the model will partly ignore the irrelevant chunks.</p></li> <li><p>Applied it to all at the same time, so each chunk might be mixed with any other chunk.</p></li> </ol> <p>+0.01 to everything</p> <p>Partly, yes and some from this years data (see also text). Did you use any?</p> <p>Thanks! Similar idea as blending 5sec with 10sec preds for example.</p> <p>It is just standard mixup, nothing special.</p> <p>Yes all mixup is done on spectrograms. We didnt have time to try on raw audio in this comp, it might have also cpu bottlenecked us.</p> <p>I think it was our strongest boost on pure models out of all training related things, but we introduced it quite early so I can't fully say how much it contributed in the end with additional stuff.</p> <p>Thanks to Kaggle, the hosts, and our fellow competitors for this very interesting competition. In the following, we want to give a rough overview of our 2nd place solution. We joined this competition in the last 3 weeks and worked hard to fill the knowledge gap to last and previous years’ participants. </p> <p>As always, this has been an incredible team effort and has been equal contribution by <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>, <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a> and <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> - I was just the lucky one winning the roll to post the solution :)</p> <p><strong>TLDR</strong></p> <p>Our solution is an ensemble of several CNNs, which take a mel spectrogram representation of a 30 sec wav-crop as input. We used mixup and added background noise as an augmentation method to improve generalization of our models. For inference, we predict on 5 sec snippets and refine the result by a binary bird/nobird classifier and postprocessing to account for metadata.</p> <p><strong>Validation</strong></p> <p>I am sure most participants are aware that a robust validation setup is quite difficult in this competition given the fact that test contains different species, and specifically also two additional sites for which we have no validation labels at all. We still tried to come up with a somewhat robust validation setup. </p> <p>All our models are only fit on short clips and we always evaluate on train soundscapes. That means that the idea of having multiple folds is redundant here and we basically have only one full validation set containing all soundscape files. <br> One thing we quite soon noticed was that if you evaluate on full soundscapes, the validation F1 score is significantly higher than on LB. Our final validation on the full soundscapes was close to 0.84. We figured that this has mostly to do with the presence of 3 full songs in validation that do not contain any calls at all. We also saw on the sample submission score that at least public LB contains more birds than the full soundscape dataset would suggest. 
So as a first step, we mostly focused on evaluating all but these three songs for our validation score, let’s call it CV-3 (~0.81).</p> <p>To make it even more robust, we decided to introduce bootstrapping with the following steps:</p> <ul> <li>Remove 3 songs without calls</li> <li>For k times (e.g., 10) sample 80% of the remaining songs - this should emulate the full test dataset (public+private)</li> <li>Apply any kind of threshold selection technique, post processing, etc. on this data as we have to do the same when submitting (as we have a combination of public / private there and don’t know what is what).</li> <li>For j times (e.g., 50) sample 65% of the remaining songs - this should emulate the private test dataset.</li> <li>Calculate the score on each of these j samples.</li> <li>Report average, median, min, max, std scores across all k times j (e.g., 500) subsets</li> </ul> <p>This is how such an evaluation then looks like:</p> <p><img src="https://i.imgur.com/4yOisgS.png" alt=""></p> <p><strong>Code Pipeline and data setup</strong></p> <p>We used github for code storage and versioning and neptune.ai for logging and sharing our experiments. To reduce CPU bottleneck we could have preprocessed mel spectrograms to disk, but in order to be flexible with respect to trying different hyperparameters we instead performed mel spec transformation on GPU using <a href="https://pytorch.org/audio/stable/index.html" target="_blank">torchaudio</a>. We also did mixup augmentation on the GPU and used mixed precision training to further speed up runtime. For all models we used pytorch with CNN backbones from <a href="https://github.com/rwightman/pytorch-image-models/" target="_blank">timm</a>. </p> <p><strong>Binary classifier</strong></p> <p>We trained a binary classifier to predict bird / no bird in order to try various ideas with respect to pre- and postprocessing. In the end, we only use it for one postprocessing step. For this, we used 3 datasets containing binary labels of 10sec recordings (freefield1010, warblrb10k, BirdVox-DCASE-20k) available <a href="http://dcase.community/challenge2018/task-bird-audio-detection" target="_blank">online</a>. The model is very similar to SED model used in several past solutions.</p> <p>Backbones: seresnext26t_32x4d, tf_efficientnet_b0_ns</p> <p><strong>Bird classifier</strong></p> <p>Our models were pretty similar and were all trained on 30 sec random crops of the train_short data. 30 seconds was beneficial as we do not know where the labels are (weak labels). To account for the 5sec snippet format of test data, we reshaped the 30sec crops into 6x 5sec parts before feeding through the backbone. After the backbone we reshaped the data again to re-arrange to the 30sec representation by concatenating the respective time segments and then used simple pooling of time and frequency dimension before forwarding through a simple one layer head which gave us the 398 bird classes. We naively used the union of primary and secondary label as target. For inference then we directly fed 5sec snippets to the model.</p> <p><img src="https://i.imgur.com/M81KcGr.png" alt=""></p> <p>We used the following backbones: resnet34, tf_efficientnetv2_s_in21k, tf_efficientnetv2_m_in21k, eca_nfnet_l0</p> <p>We trained with BCE loss using Adam optimizer and cosine annealing schedule. We saw improvements using the following tricks:</p> <ul> <li>Use the rating for weighting the recordings contribution to the loss. 
The assumption is that recordings with a lower rating have worse quality with respect to audio and label and should contribute less to model training. In detail we weight each sample by rating/max(ratings).</li> <li>Label smoothing. We used label smoothing to account for noisy annotations and absence of birds in “unlucky” 30sec crops.</li> <li>Clever augmentation. Similar to past solutions we used no-bird background noise and mixup as main augmentation methods. For background noise we used a mix of no-call parts of this years validation set and past years data. We also not only used mixup between recordings but also within a recording by mixing between the 5 sec parts. In mixup we also weight the labels and sample weights accordingly.</li> </ul> <p><strong>Ensembling</strong></p> <p>The ensembling of our models was straightforward since all output the same shapes. We took a simple mean of the predictions after a step of post-processing which is explained in the next paragraph. At the end we used 9 models which differ mostly on hyperparameters and backbones and fitted each model with 6 different seeds. Our final kernel ran in approximately 1h, so there was still quite some room in the kernel.</p> <p><strong>Post processing</strong></p> <p>The first step for post processing involved choosing an appropriate threshold for making hard predictions for which birds are present in a 5 second segment in soundscapes. As we all know, given the f-score metric, this is one of the most crucial steps of the solution. Even though optimizing a hard threshold on validation and applying it on LB worked quite well, we understood that there are some issues with that approach.</p> <p>First, we quickly realized that test and regular validation had different proportions of nocalls and calls which was also apparent from the different sample submission scores (only nocalls). This means that in general you wanted to predict more birds on LB meaning lowering thresholds to a certain degree could be helpful for improving public LB. We also accounted for this imbalance in our validation setup by removing the three nocall songs (see above, CV-3).</p> <p>Second, choosing hard thresholds can be problematic when you introduce new blends to your solution. Each new model has certain shifts in probabilities for all and certain birds, so the global thresholds can shift quite a bit. Now it became hard for us to properly judge if new models work well in the blend on validation and LB based on the merit of the models, or only based on some arbitrary probability / threshold shifts that emerged from it. And it was unclear what is a result of random fluctuation, or model properties.</p> <p>To that end, we decided to move to a percentile based thresholding approach. In detail, this meant that we set a certain percentile of predictions we want to do on a validation or test set, and calculated the according threshold that way. We did this by flattening all predictions, and then calculating the threshold. On CV-3 this looked for example like that:</p> <p><code>threshold = np.percentile(y_preds.flatten(), 0.9987)</code></p> <p>The more birds a set contains, the lower the percentile can be if predictions are decently ranked. 
The good thing now with this approach was that we could keep the percentile stable, and just exchange models, blends and other post processing and if the quality in our ranking of predictions improved, also the score improved given this fixed percentile, because we always predict the same amount of records.</p> <p>After we had this setup, we played a bit with changing the percentile on LB to check how test differs in that sense. We found the optimum on public LB to be at around 0.9980 meaning that quite a few more birds are present. In our final sub we chose 0.9981 and made another gamble with 0.9973. The better sub was clearly 0.9981, and actually even a bit higher could have neted us a potential first place (closer to best percentile on validation).</p> <p>In theory the gamble was legit, because private LB even had more birds as imminent from sample submission. But at the same time it seems that the ranking of predictions was worse, so that lower percentiles introduce too many FPs, meaning that more conservative setting was better. By and large, our choice based on a combination of validation and LB was a very robust one in the end, and we believe that this percentile based approach was way more stable and robust than individual threshold optimization.</p> <p>Additionally, we employed several smaller post processing steps to improve the predictions including attempts like: (1) increasing the probability of birds in songs based on their average prediction probability, (2) smoothing neighboring predictions, or (3) adjusting predictions by the predictions from our binary models. We also removed some unlikely predictions based on distance in space and time given the metadata very similar to how 4th place did.</p> <p><strong>What did not work</strong></p> <p>In the end quite a few things we tried ended up in the blend fostering the diversity in it. But naturally, there are also many different things that did not work. One thing to note is TTA which we could not make work. We had quite some time left in the kernel runtime, so this was a natural area to explore, but TTA with mel spectrograms is not as straightforward as with usual CV data. Furthermore, we tried to explore pseudo tagging in different versions, but also could not improve our blends with it.</p> <p>Thanks for reading. Questions are very welcome. <br> Christof, Pascal &amp; Philipp</p> <p>I think in F-metric based competitions there is always a higher luck factor than in some others. You can only try to mitigate the impact of luck on your standing which is what we tried.</p>
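<p>A small practical note on the percentile-based thresholding described above: for a fraction such as 0.9981, <code>np.quantile</code> takes the fraction directly, whereas <code>np.percentile</code> expects 99.81. A minimal sketch of the idea:</p> <pre><code>import numpy as np

def percentile_threshold(probs, keep_fraction=0.9981):
    # fix the fraction of (segment, species) cells predicted positive,
    # instead of fixing an absolute probability threshold
    thr = np.quantile(probs.flatten(), keep_fraction)
    return probs &gt;= thr   # boolean detections, shape (n_segments, n_species)
</code></pre>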
BirdCLEF 2021 - Birdcall Identification
3rd place submission
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you to the Host, the Kaggle Team for having organised such a good competition. I am very happy to have participated in this one.</p> <p>In the following, I want to explain roughtly what was the approach I used.</p> <p><strong>Summary</strong>:</p> <p>The final solution was an ensembling of 18 checkpoints trained on different CNNS and fold. During training each models are trained on a clip of 20 seconds. A post processing is also applied durgin inference based on the prediction at the "clip level" (20 seconds) and the "segment level" (5 seconds)</p> <p><strong>Explanation:</strong></p> <p>The first observation we can make is that :</p> <ul> <li>For training, we have weak label. So we can use a supervised learning method based on stochastic gradient descent to minimise the loss function.</li> <li>For testing, we are looking to predicting every 5 seconds. Therefore, there is a gap, between the label given to the model during training (sample level) and during inference (5-second segment-level) .</li> </ul> <p>To overcome this issue, I implement an architecture which allows getting prediction for a clip (20 seconds or more depending of your hardware) and also make prediction every 5 seconds in the clip. </p> <p><em>Why training on a clip of 20 seconds and not 5 seconds ?</em> <br> The hypothesis I have made is this one : *training on small clip will introduce noise to our training. *<br> Indeed, we don’t know where and which birds are singing in the 5-second clip , so it is possible to select a clip with nocall for instance. Increasing the length of the clip, allow to reduce this noise.<br> Then, these clips are divided into segments of 5 secondes. For each of these segment, I create a mel-spectrogram based on the torchlibrosa package. Therefore, for a clip of 20 seconds, 4 spectrograms will be generated. We can use these spectrogram to feed a Deep Learning model.</p> <p><strong>Model :</strong></p> <p>The architecture of the model is composed of :</p> <ul> <li>A backbone based on SED model (CNN) which can use different architectures such as seresnet50, EfficientNetB2, EfficientNetB3, etc with the model coming from this github : <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">https://github.com/rwightman/pytorch-image-models</a><br> the backbone has two outputs : the first one is for classifying each segment (one output for every 5 seconds), the second one is for classifying each timestep of the segment. I use the second output to decide if a bird is present in the 5 seconds or not for inference on test (I took the max probability across the time-step axis for each class). I used the first one for feeding the attention block of my "sequential model" used during training.</li> <li>An attention blocks in order to compute the final output (clip-level prediction) which will be needed for the training. The Attention block is applied on the first output of the backbone (the dimension should be (BS, 4, num_class) )<br> During the training, we are trying to minimise the output of the attention block with a cross entropy loss. 
We are trying here to predict the birds present in the 20 seconds clip. I use label smoothing of 0.05 as the label was still noisy. That could help to reduce overfitting. The metric used to select best checkpoint is the PR auc, as F1 score is based on precision and recall, I thought it would be a good idea to use PR AUC for each class then average it in order to have a robust estimation of the model.<br> During inference, I am using the outputs of each timesteps of the segments. The dimension should be (BS, 4, timesteps, num_classes). I take the max probability across the timesteps dimension and got a vector of (BS, 4, num_classes). These vector corresponds to the prediction for each 5-second segment inside the 20 seconds clip. Then a post-processing is applied, using the final output (prediction at 20-second clip-level).</li> </ul> <p><a href="https://ibb.co/hFfvZ6n"><img src="https://i.ibb.co/nMCSnGV/Processus.jpg" alt="Processus"></a></p> <p>Different CNNs have been trained such as SeResNet50, EfficientNetB2, EfficientNetB3, EfficientNetB4, EfficientNetB5, EfficientNetB6, EfficientNetB7. My initial goal was to train each of these model with a 5-folds strategy but because of hardware issues(too long to train), I did not train them on all folds. After that, I just ensemble these models by CNN type. </p> <p><strong>Post processing Inference :</strong></p> <p>To do that I create two ensemblings : one based on the final output (20 sec clip) and the second one based on the max of the prediction of the timesteps segments. Then, to decide if a bird appeared, I need to look first if the bird appeared in the ensembling's prediction of the max timesteps segments (based on a threshold t1) and if the bird also appeared in the ensembling's final prediction (based on a threshold t2)</p> <p><a href="https://ibb.co/mF6B9Mx"><img src="https://i.ibb.co/JF5dvMh/ensembling.jpg" alt="ensembling"></a></p> <p>Data augmentation used : mixup (signal and spectrogram worked)</p> <p>Hardware : GTX 1080TI + Colab </p>
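<p>To make the clip/segment setup above concrete, here is a minimal sketch of turning one 20-second log-mel clip into four 5-second spectrogram segments before the backbone (shapes and the helper name are illustrative, not the author's exact code):</p> <pre><code>import torch

def clip_to_segments(mel, n_segments=4):
    # mel: (batch, n_mels, total_frames) log-mel of one 20-second clip
    bs, n_mels, total = mel.shape
    seg_len = total // n_segments
    segs = mel[:, :, :seg_len * n_segments]
    segs = segs.reshape(bs, n_mels, n_segments, seg_len).permute(0, 2, 1, 3)
    # one spectrogram per 5-second segment, with a channel dimension for the CNN
    return segs.reshape(bs * n_segments, 1, n_mels, seg_len)

# The backbone output is then reshaped back to (bs, n_segments, num_classes)
# before the attention block that produces the 20-second clip-level prediction.
</code></pre>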
BirdCLEF 2021 - Birdcall Identification
42 solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the hosts and Kaggle,<br> and thanks to <a href="https://www.kaggle.com/kkiller" target="_blank">@kkiller</a> for his awesome code</p> <p><a href="https://www.kaggle.com/kneroma/clean-fast-simple-bird-identifier-training-colab" target="_blank">https://www.kaggle.com/kneroma/clean-fast-simple-bird-identifier-training-colab</a></p> <h2>3 steps</h2> <ol> <li>train different models (resnet34, resnet50, resnest50, resnext50_32x4d)</li> <li>find the best ensemble of models, based on train_soundscapes</li> <li>post-process: use season and position to filter birds</li> </ol> <h3>train</h3> <p>I used kkiller's public datasets and tried to add some augmentation, but it did not improve much (less than 0.01).<br> Adding the secondary labels with probability 0.3 gave some improvement. I also studied last year's solutions a lot:</p> <p><a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183208" target="_blank">https://www.kaggle.com/c/birdsong-recognition/discussion/183208</a><br> <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183269" target="_blank">https://www.kaggle.com/c/birdsong-recognition/discussion/183269</a><br> <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183199" target="_blank">https://www.kaggle.com/c/birdsong-recognition/discussion/183199</a><br> <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183204" target="_blank">https://www.kaggle.com/c/birdsong-recognition/discussion/183204</a></p> <p>I also tried efficientnet b0~b5, but it did not work and I don't know why - I would be glad if other players could give some advice.</p> <h3>ensemble</h3> <p>I trained different models and used the train_soundscapes f1 to pick the best epoch.<br> I chose 2 solutions: one with the best local f1, and one with some randomness added, using models not sorted by local f1.</p> <h3>post-process</h3> <p>I used Uber h3 on the latitude and longitude to get a segment id, and used train_metadata.csv to determine which birds belong to which segment. This gave a 0.01 improvement.</p> <h3>finally</h3> <p>Thanks all. I am an NLP person, not very familiar with sound engineering, but the kindly shared public code helped me a lot.</p> <p>my inference code:</p> <p><a href="https://www.kaggle.com/jt120lz/clean-fast-simple-bird-identifier-inference" target="_blank">https://www.kaggle.com/jt120lz/clean-fast-simple-bird-identifier-inference</a></p> <p>just a for loop, choosing the best f1 on train_soundscapes</p>
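<p>The Uber h3 post-processing described above can be sketched roughly as follows (assuming the h3-py v3 API and a hypothetical resolution and lookup table):</p> <pre><code>import h3

RESOLUTION = 3  # hexagon size is an assumed value, not the author's setting

def cell_of(lat, lon):
    # h3-py v3 API; in v4 this call is named latlng_to_cell
    return h3.geo_to_h3(lat, lon, RESOLUTION)

# allowed = {cell_id: set of birds recorded there}, built once from train_metadata.csv
def filter_birds(predicted_birds, site_lat, site_lon, allowed):
    cell = cell_of(site_lat, site_lon)
    return [b for b in predicted_birds if b in allowed.get(cell, set())]
</code></pre>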
BirdCLEF 2021 - Birdcall Identification
49 place solution - meh models, +0.04-0.05 private LB postprocessing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>TL;DR</p> <ul> <li>6 models ensembling (5-fold each):<ul> <li>EfficientNet B2 pcen 128x500, same with mel, trained on random 5 second slices</li> <li>Same two models, trained on 5 second slices from first 20 seconds of the short audio files</li> <li>Rexnet200 pcen 128x400, same with mel, trained on random 5 second slices</li></ul></li> <li>Normalizing model predictions so any single model would not drown other models when averaging</li> <li>Post-processing based on false positive/true positive ratio of the birds</li> <li>Post-processing based on time and distance</li> <li>Safe threshold choosing</li> </ul> <p><strong>Data preparation</strong>:</p> <p>Spectrograms from all "short" train audio files were computed and saved on disk. (~150 GB each).<br> Pcen spectrograms were used with fmin=100, fmax=12000, fft = 1024, hop = 320.<br> Mel spectrograms were used with fmin=100, fmax=16000, fft = 1024, hop = 320.<br> Additionally, mel spectrograms were computed several times with different power setting (in cycle from 1.0, 1.2, … 3.0), until average brightness of resulting spec was &gt;20, and that result was chosen. That was done to avoid very dark and very bright results.</p> <p><strong>Training</strong>:</p> <p>AdamW on default mostly, for ~15 epoch with LR 0.001 for first ~10-12, 0.00015 for next ~3 and 0.00003 for the last 1-2. SGD did not work at all.</p> <p>EfficientNet B2 was the best for me, Rexnet200 trailed ~0.015 CV behind and pcen specs were superior to mels by ~0.001. ResNeSt was significantly behind for some reason. Did not try other heavy weight nets because of computing power limitations.</p> <p>Due to weak GPU (RTX 2060 Super) and large amount of data - even with mixed precision training i could realistically train at most one 5-fold a day, and so experimenting was limited. At the end i do not think that i managed to beat kkiller's public notebook with any of my models, but some were in 0.01 range from it.</p> <p><strong>Development</strong>:</p> <p>First of all i was confused when i read that bagging improved score from 0.63 to 0.73. In my submissions simple averaging sometimes even reduced the score compared to individual models!</p> <p>After some investigation i found that logits on which i averaged were extremely different between models (probably due to different training schedule), and, probably, any subtle difference between classifications were overshadowed by that scale difference.</p> <p>Normalizing models predictions improved scores by ~0.02, but still nowhere near such giant jumps, just more in line of usual ensembling.</p> <p>Then i started predicting with 1 second steps and average all predictions that were done for some second, and taking max of five 1-second results in the soundscape. This way bird/nobird transitions were more smooth, and any possible "bird call got sliced in two and failed to be predicted" situations disappeared. 
That improved score by another ~0.01.</p> <p>Then i spent ~10 days trying to improve my models with pseudolabeling, hand labeling, location/time concats to features, different losses, different augments, different training schedules, different type of models, etc etc. Nothing worked good enough. Some models went into final ensemble, but at the end my strongest single model was my first one, EfficientNet B2 128x500 on pcen specs with barely any augments. This extra models for ensemble added ~0.01 to the final score.</p> <p>At this time there was a large 0.74+ group and a large 0.72- group on the leaderboard. I decided to find the difference between them and started to try post-processing.</p> <p><strong>Post-processing</strong>:</p> <p>Then, when looking at submission (on train soundscapes), i noticed large flocks of grhowl. From there post-processing based on false_positive / true_positive ratio was born. grhowl was the sole, giant, woo-hoo-ing offender. Any sufficiently noisy record was grhowl. Rest of the birds were significantly more behaved, and even if i slightly reduced some birds in the end, it made almost no dfiference to just removing grhowl. That improved score by ~0.015.</p> <p>Next were experiments with distance and time. First, most simple attempt was finding minimum distance from short train audios to recording locations and predicting zero for any bird above some threshold. That kinda worked, but improvement was minimal (~0.005)</p> <p>Next i decided to take distance of n% closest samples, instead of the closest one - for example, for 500 records birds i took low ~2%, and kept the distance of 10th closest sample - to avoid outliers. This worked significantly better, and improved score by another ~0.015.</p> <p>Linear threshold is not based on something real, though. In real life birds do not hit the edge of the map and just stop there. The next iteration of post-processing used gradually increasing probability from ~7 degrees distance to ~2 degrees distance, and it worked better - another ~0.01.</p> <p>At the end, time was added to the same code as third dimension (with appropriate scale factor), and all settings were tuned on train soundscapes. Because amount of data points was not very big, i was forced to look at low ~6% of the samples when determining "distance" and increase said "distance" by quite a bit. It still improved score by another ~0.005.</p> <p><strong>Threshold choosing</strong>:</p> <p>Train soundscapes and public test set had significantly different optimal thresholds. That alerted me to the possibility of large threshold swings on the full private set. I estimated the final optimal threshold would probably be between them, but slightly to the train soundscapes side, and decided to play it safe. Two different thresholds were chosen - one to intentionally undershoot by a bit, one to intentionally overshoot by a bit - but still at the points where validation scores were relatively flat and only slightly lower than optimal one.</p> <p>I expected to lose ~0.005-0.01 from that decision in exchange for relative safety from shakeups, but conservative high threshold actually landed on the exact optimal spot of the private LB data.</p> <p><strong>Conclusion</strong>:</p> <p>That was a fun month and a fun journey from 0.54 private LB to 0.63 private LB. 
I did not spend a lot of time on this competition compared to the Rainforest one, but the result was just as good.</p> <p>Final version of the postprocessing (the grhowl handling is on by default; it might just be a quirk of my models) - <a href="https://www.kaggle.com/fffrrt/location-and-time-postprocessing-birdclef-2021" target="_blank">https://www.kaggle.com/fffrrt/location-and-time-postprocessing-birdclef-2021</a></p> <p>My best submission scored 0.63/0.68 with it, and 0.59/0.62 without it (with the same threshold selection process).</p>
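<p>For reference, the distance-based ramp described above (fading predictions between roughly 2 and 7 degrees) can be sketched like this; the full tuned version, including the time dimension, is in the linked notebook:</p> <pre><code>import numpy as np

def distance_factor(dist_deg, near=2.0, far=7.0):
    # 1.0 within ~2 degrees of known recordings, fading linearly to 0.0 beyond ~7 degrees
    return np.clip((far - np.asarray(dist_deg)) / (far - near), 0.0, 1.0)

def suppress_by_location(probs, dist_deg):
    # probs: per-species probabilities for one soundscape site
    # dist_deg: per-species distance from the site to its nearby training recordings
    return probs * distance_factor(dist_deg)
</code></pre>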
BirdCLEF 2021 - Birdcall Identification
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Instead of getting two predictions from 10s to 5s, I get predictions from the central 5s and the whole 10s. Therefore, the number of inferences is the same as the normal 5s prediction.<br> The disadvantage is that it takes longer to create a logmelspec than with 5s. <br> Also, inference near the edge of the clip repeats the clip to make up for the missing part.<br> So the global inference at the edge of the clip may be incorrect: (<br> <img src="https://user-images.githubusercontent.com/16153860/122066104-68adc180-ce2d-11eb-98b1-8869ba7cd4bd.png" alt=""></p> <p>With standard 5s prediction, the optimal threshold was close to 0.2.<br> Maybe the use of global and local predictions can stabilize and increase the sensitivity of your inferences.</p> <p>I have released the experimental code and the inference notebook for the best submission.</p> <p>Code: <a href="https://github.com/tattaka/birdclef-2021" target="_blank">https://github.com/tattaka/birdclef-2021</a><br> Inference notebook: <a href="https://www.kaggle.com/tattaka/birdclef2021-submissions-pp-ave?scriptVersionId=64016465" target="_blank">https://www.kaggle.com/tattaka/birdclef2021-submissions-pp-ave?scriptVersionId=64016465</a><br> Aggregation of the number of birds for post-processing: <a href="https://www.kaggle.com/tattaka/make-month-and-site-mask" target="_blank">https://www.kaggle.com/tattaka/make-month-and-site-mask</a></p> <p>In cleaning the code, I've removed duplicate experiments and non-reproducible ones that didn't specify a seed, but you should get similar scores.<br> Please forgive me if my inference notebook is not organized due to laziness and time-saving.</p> <p>I used 1080ti x 3; it took about 10~15 hours to train 50 epochs.</p> <p>Some of the mono_to_color functions used in the previous Birdcall competition have been applied to (bs, 1, mel, time).</p> <pre><code>class NormalizeMelSpec(nn.Module):
    def __init__(self, eps=1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, X):
        mean = X.mean((1, 2), keepdim=True)
        std = X.std((1, 2), keepdim=True)
        Xstd = (X - mean) / (std + self.eps)
        norm_min, norm_max = Xstd.min(-1)[0].min(-1)[0], Xstd.max(-1)[0].max(-1)[0]
        fix_ind = (norm_max - norm_min) &gt; self.eps * torch.ones_like(
            (norm_max - norm_min)
        )
        V = torch.zeros_like(Xstd)
        if fix_ind.sum():
            V_fix = Xstd[fix_ind]
            norm_max_fix = norm_max[fix_ind, None, None]
            norm_min_fix = norm_min[fix_ind, None, None]
            V_fix = torch.max(
                torch.min(V_fix, norm_max_fix),
                norm_min_fix,
            )
            # print(V_fix.shape, norm_min_fix.shape, norm_max_fix.shape)
            V_fix = (V_fix - norm_min_fix) / (norm_max_fix - norm_min_fix)
            V[fix_ind] = V_fix
        return V
</code></pre> <p>Seems to work the same<br> <img src="https://user-images.githubusercontent.com/16153860/120405827-6c007200-c384-11eb-86b7-456b2e686e9e.png" alt=""></p> <p>First of all, I would like to thank the organizers of the competition and the Kaggle team for hosting the competition. I would also like to thank all the participants who have shared their knowledge so generously, including the previous Birdcall competitions. 
Two weeks before the end of this competition, I didn't expect it to be such a hard competition.</p> <h2>Keys of my solution</h2> <ul> <li>Ensemble<ul> <li>Max 62 models (best private: 47 models)</li></ul></li> </ul> <ul> <li><strong>Inference with global information for SED model</strong></li> <li>Post-processing</li> </ul> <h2>Training</h2> <h3>Preprocess</h3> <p>The logmelspectrograms were calculated using TorchAudio and normalised using the mean and variance per image.</p> <pre><code>logmelspec_extractor = nn.Sequential(
    MelSpectrogram(
        sample_rate,      # exact numeric settings are in the linked repository config
        n_mels=n_mels,
        f_min=f_min,
        n_fft=n_fft,
        hop_length=hop_length,
        normalized=normalized,
    ),
    AmplitudeToDB(top_db=top_db),
    NormalizeMelSpec(),
)
</code></pre> <p>For augmentation on the waveform, gaussian, uniform and pink noise are added randomly.</p> <p>As an augmentation on logmelspec, I used Mixup. As a parameter, I used a probability of 0.2~0.5 and alpha=0.8.</p> <h3>Modeling</h3> <p>I used a SED model like the one in <a href="https://www.kaggle.com/hidehisaarai1213/pytorch-training-birdclef2021-starter" target="_blank">@hidehisaarai1213's kernel</a>.</p> <p>For the various backbones, the clip length used for training (max 30s, min 10s) was adjusted so that each could run with a batch size of 36.</p> <h3>Training</h3> <p><code>clipwise_pred</code> is optimized directly. By doing this, I was able to suppress the generation of NaN on backward.</p> <pre><code># second term: max of the framewise logits over time, scaled by a weight
# (the exact weight is in the linked repository)
loss = bce_with_logits(torch.logit(clipwise_pred), target) \
    + aux_weight * bce_with_logits(framewise_logit.max(1)[0], target)
</code></pre> <p>Other points:</p> <ul> <li>use a secondary label<ul> <li>soft labels such as 0.5 did not work</li></ul></li> <li>trained 40~50 epochs</li> </ul> <h3>Pseudo labeling</h3> <p>I have tried some patterns using a combination of the following steps.</p> <ul> <li>Simple per-audio file relabeling<ul> <li>Use clipwise_pred and threshold = 0.2</li></ul></li> </ul> <ul> <li>Inspired by <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183204" target="_blank">@hidehisaarai1213's solution</a> in the last competition, I saved <code>framewise_pred</code> and <code>time_att</code> with the whole audio file as input, and computed <code>clipwise_pred</code> by cropping the period used for training.</li> </ul> <ul> <li>Create a pseudo label using the second method only for audio files without a secondary_label</li> </ul> <ul> <li>If the pseudo label was below a certain value (0.05), the primary and secondary labels were set to 0 with probability 0.1.</li> </ul> <p>All patterns improved the performance of the single models by about 0.01, and although they did not improve the ensemble score on train_soundscape, they did work somewhat on the public LB.</p> <h3>30s finetuning</h3> <p>Larger models are trained with shorter segments, such as 10s. Therefore, the labels become noisy. To solve this, only the Attention module was trained with 30s segments for 10 additional epochs. There is a slight improvement in score on train_soundscape, and it is included in the ensemble.</p> <h3>Checkpoint selection</h3> <p>The checkpoint with the smallest OOF loss was selected, validating with the same number of seconds as used for training.</p> <h2>Inference with global information</h2> <p>This is my favorite and most effective part of my solution. 
</p> <p>To begin with, the idea that I had before joining this competition was that if the SED model could accurately perform "Sound Event Detection", then it would be possible to improve predictions for a short time segment by cropping the necessary parts of the predictions from a longer time segment.</p> <p>In this way, longer-term features can be used for inference.</p> <pre><code>framewise_pred_5s = self.fix_scale(feat[:, :, start:end])
att_5s = torch.softmax(time_att[:, :, start:end], dim=-1)
clipwise_pred_5s = torch.sum(torch.sigmoid(framewise_pred_5s) * att_5s, dim=-1)
</code></pre> <p>By implementing this idea, I improved both the train_soundscape score and the public LB by about 0.03. In practice, I used 30s as the segment for the longer period and implemented it so that the 5s interval I want to predict is in the center. This idea is also used to create a pseudo label in the second pattern.</p> <p>Secondly, as in the <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183571" target="_blank">7th place solution of the previous Birdcall competition</a>, birds that are detected with a high threshold over the whole audio file can be expected to appear in short segments with a high probability, so the sensitivity can be increased by lowering the threshold.</p> <p>Inspired by this observation, I used a high threshold (0.05) on <code>clipwise_pred_30s</code> to generate a list of possible birds and a low threshold (0.025) on <code>clipwise_pred_5s</code>, and combined them with an AND operation.</p> <pre><code>((clipwise_pred_30s &gt; high_threshold) + (clipwise_pred_5s &gt; low_threshold)) &gt;= 2
</code></pre> <p>To get an idea of these thresholds, I used <code>scipy.optimize.dual_annealing</code> to optimize for train_soundscape. This seems rather risky, but I thought it was somewhat reasonable due to the simple public LB probing described below.</p> <p>This double thresholding improved the score of public LB and train_soundscape by 0.03.</p> <p>In the ensemble I took a simple average. Initially, hard voting was considered, but the score did not change much, so the simple method was chosen.</p> <h2>Post-processing with location and date</h2> <p>A list of birds observed within 450 meters around each site and a list of birds appearing in each month were made, and those not appearing were removed from the submission.</p> <p>Here, because the two sites "COR" and "COL" are relatively close, birds were only removed if they were absent at both.</p> <p>I have also quadrupled the thresholds for some rare classes of birds observed in the vicinity of the sites.</p> <p>This post-processing resulted in a consistent improvement of less than 0.01 for both train_soundscape and public LB.</p> <h2>Simple public LB probing</h2> <p>Since train_soundscape and public LB were correlated to some extent, I was concerned that the distribution of public LB might be extremely similar to train_soundscape.</p> <p>To verify this, I changed the predictions for sites and birds not included in train_soundscape to nobird.</p> <p>If the score had not changed, it could be assumed that train_soundscape and public LB have the same distribution, but the score decreased considerably (bird: 0.71-&gt;0.64, site: 0.71-&gt;0.58).</p> <p>So I can see that public LB contains sites and birds that are not included in train_soundscape.</p> <p>From this, I boldly assumed that public and private would be randomly split. 
(This is strictly uncertain, but there was nothing else I could do.)</p> <h2>Experimental code and the inference notebook for best submission</h2> <p>Code: <a href="https://github.com/tattaka/birdclef-2021" target="_blank">https://github.com/tattaka/birdclef-2021</a><br> Inference notebook: <a href="https://www.kaggle.com/tattaka/birdclef2021-submissions-pp-ave?scriptVersionId=64016465" target="_blank">https://www.kaggle.com/tattaka/birdclef2021-submissions-pp-ave?scriptVersionId=64016465</a><br> Aggregation of the number of birds for post-processing: <a href="https://www.kaggle.com/tattaka/make-month-and-site-mask" target="_blank">https://www.kaggle.com/tattaka/make-month-and-site-mask</a></p> <p>No, it didn't work for my solution.</p>
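<p>The location/date post-processing described above essentially reduces to a mask lookup; a simplified sketch (ignoring the COR/COL exception, with hypothetical <code>site_birds</code>/<code>month_birds</code> tables built from the aggregation notebook linked above):</p> <pre><code>import numpy as np

def apply_site_month_mask(probs, bird_names, site, month, site_birds, month_birds):
    # site_birds / month_birds: dicts mapping a site or month to the set of birds
    # observed there; species missing from either set are zeroed out
    keep = np.array([(b in site_birds.get(site, set())) and
                     (b in month_birds.get(month, set()))
                     for b in bird_names], dtype=float)
    return probs * keep
</code></pre>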
BirdCLEF 2021 - Birdcall Identification
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you. For me, random_power gave an improvement as well. Probably best to mix models with and without random_power.</p> <p>Thanks to the organizers for an interesting contest and <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> for the motivation.</p> <p><strong>My solution is based on my public code posted here:</strong><br> <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183269" target="_blank">https://www.kaggle.com/c/birdsong-recognition/discussion/183269</a><br> <a href="https://github.com/vlomme/Birdcall-Identification-competition" target="_blank">https://github.com/vlomme/Birdcall-Identification-competition</a></p> <p><strong>The basic solution easily gets silver. What I changed:</strong></p> <ul> <li>Switched to SED +1%.</li> <li>Lowered threshold +1%.</li> <li>Left only birds from the region of the recording +0%</li> <li>Changed ensemble averaging +1%</li> </ul> <h2>Pre-processing:</h2> <p>Used log-melspectrograms<br> n_fft = 1536, sr = 21952, hop_length = 245, n_mels = 224, len_chack 448, image_size = 224 * 448, 5 seconds</p> <h2>Models:</h2> <p>Ensemble of 14 models (sed_resnet50, sed_resnest50, sed_efficientnet-b0)</p> <h2>Augmentations:</h2> <ul> <li>For contrast, I raised the image to a power of 0.5 to 3. At 0.5, the background noise is closer to the birds, and at 3, on the contrary, the quiet sounds become even quieter.</li> <li>Slightly accelerated / slowed down recordings</li> <li>Added different sounds without birds (rain, noise, conversations, etc.)</li> <li>Added white, pink, and band noise. Increasing the noise level increases recall, but reduces precision.</li> <li>With a probability of 0.5, lowered the upper frequencies. In the real world, the upper frequencies fade faster with distance.</li> </ul> <h2>Train:</h2> <ul> <li>Used BCEWithLogitsLoss. For the main birds, the label was 1. For birds in the background, 0.3.</li> <li>Used loss:</li> </ul> <pre><code>train_los1 = nn.BCEWithLogitsLoss()(prediction['clipwise_output'], true)
train_los2 = nn.BCEWithLogitsLoss()(prediction["segmentwise_output_max"], true)
train_loss = (train_los1 + train_los2)/2
</code></pre> <ul> <li>I didn't look at metrics on the training records, but only on the validation files (train_soundscapes)</li> <li>20-40 epochs (24 hours on a gtx1060)</li> </ul> <h2>Postprocessing</h2> <ul> <li>If there was a bird in a segment, I increased the probability of finding it in the entire file.</li> <li>Model ensemble averaging</li> </ul> <pre><code>proba1 = proba.prod(axis = 0) ** (1.0/len(proba))
proba = proba**2
proba = proba.mean(axis=0)
proba = proba**(1/2)
proba = (proba + proba1)/2
</code></pre> <p>Since the training took a long time and other approaches didn't work, I switched to other competitions and have hardly trained any new models in the last month. It's a shame that a little was not enough; I will try to do better.</p> <p>Thank you. I look forward to seeing you in other competitions</p>
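<p>The contrast augmentation mentioned above (raising the spectrogram to a random power between 0.5 and 3) can be sketched as follows, assuming the spectrogram has already been scaled to [0, 1]:</p> <pre><code>import numpy as np

def random_power(spec, low=0.5, high=3.0):
    # powers below 1 pull background noise up towards the birds,
    # powers above 1 push quiet sounds further into the background
    p = np.random.uniform(low, high)
    return np.clip(spec, 0.0, 1.0) ** p
</code></pre>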
BirdCLEF 2021 - Birdcall Identification
68 place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>[Interim] 68th place solution</h1> <p><a href="https://www.kaggle.com/c/birdclef-2021/discussion/233454" target="_blank">It</a> was said that bagging works well in this competition.<br> We thought so too based on our experiments, so our strategy was to train various models and ensemble them.</p> <p>Our final submission is a vote over 5 models: three rexnet_200, a densenet161, and a resnest50d.</p> <h2>teyo part</h2> <p>My approach is a CNN classifier model.</p> <h3>Data Preparation</h3> <p>I cut out 5 seconds of audio from the train short audio.<br> The training data was converted to logmel spectrograms using torchlibrosa, and then increased to 3 channels.<br> I used deltas to increase the channels, referring to past competitions. There was not much difference from simply repeating the channel.</p> <p>Reference: <a href="https://www.kaggle.com/vladimirsydor/4-th-place-solution-inference-and-training-tips" target="_blank">Cornell Birdcall Identification 4th place solution</a></p> <h3>model</h3> <p>rexnet_200</p> <p>The rexnet_200 was unusually strong :)<br> I kept track of the F1 on the train soundscape data and used the best-F1 weights.</p> <p>For the ensemble: [densenet161, efficientnetv2_s, resnext50_32x4]</p> <p>Did not work: efficientnet b0-b4, resnest50d</p> <h3>data augmentation</h3> <ul> <li>NoiseInjection</li> <li>PinkNoise</li> <li>RandomVolume</li> <li>SpecAugmentation (done in the model)</li> <li><a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183199" target="_blank">Modified Mixup</a><ul> <li>increased public LB by 0.02</li></ul></li> </ul> <h3>other settings</h3> <ul> <li>split<ul> <li>StratifiedKFold</li> <li>mainly 5 fold</li></ul></li> <li>Scheduler<ul> <li>CosineAnnealingWarmRestarts</li> <li>30 epochs</li> <li>lr=1e-4</li> <li>min_lr=1e-6</li></ul></li> <li>optimizer<ul> <li>Adam</li></ul></li> <li>criterion<ul> <li>BCEWithLogitsLoss</li></ul></li> </ul> <h2>shinmura part</h2> <p>I mainly developed the post-processing (and <a href="https://www.kaggle.com/c/birdclef-2021/discussion/239911" target="_blank">hand labeling</a>). </p> <p>Our post-processing improved the LB a lot (a <strong>LB 0.65 -&gt; 0.70</strong> jump). <br> Our post-processing consists of 7 ideas, but it is too long to write here.</p> <p>If you want to know about our post-processing, please leave a comment.<br> Then I will create another thread.</p> <h2>toda part</h2> <p><strong>※ My model was not used in our team's final submission, so this part is "things I tried that did not work".</strong></p> <p>I used a ResNet18-based SED model.<br> My three main ideas were:</p> <h3>1. add background noise</h3> <p>In order to make the audio of the training data as close as possible to the test data, I used the ESC50 data to add noise to the training data. The selected audio categories are listed below.</p> <p>airplane, rain, water_drops, crackling fire, engine, insects, crickets, frog, wind</p> <p>I also adjusted Gain to give variation to the volume of the bird's voice, but this did not work.</p> <h3>2. make a model for each site</h3> <p>The birds that are observed differ depending on the area, and it is expected that some birds, such as migratory birds, will be observed only during a specific period.
So I decided to make a specialized model for each site (a rough sketch of this filtering is given at the end of this post).</p> <p>Specifically, the radius was within 1000 km of the observation point of the test data, and the period was narrowed down with a buffer of one month before and after the target period (for example, January to August for SSW).</p> <p>After narrowing down, we sorted by the rating of each audio file and extracted the first 5 seconds of the top 300.</p> <p>The datasets are here: <a href="https://www.kaggle.com/takamichitoda/birdclef-ssw-max300" target="_blank">SSW</a>, <a href="https://www.kaggle.com/takamichitoda/birdclef-sne-max300" target="_blank">SNE</a>, <a href="https://www.kaggle.com/takamichitoda/birdclef-cor-max300" target="_blank">COR</a>, <a href="https://www.kaggle.com/takamichitoda/birdclef-col-max300" target="_blank">COL</a></p> <h3>3. generate nocall data</h3> <p>I added nocall data made from the other sites.<br> The method for making the nocall dataset reverses the method for making the per-site datasets.<br> I extracted audio that is more than 1000 km away from each site and is not in the target period.</p> <p>The nocall data is <a href="https://www.kaggle.com/takamichitoda/birdclef-nocall-each-site" target="_blank">here</a></p>
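<p>A minimal sketch of the per-site filtering described in the toda part above, assuming a pandas DataFrame <code>train_meta</code> with latitude, longitude and date columns. The haversine helper and the SSW coordinates are illustrative assumptions, not the author's exact code.</p> <pre><code>import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two points given in degrees.
    lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def filter_for_site(meta, site_lat, site_lon, months, radius_km=1000.0):
    # Keep recordings within radius_km of the site, then keep only the month window.
    dist = haversine_km(meta["latitude"], meta["longitude"], site_lat, site_lon)
    near = meta[dist &lt;= radius_km]
    month = pd.to_datetime(near["date"], errors="coerce").dt.month
    return near[month.isin(months)]

# hypothetical usage for SSW with a one-month buffer around January-August
# (the site coordinates here are approximate placeholders)
ssw_meta = filter_for_site(train_meta, site_lat=42.47, site_lon=-76.45, months=set(range(1, 10)))
</code></pre>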
BirdCLEF 2021 - Birdcall Identification
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used <a href="https://www.kaggle.com/vladimirsydor/cornelli-background-noises" target="_blank">this one</a> from the last competition.</p> <p>Thanks to Kaggle, the hosts, and all the participants. It was a great opportunity for me to learn deep learning and audio signal processing. Shout out to <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> and <a href="https://www.kaggle.com/taggatle" target="_blank">@taggatle</a> (previous 1st place solution) for sharing. I built my solution based on their SED model. I learned a lot and I'm really thankful to them.</p> <p><strong>Model</strong><br> The logmelspectrograms are calculated with n_fft=1024, sr=32000, hop_length=320, n_mels=64. I used the SED model with a DenseNet backbone. Training and inference are performed on 30-second clips. Secondary labels are treated the same as the primary label. The loss function is BCE loss.</p> <p><strong>Augmentation</strong><br> The waveforms are augmented with Gaussian Noise, Gaussian SNR, Gain, Pink Noise, some environment noise (rain, insects, etc.), and nocall segments from training soundscapes. I also used SpecAugment and mixup.</p> <p><strong>Threshold and Ensemble</strong><br> For each model, I did a grid search on the threshold. Then I used voting to ensemble the models (a minimal sketch is given at the end of this post). I chose the thresholds that give higher recall and the vote limit that gives the best F1 score, all based on the training soundscapes. Restricting the prediction using the geographic location also improved the score a little bit. </p> <p>Thanks! 1. I randomly mask blocks in frequency and time, as described <a href="https://ai.googleblog.com/2019/04/specaugment-new-data-augmentation.html" target="_blank">here</a>. 2. Yes. I used <a href="https://github.com/iver56/audiomentations" target="_blank">Audiomentations</a></p>
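<p>A minimal sketch of the threshold-then-vote ensembling described above, assuming one (n_chunks, n_species) probability array per model. The variable names, thresholds and vote limit are placeholders; only the idea (per-model thresholds chosen for higher recall, then a vote limit tuned on the training soundscapes) comes from the write-up.</p> <pre><code>import numpy as np

def vote_ensemble(probs_per_model, thresholds, vote_limit):
    # probs_per_model: list of (n_chunks, n_species) arrays, one per model.
    # thresholds: one threshold per model (chosen for higher recall).
    # vote_limit: minimum number of models that must agree on a bird.
    votes = np.zeros_like(probs_per_model[0], dtype=int)
    for probs, thr in zip(probs_per_model, thresholds):
        votes += (probs &gt;= thr).astype(int)
    return votes &gt;= vote_limit

# hypothetical usage with three models and thresholds tuned on the training soundscapes
p_a, p_b, p_c = (np.random.rand(120, 397) for _ in range(3))
preds = vote_ensemble([p_a, p_b, p_c], thresholds=[0.15, 0.20, 0.12], vote_limit=2)
</code></pre>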
BirdCLEF 2021 - Birdcall Identification
87th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I appreciate the organizers and participants!<br> I am looking forward to seeing other solutions!</p> <p>I am not good at English, but I hope this helps someone.</p> <p>This is my 87th (LB 90th) solution.</p> <h1>Solution</h1> <p>I referred to <a href="https://www.kaggle.com/kkiller" target="_blank">@kkiller</a> 's nice notebook <a href="https://www.kaggle.com/kneroma/clean-fast-simple-bird-identifier-inference" target="_blank">https://www.kaggle.com/kneroma/clean-fast-simple-bird-identifier-inference</a></p> <p>My final solution is</p> <ol> <li>Ensemble of 65 models (13 types x 5 folds)</li> <li>Audio clip length of 6 sec at inference time</li> <li>Data augmentation at training time</li> </ol> <h2>1. Ensemble of 65 models</h2> <p>I noticed from my first few submissions that increasing the variety of model types improved both the local score and the LB, so I trained various CNN types and selected good models based on their local scores at inference time.</p> <p>The model list is below.</p> <ul> <li>EfficientNet-v2 large</li> <li>EfficientNet-v2 medium</li> <li>EfficientNet-v2 b3</li> <li>EfficientNet-b6</li> <li>EfficientNet-b4</li> <li>EfficientNet-b2</li> <li>EfficientNet-b0</li> <li>ResNeSt50d x3 (2 random seeds and training methods)</li> <li>ResNet200d</li> <li>DenseNet201</li> <li>DenseNet121</li> </ul> <p>Other than these models, I tried NFNet and other ResNet types, but I did not have time to train them to convergence.</p> <p>Final prediction labels were selected with a threshold of 0.2.</p> <h2>2. Audio clip, 3. Data Augmentation</h2> <p>I set the audio clip length to 6 sec at inference to maximize the local score. (I trained some models on 7 sec crops and the others on horizontal random crops (5~7 sec), but the score was maximized with 6 sec crops.)</p> <p>Moreover, I used not only the one-hot primary label but also the secondary_labels as soft training targets (a minimal sketch is given at the end of this post). </p> <ul> <li>primary_label = 0.9</li> <li>secondary_labels = 0.5</li> <li>other = 0.01</li> </ul> <p>And I used BCEWithLogits loss.</p> <h2>Training settings</h2> <ul> <li>StratifiedKFold 5 fold</li> <li>RandomSeed = 42, 46</li> <li>LearningRate = 0.0008</li> <li>CosineAnnealingLR</li> <li>Mixup</li> </ul> <h1>Not improving</h1> <ol> <li>clip length ensemble at inference time</li> <li>typical data augmentation (partial dropout, random cutout, Gaussian noise, random brightness and contrast changes)</li> <li>per-bird thresholds based on appearance frequencies in the training data</li> </ol> <p>Finally, I want to thank everyone in this competition and Kaggle again!</p> <p>Thank you for the comment!</p> <p>The labels I used had no theoretical meaning.<br> I wanted to try other label strategies, but I had no time to try them.</p> <p>Partial dropout means CoarseDropout in the albumentations library.</p>
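<p>A minimal PyTorch sketch of the soft-label scheme above (primary 0.9, secondary 0.5, everything else 0.01) used with BCEWithLogitsLoss. The helper name, the number of species and the tensor shapes are assumptions for illustration, not the author's exact code.</p> <pre><code>import torch
import torch.nn as nn

def make_soft_target(primary_idx, secondary_idx, n_species=397,
                     p_primary=0.9, p_secondary=0.5, p_other=0.01):
    # Build one soft target vector: 0.9 for the primary label,
    # 0.5 for secondary labels, 0.01 for everything else.
    target = torch.full((n_species,), p_other)
    target[secondary_idx] = p_secondary
    target[primary_idx] = p_primary
    return target

criterion = nn.BCEWithLogitsLoss()

# hypothetical batch of 8 clips: model logits plus one soft target per clip
logits = torch.randn(8, 397)
targets = torch.stack([make_soft_target(3, [10, 42]) for _ in range(8)])
loss = criterion(logits, targets)
</code></pre>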
BirdCLEF 2021 - Birdcall Identification
8th place writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the hosts for holding an interesting competition. This competition was quite similar to the one held last year, Cornell Birdcall Identification, but it was also improved compared to the last one.</p> <p>Along with this post, we also made the <a href="https://www.kaggle.com/hidehisaarai1213/birdclef2021-infer-between-chunk" target="_blank">inference notebook</a> public, so please refer to it if you are interested in the details of the model and/or post-processing.</p> <h2>Solution summary</h2> <p>Our solution is a weighted blend of several CNNs with classification heads for the SED task. All the backbone models are from the EfficientNet and EfficientNet V2 families. We train our models with either 20s chunks or 5s chunks. For inference, we predict on 40s chunks and apply two thresholds.</p> <h2>Preprocess</h2> <p>Nothing special - we used log-melspectrograms (computed with torchaudio). Most of the models used<br> n_fft=2048, sr=32000, hop_length=512, n_mels=256. Only two models used n_mels=320.</p> <h2>Models</h2> <p>We used 29 models with SED heads. They are almost the same architecture as that used in <a href="https://www.kaggle.com/hidehisaarai1213/pytorch-inference-birdclef2021-starter" target="_blank">my public Notebook</a>.</p> <h2>Augmentations</h2> <p>We used Gaussian Noise, Pink Noise, and Volume Control for all the models. We found that PitchShift works well; however, it puts a lot of load on the CPU, therefore only a few models in our ensemble use PitchShift. We also found that mixup works, but for a similar reason we didn't use it for our final models.</p> <h2>Training</h2> <p>We use BCEFocal2WayLoss which was introduced in <a href="https://www.kaggle.com/hidehisaarai1213/pytorch-training-birdclef2021-starter" target="_blank">my training Notebook</a>. We use secondary_labels along with primary_label, weighted the same. Some of our models used a 2-stage strategy - in the first stage we train our model in the usual manner. Then we make the models predict on out-of-fold clips and get soft pseudo-labels on the whole training data set. In the second stage, we use the pseudo labels to mask the loss term for some of the classes.</p> <h2>Inference</h2> <p>We make the models infer on 40s chunks. Basically, the longer the input chunk, the better. Of course, the score saturates as it gets longer, so we chose 40s, which was a good balance between memory consumption and performance. As a kind of TTA, we also make the model predict on chunks that overlap with the previous chunk. For example, our models first predict for 0s - 40s, then we make the models predict for 20s - 60s, and so on. </p> <h2>Ensemble</h2> <p>After we get predictions for the 40s chunks, we apply weighted blending. The weights are decided using optuna. When we search for these weights, the thresholds (NOTE: we used two different thresholds, described in the section below) are also included as hyperparameters.</p> <h2>Postprocessing (Thresholds search &amp; masking)</h2> <p>We used two different kinds of thresholds - a call threshold and a nocall threshold. All the predictions that exceed the call threshold are labeled as positive. If none of the predictions in the 5s chunk surpass the nocall threshold, we always add the <code>nocall</code> label (a minimal sketch of this logic is given at the end of this post). Therefore, for some rows we might get a prediction like "moudov nocall". To search for these thresholds we simply used grid search.<br> We then mask out some species that should not exist in that site. This is a rule-based post-processing.</p> <h2>Validation</h2> <p>To match the nocall ratio of the public LB, we used<br> overall_f1 = 0.54 * nocall_f1 + 0.46 * call_f1.<br> It turns out that CPMP also used this validation scheme.</p> <h2>Things that didn't work for us</h2> <p>Quite a lot of things. We tried many backbones and augmentations. We also searched for better ways to use secondary_labels, but in the end, we found that treating the secondary_labels the same as the primary_label is best. We also worked on using the location information. We tried one-hot encoding it, but it didn't help. We also tried making a table that shows which species might appear in the four locations (SNE/SSW/COR/COL) and used that to train location-wise models, but that wasn't effective either.</p> <blockquote> <p>you had entries with both nocall and some birds?</p> </blockquote> <p>Yes. I learned this technique from one of the participants of Cornell Birdcall Identification.</p> <blockquote> <p>Congrats on the result and your GM title.</p> </blockquote> <p>Thank you!</p> <p>Thank you!</p> <p>Thank you!</p> <blockquote> <p>Would you mind open-sourcing your checkpoint dataset so that we can play around with it?</p> </blockquote> <p>I'll ask my teammate about this. It's his dataset.</p> <p>Thank you!!!</p>
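<p>A minimal sketch of the two-threshold post-processing described above, assuming a per-chunk probability row and a list of species codes. The threshold values and the helper name are placeholders; only the logic (label everything above the call threshold, and append <code>nocall</code> whenever nothing reaches the nocall threshold) follows the write-up.</p> <pre><code>import numpy as np

def to_birds_string(probs_row, species, call_thr=0.25, nocall_thr=0.5):
    # Turn one 5s-chunk probability row into the submission string.
    labels = [s for s, p in zip(species, probs_row) if p &gt;= call_thr]
    if probs_row.max() &lt; nocall_thr:
        labels.append("nocall")   # can coexist with a bird label, e.g. "moudov nocall"
    return " ".join(labels) if labels else "nocall"

# hypothetical usage with three species
probs = np.array([[0.30, 0.05, 0.10], [0.02, 0.01, 0.03]])
species = ["moudov", "amerob", "norcar"]
print([to_birds_string(row, species) for row in probs])
</code></pre> <p>With these placeholder thresholds, the first row above becomes "moudov nocall" and the second becomes "nocall".</p>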
BirdCLEF 2021 - Birdcall Identification
9th Place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to all the organizers for hosting such a challenging competition. Also, thanks to all other competitors for sharing valuable information in forums and notebooks. Special thanks to <a href="https://www.kaggle.com/vlomme" target="_blank">@vlomme</a> for sharing his code on GitHub from the last competition.</p> <p>Prayers for all the folks and their families suffering from Covid. </p> <h3>Solution Summary:</h3> <p>For most of my models, I used 5-7 second clips for training. I gave lower weight to secondary predictions and used modified mixup just like last year's 1st and 3rd place solutions. </p> <h4>Ensembling</h4> <p>Mostly, the strategy was to get as much diversity as possible into the ensemble to make predictions more robust.</p> <ul> <li>Melspecs with different temporal resolutions (hop_length - 200 and 320)</li> <li>resnest50, efficientnetb0 and densenet121 backbones</li> <li>Noise augmentation (white noise, pink noise, band noise, nocall clips), same as what <a href="https://www.kaggle.com/vlomme" target="_blank">@vlomme</a> used in the previous competition</li> </ul> <h4>Post processing</h4> <p>This was the trickiest part. We had the following information at hand in addition to raw probabilities for each 5s clip:</p> <ul> <li>Maximum probability of each bird over the whole audio clip</li> <li>Average probability of each bird over the whole audio clip</li> <li>Count of birds from <code>train_metadata.csv</code> within +/- x deg of the soundscape location</li> <li>Maximum probability of each bird over the whole day</li> </ul> <p>Initially I tried fitting a meta estimator (LightGBM, RandomForest, LogisticRegression) over the above variables, but results using a custom rule were better on both validation and public LB (thinking retrospectively, I think this is where I overfit).<br> The custom rule was something like: <code>p1 * a1 + np.clip(p2, 0, t1) * a2 + ...</code></p> <h4>Other tricks</h4> <ul> <li>Raise melspecs for different models to different powers to add a little more diversity</li> <li>Squeeze the width of test soundscapes by 2-5% (mostly to reverse far-field effects)</li> </ul> <h4>Things that didn't work</h4> <ul> <li>Using the last word of the common name (e.g. <code>parakeet</code> from <code>Orange billed parakeet</code>) as an additional label during training.</li> <li>Different attentions on the backbone feature map (I guess with just 5-7 s clips, adding attention confuses the model even more instead of helping. Simple sum/mean pooling seems to work best.)</li> <li>Bigger models: I had a really hard time getting efficientnet-b2/b4 to converge. I just gave up on them due to lack of time and resources.</li> <li>Manual labelling and class balancing. Looking at results from my models, it was quite evident that birds with fewer samples were not getting predicted (e.g. <code>hofwoo1</code>). So, I split them into 10s clips and hand labelled them to increase their sample size, but this didn't help. Also, class balancing gave a lower validation score, so I ditched it as well.</li> </ul> <p>Thank you :). <br> We fell for the mirage created by the public LB. 
Hopefully, I am able to do better in my next competitions</p> <p>Thanks <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>! Yes, I did try training on 30s clips few days before deadline with same noise setup. But that did not give me good results on train soundscapes (~0.68 f1) which was too low to be even included in ensemble. <br> It could be I had some bug in my pipeline or using too much augmentation was causing some issues. Anyway, I guess I decided to drop it too early without much experimentation.</p> <p>Thanks <a href="https://www.kaggle.com/tanulsingh077" target="_blank">@tanulsingh077</a> !</p> <ul> <li>Discriminative LR helps efficientnet converge. Backbone lr for efficientnets were 0.1 or 0.5 times max_lr. Works with different heads. </li> <li>Post processing function \</li> </ul> <pre><code>def post_process2(probas, test, bird_loc_cnts, max_w=0.5, mean_w=0.5, cnt_w=1, max_w2=0.1): proba = probas.copy() dates = test.filepaths.str.split('/').str.get(-1).str.slice(-12,-4).values audio_ids = test.audio_id.values proba_date_max = groupby_np(dates, probas, np.max) proba_audio_max = groupby_np(audio_ids, probas, np.max) proba_audio_mean = groupby_np(audio_ids, probas, np.mean) for audio_id in test.audio_id.unique(): rows = test.audio_id.values == audio_id site = test.loc[rows, 'site'].iloc[0] bird_cnts = bird_loc_cnts[site] cnt_prob1 = np.log1p(np.repeat(np.array([np.clip(bird_cnts.get(INT2CODE[i], 0), 0.0, 1) for i in range(398)]).reshape(1, -1), sum(rows), 0)) cnt_prob2 = np.log1p(np.repeat(np.array([np.clip(bird_cnts.get(INT2CODE[i], 0), 0.0, 100) for i in range(398)]).reshape(1, -1), sum(rows), 0))/100 # max_probs = np.max(proba[rows], axis=0, keepdims=True) # thresh_cnt = np.sum(proba[rows] &gt; thresh, axis=0, keepdims=True) # mean_probs = np.mean(proba[rows], axis=0, keepdims=True) max_probs = proba_audio_max[rows] mean_probs = proba_audio_mean[rows] max_probs2 = proba_date_max[rows] if cnt_w &gt; 0: proba[rows] *= cnt_prob1*1.0 proba[rows] = proba[rows] + max_w*np.clip(max_probs, 0, 0.67) + max_w2*np.clip(max_probs2, 0, 0.67) + mean_w*np.clip(mean_probs, 0, 0.1) + cnt_w*np.clip(cnt_prob2, 0, 0.01) # + thresh_w*np.clip(thresh_cnt, 0, 5)/5.0 proba[:, 397] = probas[:, 397] return proba </code></pre> <p>One more thing, you could use different thresholds for bird and nocall. For example, you could make prediction like this <code>chswar nocall</code>. This helped slightly.</p>
BirdCLEF 2021 - Birdcall Identification
Learnings From the Competition , 39th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,<br> I really wish I could have started this competition sooner, but I did the best I could in 10 days. It never feels good to go down in private even if it's 1 place, but what was really more disappointing was that we failed to pick our best submission, which was giving <b>0.66</b> on private. It was an absolute grind for us, tons of hard work.</p> <p>Our bird didn't fly, but it was a great experience. <b>I would like to thank <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> for agreeing to join this competition this late and putting in immense work, an absolute gem to have as a teammate</b>, and my other teammates <a href="https://www.kaggle.com/parthdhameliya77" target="_blank">@parthdhameliya77</a>, <a href="https://www.kaggle.com/vatsal" target="_blank">@vatsal</a> and <a href="https://www.kaggle.com/drthrevan" target="_blank">@drthrevan</a> for accepting us to the team.</p> <p>It was my first serious experience with audio data and <a href="https://www.youtube.com/watch?v=iCwMQJnKk2c&amp;list=PL-wATfeyAMNqIee7cH3q1bh4QJFAaeNv0" target="_blank">this</a> YouTube playlist helped me get up to speed and really get the hang of audio data. Thanks to the Kaggle community and the winners of the previous Birdcall competition for sharing their solutions, it was easy for us to get the baselines up and ready.</p> <h1>Things that worked</h1> <ul> <li>Using deltas for stacking instead of stacking the same spectrograms 3 times (idea taken from Birdcall's 4th place solution)</li> <li>RMS for cropping audio instead of random cropping (idea taken from Birdcall's 4th place solution; a minimal sketch is given at the end of this post)</li> <li>ResNeXt and NFNet series of models</li> <li>Post-processing</li> </ul> <h1>Things That didn't work</h1> <ul> <li><p>I tried to mimic last year's 1st place solution SED model <a href="https://www.kaggle.com/tanulsingh077/sed-model-tanul-s-version" target="_blank">here</a>, but the results weren't great because one epoch was an hour long and it took at least 35 epochs to converge; we were outdone by the limited hardware power and time here.
But we learned the absolute ins and outs of the model.</p></li> <li><p>CPMP's sampling method: instead of taking random crops, crop from the beginning and the end</p></li> <li><p>EfficientNet family of models; I tried to mimic last year's fourth place training strategy but it didn't work out</p></li> <li><p>Longer time durations (10 sec, 15 sec melspecs), different temporal dimensions (changing hop size and frame size)</p></li> <li><p>Vision transformer models; we used code provided by the organizers to generate square mel specs, but I guess that messed up the temporal dimension, and hence the results were not that good</p></li> </ul> <h1>Things that we missed</h1> <ul> <li><p>We used already-saved mel spectrograms to train our models as we were limited by hardware and time, and hence we were not able to use any of the audio-level transforms; now I feel we missed the most there. <b>The key was to make the model robust to noise and overfit less.</b></p></li> <li><p>Failing to use the meta-information given to us in terms of location and time</p></li> </ul> <h1>Our Final Solution Summary</h1> <p>Our final model consisted of nfnet_l0 (5 folds) and 2 resnext50 (5 folds) (with and without delta stacking) trained on 7 sec mel specs, ensembled at the sigmoid level with a post-processing layer. The post-processing was something which gave us the biggest boost; it was <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a>'s great idea, so I will let him explain it in the comments.</p> <p>I take away a lot of learnings from this competition and I will be waiting eagerly for the next audio competition.</p> <p>Thanks <a href="https://www.kaggle.com/kamal" target="_blank">@kamal</a>, basics are always the key to winning a Kaggle competition.</p> <p>Hi <a href="https://www.kaggle.com/jonathanbesomi" target="_blank">@jonathanbesomi</a>. Using deltas, our resnext CV improved from 0.712 --&gt; 0.722 and our nfnet's CV increased from 0.728 --&gt; 0.730.</p> <p>NFNet was better than ResNeXt, but their ensemble did not work for some reason and gave bad CV as well as LB.</p> <blockquote> <p>How would you have known this? Have you learned something now that would have let you pick this submission?</p> </blockquote> <p>Not before reading Psi's solution. Our CV was the same as everyone else's: we used the 20 train soundscape files to evaluate and chose the best threshold. But since we were training using 7 second clips and were also selecting the clips randomly, and moreover our CV on train soundscapes correlated well with the LB, that made us careless and we didn't think much.</p> <blockquote> <p>It is interesting to see that 3 of the things that did not work for you did marvels for me. Devil is in details probably.</p> </blockquote> <p>You are right, the devil is in the details. We might have made it work if only we had time, but I learned a lot and I am fortunate that I participated; maybe I can do well in next year's competition.</p>
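<p>A minimal sketch of the RMS-based cropping mentioned above under "Things that worked": pick the crop window with the highest RMS energy instead of a random position. The frame length, hop size and crop duration here are assumptions for illustration, not the team's exact implementation.</p> <pre><code>import numpy as np
import librosa

def rms_crop(y, sr, crop_sec=7.0):
    # Pick the crop window with the highest average RMS energy instead of a random crop.
    crop_len = int(crop_sec * sr)
    if len(y) &lt;= crop_len:
        return np.pad(y, (0, crop_len - len(y)))
    hop = 512
    rms = librosa.feature.rms(y=y, frame_length=2048, hop_length=hop)[0]
    win = max(1, crop_len // hop)
    # mean RMS of every candidate window, expressed in frame units
    scores = np.convolve(rms, np.ones(win) / win, mode="valid")
    start = min(int(np.argmax(scores)) * hop, len(y) - crop_len)
    return y[start:start + crop_len]

# hypothetical usage
y, sr = librosa.load("XC12345.ogg", sr=32000)
crop = rms_crop(y, sr)
</code></pre>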
BirdCLEF 2021 - Birdcall Identification
My journey (11th solution, with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I shared code and the paper I submitted to the workshop: <a href="https://github.com/jfpuget/STFT_Transformer" target="_blank">https://github.com/jfpuget/STFT_Transformer</a></p> <blockquote> <p>I'd be curious how it compares to training from scratch.</p> </blockquote> <p>Me too actually, but I didn't felt motivated enough to run the experiment. I may do it now that dust settled a bit.</p> <p>Thanks. Yes, I am quite proud of how I used vision transformers.</p> <blockquote> <p>Did you use soundscape level predictions too? </p> </blockquote> <p>like everyone else.</p> <p>People tuned thresholds using model predictions on train soundscape.</p> <p>In your case, the issue is probably that you did not correct for the different proportion of no calls. Compute F1score on no call rows separately fromF1 score on bird call rows, then compute final score with:</p> <p>score_all = 0.54 * score_nocall + (1 - 0.54) * score_birds</p> <p>If you compute CV this way then CV and public LB are almost identical.</p> <p>I still don't get why you say it is luck.</p> <p>ViT and DeiT have position embeddings on a 24x24 grid. I interpolated them to the new grid, for instance to a 16x36 grid.</p> <p>For this I modified timm's implementation of ViT and Deit position embeddings as follows.</p> <pre><code>def resize_pos_embed(posemb, new_size, num_tokens=1): # Rescale the grid of position embeddings when loading from state_dict. Adapted from # https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224 if num_tokens: posemb_tok, posemb_grid = posemb[:, :num_tokens], posemb[0, num_tokens:] else: posemb_tok, posemb_grid = posemb[:, :0], posemb[0] gs_old = int(math.sqrt(len(posemb_grid))) posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) posemb_grid = F.interpolate(posemb_grid, size=new_size, mode='bilinear', align_corners=False) posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, new_size[0] * new_size[1], -1) posemb = torch.cat([posemb_tok, posemb_grid], dim=1) return posemb </code></pre> <p>Thanks! We missed you in this comp.</p> <p>Still what is specific to me here? All top team used soundscape to tune thresholds.</p> <p>Given you used 3 soundscape data you are fitting to it, and relying on other soundscape looks leaky to me. </p> <p>Thanks. </p> <p>1) I used stratified k fold with shuffle.</p> <p>2) For mixup have a look at my Cornell writeup, I provide all details there. </p> <p>Thanks, I spent quite some time with vision transformers to make them work here.</p> <p>Why would all my subs be lucky then?</p> <p>How come my best CV is best on both public and private LB?</p> <p>I don't think it has to do with luck to be honest.</p> <p>Let's agree to disagree ;)</p> <p>My hunch is that private LB has less nocalls, hence I probably have optimized for a too high proportion of no calls. Maybe that's what you want to say afterall?</p> <blockquote> <p>Used same idea but with: 0.56* score_nocall + (1 - 0.56) * score_birds</p> </blockquote> <p><a href="https://www.kaggle.com/imeintanis" target="_blank">@imeintanis</a> why 0.56? 
If you predict only no call it would be wrong.</p> <blockquote> <p>I think this probably caused your shake-down.</p> </blockquote> <p><a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> why is that? Everyone used soundscape to tune rounding thresholds. What is specific to my case that would be worse than for other teams?</p> <p>I am really curious, you maybe right but I don't get why.</p> <p>Thanks. Yes, I spend a LOT of time tuning ViT/Deit training and I am very happy to have succeeded somewhat. It is why I am not too disappointed by the end result ;)</p> <p>I am now convinced that training only on 5 second clips is not the best, and next time I'll certainly have a look at your SED model ;)</p> <p>And congrats on becoming GM soon!</p> <p>Thanks!. Training ViT was tricky as they are quite unstable. Maybe I'll focus on this part in my note as the rest is just fine tuning previous solutions.</p> <blockquote> <p>What was your guys highest CV on all soundscape recordings? </p> </blockquote> <p>0.80, same as LB</p> <p>For CV I used the same as in my Cornell solution. Compute F1score on no call rows separately fromF1 score on bird call rows, then compute final score with:</p> <p><code>score_all = 0.54 * score_nocall + (1 - 0.54) * score_birds</code></p> <p>This way a nocall sub CV is the same as the nocall submission LB.</p> <p>This made my CV and LB identical for most of my submissions.</p> <p>Updating my post as this is important.</p> <blockquote> <p>you did in cornell but somehow it gave worse results , did you do the same in this comp?</p> </blockquote> <p>Yes, as I wrote, i used the same approach as in Cornell.</p> <p>I wrote it ;)</p> <blockquote> <p>The same 5 folds CV model (efficientnet b3 on 5 seconds mel spectrograms)</p> </blockquote> <p>Yeah, I thought that skipping the stem convolution and directly input the time slices as patch embeddings would be hard to train from scratch because training data is small. Reusing pretrained wieghts was safer.</p> <blockquote> <p>Congrats <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> ! I couldn't find it, did you train on whole clips or x seconds clips? </p> </blockquote> <p><a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> thanks. I updated the post to make it clearer. I used first and last 5 seconds of short train audio.</p> <blockquote> <p>Private LB has 0.47 sample submission btw.</p> </blockquote> <p>I am too lazy to check, but I bet that if I change my formula to this then Cv and private LB match:</p> <p><code>score_all = 0.47* score_nocall + (1 - 0.47) * score_birds</code></p> <p>I decided to join early this competition because I was frustrated by the previous two bird song competitions. In Cornell competition I joined too late. In Rainforest competition, missed some key train/test distribution insights and stagnated in the LB after a great start.</p> <p>The same 5 folds CV model (efficientnet b3 on first and last 5 seconds mel spectrograms) as in <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183219" target="_blank">Cornell competition</a> plus <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220304" target="_blank">improvements from my Rainforest solution</a> gave 0.69 public LB (0.60 private LB) at my first submission. This made me very happy as I took the lead on the public LB with it.</p> <p>The main difference with Cornell competition is that we were provided train soundscape. 
As many I trained models on short audio records and tuned rounding thresholds on soundscape. For CV I used the same as in my Cornell solution. Compute F1score on no call rows separately from f1 score on bird call rows, then compute final score with:</p> <p>score_all = 0.54 * score_nocall + (1 - 0.54) * score_birds</p> <p>This way a nocall sub CV is the same as the nocall submission LB.</p> <p>This made my CV and LB identical for most of my submissions.</p> <p>Tuning thresholds improved my public LB to 0.75 (0.64 private) at my 5th submission, after 2 days. CV and LB were identical so far. At the time everyone else score was well below 0.70.</p> <p>This great start made me will to hide my score and I stopped submitting until someone matched my score. It took 2 weeks. At this time I submitted the same model bagged twice, i.e. the original model averaged with a second model trained with a different seed. This scored 0.78 on public LB (0.64 LB). I knew it was lucky as CV was 0.76, but it looked great still. It took another 2 weeks for someone to beat this public score. Later, after improving a bit the training procedure I got 0.80 public LB (0.66 private) with a 2 seeds x 5 folds submission of my baseline.</p> <p>It means that my baseline alone gives me a top 20 final rank.</p> <p>Given single models were so great I assumed ensembling would move me ahead further and I decided to focus on creating a wide range of individual models. This is my main mistake, I should have worked on ensembling way earlier. </p> <p>I only submitted ensembles after the submission outage, 3 days before deadline, and discovered that it was hard to have a blending ensemble that beats all its individual component models. I beat my best individual model only in my last submission. It has both a CV and a public LB of 0.80 and is also my best private LB at 0.67. </p> <p>I started a stacking model last day, but this was too late… So be it.</p> <p>While I was holding top public score I decided to not submit and I explored lots of different models. In particular I explored vision transformers. I started with ViT and Deit. First try with 384x384 mel spectrograms were disappointing. Then I realized that i could use other image dimensions instead of a 24x24 grid of 16x16 patches. I tried 12x48, i.e. a 192x768 spectrogram, and also a 16x36 grid (256x576 spectrogram). The only trick is to modify the position embeddings to match the new grid dimension.</p> <p>This led to 0.75 public LB (0.64 private).</p> <p>But the most interesting one was to forget about square patches altogether. A 196x576 spectrogram can be seen as 576 time slices of size 196. Each slice contains 16x16 entries. It means that I could just use the time slices as input patches. Here is how this input looks once it is reshaped as a 24x24 grid of 16x16 patches:</p> <p><img src="https://i.imgur.com/F3zVGVM.pngd" alt="time slices"></p> <p>Maybe surprisingly, vision transformers are happy with this input. The main advantage is that there is no longer any issue with translation on the frequency axis, which is the main issue with CNNs applied to spectrograms.</p> <p>Blending Deit trained on this input with my baseline gave my best sub. The Deit model alone scores 0.77 on public LB and 0.66 on private LB.</p> <p>Although I am disappointed by my final result, I am happy to have explored lots of vision models and devised some new ways to use them. 
And being disappointed by a solo gold in a deep learning competition is something I would not have imagined one year ago anyway ;)</p> <p>Special thanks to Ross Wightman for his timm package. It made my model exploration seamless. </p> <p>Edit. I shared code and the paper I submitted to the workshop: <a href="https://github.com/jfpuget/STFT_Transformer" target="_blank">https://github.com/jfpuget/STFT_Transformer</a></p> <p>I won't. I don't feel motivated to clean and document it. Sorry.</p> <p>;) You made me share my postprocessing: <a href="https://www.kaggle.com/c/birdclef-2021/discussion/243378" target="_blank">https://www.kaggle.com/c/birdclef-2021/discussion/243378</a></p> <p>Which is the strong model and the weak model?</p> <p>Anyway, I give details about my model in my Cornell writeup, link in the post.</p> <p>I had submitted my deit model actually, alone it scores 0.77/0.66 which is not too bad.</p> <p>Thanks.</p> <p>It was the first time I did this, and in hindsight I think it was a mistake. It was a mistake because my real score never went above 0.80, which was the public score.</p> <p>The main issue is rather to stay motivated without being pressured by followers on the LB. I did my best but I see I made more progress during last week, when others had closed the LB gap with me, than in the previous 6 weeks.</p>
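<p>A minimal sketch of the "time slices as patches" idea described above, assuming a spectrogram with 256 frequency bins and 576 time frames, so that each single time slice flattens to one 16x16 patch and the 576 slices fill DeiT's 24x24 patch grid. The tensor layout below is an illustration of the idea only, not the exact code used in the solution.</p> <pre><code>import torch

def slices_to_patches(spec):
    # spec: (batch, 256, 576) spectrogram with 256 frequency bins and 576 time frames.
    # Each single time slice holds 16 x 16 = 256 values, so the 576 slices can replace
    # the 24 x 24 grid of 16 x 16 image patches that DeiT expects.
    b, n_freq, n_time = spec.shape
    assert n_freq == 16 * 16
    return spec.permute(0, 2, 1).contiguous()   # (batch, n_time, n_freq), one patch per slice

x = torch.randn(2, 256, 576)
patches = slices_to_patches(x)                  # (2, 576, 256)
grid = patches.reshape(2, 24, 24, 16, 16)       # the 24 x 24 grid of 16 x 16 patches from the figure
</code></pre> <p>Because a whole time slice becomes one patch, translations along the frequency axis no longer move content between patches, which is the main point made above.</p>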
BirdCLEF 2021 - Birdcall Identification
One trick that can give you a bronze medal or above
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you Kaggle and the hosts for this interesting competition! Congrats to all the winners and people who have participated! While my team is working on the writeup, I'd like to share the post-processing we used in this competition.<br> Just by adding the post-processing alone, kkiller's public inference notebook can achieve 0.62 (in the bronze medal range) on the private leaderboard. Just for reference, the original inference notebook is 0.55. So that's a +0.07!</p> <p>Back to the main topic, our post-processing idea is from <a href="https://www.kaggle.com/triplex" target="_blank">@triplex</a> 's Cornell writeup (<a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183255" target="_blank">https://www.kaggle.com/c/birdsong-recognition/discussion/183255</a>).<br> To quote his words:<br> """<br> if model gets a confident prediction of any bird, then lower threshold for this bird in the same audio file<br> use thr_median as initial threshold<br> use thr_high for confident prediction<br> if any bird with probability higher than thr_high in any clip, lower threshold to thr_low for this specific bird in the same audio file<br> """<br> His post-processing code can be found here: <a href="https://github.com/xins-yao/Kaggle_Birdcall_17th_solution/blob/master/inference.py" target="_blank">https://github.com/xins-yao/Kaggle_Birdcall_17th_solution/blob/master/inference.py</a></p> <p>I adopted his idea and used optuna to tune the thr_dict to get the optimal thresholds according to CV (a minimal sketch of this rule is given at the end of this post). <br> Besides this, I've also tried using location information to adjust the thresholds. We did not end up using location information in our submission because it consistently gave us -0.01 on the public leaderboard, but in fact it gives a big boost on the private leaderboard (our best submission is 0.68 with location processing). </p> <p>Here's the general idea for adding location:<br> In the train audios, I computed the distance between the location of the recording and the 4 sites (COL, COR, SNE, SSW) based on their coordinates. I also extracted the month information from the metadata. During test time, I use the location and month data of each test audio and get a list of birds that are not present near that location (I set the radius to 360 km, but with bird domain knowledge you can definitely change this number) during that month, based on the location and month information I extracted from the train audio data. I then increase the threshold for non-present birds. This threshold is also tunable!<br> I will do more experiments on location once I get more GPU time. </p> <p>Thank you for reading!</p> <p>I also used thr_bottom as well! Here's the thr_dict that I used in one of my notebooks (public: 0.6766, private: 0.6162): thr_dict = {'high': 0.6219136066238313, 'median': 0.21652254656576478, 'low': 0.16546842334609826, 'bottom': 0.021796972061225543}. With some further finetuning, I believe you can get the score higher for sure. Hope that helps!</p>
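<p>A minimal sketch of the adaptive-threshold rule quoted above, applied to the probabilities of one audio file. The default threshold values are rounded from the thr_dict mentioned in the comment; the function itself is an illustrative assumption, not the exact code.</p> <pre><code>import numpy as np

def adaptive_threshold(probs, thr_high=0.62, thr_median=0.22, thr_low=0.17):
    # probs: (n_clips, n_species) probabilities for ONE audio file.
    # Start from thr_median; if any clip in the file is confident about a bird
    # (probability at least thr_high), lower that bird's threshold to thr_low
    # for every clip of the same file.
    thresholds = np.full(probs.shape[1], thr_median)
    confident_birds = (probs &gt;= thr_high).any(axis=0)
    thresholds[confident_birds] = thr_low
    return probs &gt;= thresholds

# hypothetical usage: 120 five-second clips of one soundscape, 397 species
preds = adaptive_threshold(np.random.rand(120, 397))
</code></pre>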
BirdCLEF 2021 - Birdcall Identification
Part of the 18th place solution (Iafoss's model)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> . I think your previous year solution still was the most influential, at least for our team.</p> <h2>Summary</h2> <ul> <li>Sequence based model trained on weak labels</li> <li>Multi-head self-attention applied to entire sequences</li> <li>Train-Segment-Shift steps: generation of PL, building a segmentation model, and domain shift mitigation.</li> <li>Rainforest noise</li> <li>postprocessing based on 2.5s shifted sequence</li> <li>best <strong>single model performance 0.7598/0.6486</strong> at public/private LB</li> </ul> <h2>Introduction</h2> <p>Congratulation to all participants. To begin with, our team would like to thank organizers and Kaggle team for making this competition possible. I also would like to express my gratitude to my teammates: <a href="https://www.kaggle.com/urvishp80" target="_blank">@urvishp80</a> , <a href="https://www.kaggle.com/whurobin" target="_blank">@whurobin</a> , <a href="https://www.kaggle.com/philipkd" target="_blank">@philipkd</a> , and especially to <a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a> : he helped me today until 3:30 AM of his time to assemble the final submissions. Below I'll share the key point of our approach, and <a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a> may publish a separate post describing specific details of his part.<br> This competition was relatively short for me: I had just ~2 weeks after finishing another competition. Below I provide the key points of my portion of our solution.</p> <h2>Model</h2> <p>My part of the solution is largely influenced by the method <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183258" target="_blank">I proposed in 2020 Cornell Birdcall Identification challene</a> targeted at building a segmentation model based on weak classification labels.</p> <p><img src="https://i.ibb.co/1bZ0Ghg/image.png" alt=""></p> <p>I collapse the frequency domain into dim of size 1 and then consider the produced tensor as a sequence and then apply multi-head self-attention blocks to it, like in transformers. The produced output with stride of ~0.3s is merged with logsumexp (LSE) pooling to produce the prediction global prediction. In this year I quite modified this approach, and the model predictions look much better in comparison with ones from the previous year. First, following <a href="https://arxiv.org/pdf/1411.6228.pdf" target="_blank">this article</a> I introduced temperature to LSE equal to 5. Another trick is enforcing the minimum of predicted sequence to be 0: bird call is only present in a short time frame. It is achieved by an additional LSE output with -1 temperature followed by applying a loss with zero GT label. These modifications drastically boosted the model performance. Another important thing is using sufficiently short audio segments during training. 
Long segments help to overcome label weakness, but I realized that a model trained on long segments learns only easy sounds ignoring the rest (it is often just enough to find an easy example in a long sequence to assign the corresponding label). To encourage the model to learn difficult examples I used 5s chunks during training (but I used PL based sampling to deal with weak labels). Important component of the model is a multi-head self-attention applied to entire sequence: in my experiments this component boosted F1 of sequence based predictions (10 min) by 0.01-0.02 in comparison with CV evaluated based solely on 5s chunks.</p> <p><strong>Additional details:</strong><br> Backbone: ResNeXt50<br> Loss: Focal loss, corrected to be suitable for soft labels<br> Augmentation: <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183199" target="_blank">MixUp with max label</a> (proposed by <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> ), pink noise, external noise<br> <strong>[important]</strong> n_ftt = 2048. It appeared that models trained with n_ftt = 1024 are very susceptible to the domain shift, and n_ftt = 2048 leads to much better performance on soundscapes.</p> <h2>Train-Segment-Shift</h2> <p>It is the concept that was used to address the competition problem. <strong>There are 2 major issues to address in this challenge: (1) label weakness and noise, and (2) domain mismatch</strong> between train and test data. My approach consists of 3 steps: train, segment, and shift.</p> <p>(1) I begin with creating a segmentation model based on the original labels assigned to entire clips. The main difficulty is that clip labels are not equal to chunk labels because the clips may have no bird calls in particular parts or bird calls from other species. So <strong>direct training of a model on clip labels assigned to chunks cannot produce a good model</strong>, but it gives PL for the next step.</p> <p>(2) Run similar setup as step 1 but do sampling based on PL segmentation, making sure that the primary label bird call is included in each selected audio segment. In addition, I applied segmentation loss to region having high confidence pseudo labels, while masking regions where confidence is not sufficient. It is done to guide the model, and it improved the quality of segmentation labels at the end. <br> The image below shows the model predictions from the same audio for top 5 classes. Blue color corresponds to the primary label, while orange corresponds to the secondary label. The bottom image shows an example of segmentation PL used at step2 and step3. Red bars indicate high confidence PL, while orange ones correspond to low confidence PL (used for masking).<br> <img src="https://i.ibb.co/SXqcYw6/image.png" alt=""><br> At this step I got ~0.85 F1 CV (based on short audio) evaluated by taking max values for predictions within entire clip train files (so the provided weak labels can be used for model evaluation). However, if I try to apply this model directly to train soundscapes (which are similar to test data), I get only 0.67 CV (soundscapes), which indicated the domain mismatch between short audio and soundscapes.</p> <p>(3) That's why the next step is crucial: domain shift accommodation. In this competition the mismatch is not as bad as one year ago (in that case soundscapes were recorded with 16 kHz rate only). Though, if you plot the spectrograms or listen soundscapes, they are still drastically different from short train clips. 
<br> Initially I tried to add noise extracted from train soundscapes (with proper CV split) to each loaded audio segment (I didn't use train soundscapes directly, only noise from them) + pink noise. It didn't work well giving 0.7231/0.6401 at public/private LB. Next, I realized that using <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/data" target="_blank">Rain Forest data</a> works really well to eliminate FP. This data contains multiple background sounds and sounds similar to bird calls but not bird calls. <strong>So the model learns to ignore sounds similar to bird calls as well as bird calls from unknown species.</strong> This approach boosted the single model performance to 0.7598/0.6486 at public/private LB.</p> <p>For example, the image below shows a single model single fold predictions for the first two minutes of 10534_SSW_20170429.ogg soundscape for top 10 classes. The predictions look really confident for detecting nocalls. There is some confusion between some classes though. Also, I noticed that GT labels missed several calls: it is quite clear that the calls are present, but probably experts were not confident enough about the species because of the noise.</p> <p><img src="https://i.ibb.co/tpmRS8w/Bird-CLEF2021-test.png" alt=""><br> One may ask, why do we need step2 and high accuracy segmentation PL. Without noise the model is able to localize the areas with corresponding bird calls. Meanwhile if extensive noise is added, the model needs hints on localization of birdcalls and ignoring the rest. Segmentation PL help to guide training.</p> <h2>Postprocessing</h2> <p>One important portion of our solution is postprocessing. We averaged the prediction for a given 5s chunk with predictions for chunks shifted by 2.5 seconds forward and backward (one can consider a shifted audio), so <code>p = 0.5*p0 + 0.25*p_r + 0.25*pl</code>. This postprocessing is especially effective to deal with cases when a birdcall is located at a chunk boundary. In addition, <a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a> was using a postprocessing based on multiplying all chunk predictions for a given class by 1.3 if the maximum predicted value for the given class over the entire sequence is reaching a particular threshold.</p> <h2>Ensembling</h2> <p>Our best final submission is combination of 2 models having 0.7471/0.6206 and 0.7598/0.6486 public LB and private LB scores which gives 0.7736/0.6605 for weighted ensemble.</p> <p>You are very welcome</p> <p>Thanks so much, I'm glad that we worked together here. It was short but quite intense competition for me. </p> <p>Thanks <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a></p> <p>Thanks so much <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>. Congratulations to you as well with getting solo gold! I hope you are not very disappointed by the results, but I was really impressed by the score you had during the competition. It motivated me to look into more things. </p> <p>Regarding the model, it may be a misunderstanding, but I worked with mel spectrograms as well… (I think u may get confused with my distribution of predicted labels over time). I approached the task as a segmentation problem and tried to improve my 2020 method as much as I could (I didn't use SED because in 2020 it didn't work well for me). 
</p> <p>Thanks.<br> (1) Actually, as I mentioned training on longer sequences helps to deal with weak labels but degraded the model ability to handle difficult cases (balance is important). Training on longer clips was helpful in 2020, but here I tried to incorporate the things related to loss (quite helped to improve the model - you can check my previous year plots) and sampling based on PL. I didn't have time to experiment with longer sequences used for training (there were just a few attempts where I tried it without any noticeable improvement). However, longer sequences were very helpful at inference (given the model architecture I use). For example, my model got 0.72-0.73 CV on 5s chunks. While if I do the same validation on 10 min clips passed to the model I got 0.75-0.76, as I remember, and shift based postprocessing boosted the things to ~0.78.<br> (2) Yes, it's the same as in the previous year solution. The important thing, though, is using nfft=2048 - it is one of the key points of elimination the domain gap (though private LB performance was similar for nfft=1024 and nfft=2048)<br> (3) The time was very short for me. So I just took the model I built in 2020. The thing I'd change or wanted to experiment with? Probably use of a SWIN like transformer base block in the head or attention that incorporates relative positions within fixed window.</p> <p>For 5s chunks I have 128x512 size (I set hop length to ensure this dim).</p>
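<p>A minimal PyTorch sketch of the temperature-scaled logsumexp (LSE) pooling described in the model section above, including the extra negative-temperature output that is trained against a zero label to push the per-class minimum over the sequence towards zero. The shapes, the head output and the loss wiring are assumptions for illustration, not the exact training code.</p> <pre><code>import torch
import torch.nn.functional as F

def lse_pool(logits, temperature):
    # Temperature-scaled logsumexp pooling over the time axis.
    # logits: (batch, time, n_classes). A large positive temperature behaves like a
    # soft max over time, a negative temperature like a soft min.
    t = temperature
    n = logits.shape[1]
    return (torch.logsumexp(t * logits, dim=1) - torch.log(torch.tensor(float(n)))) / t

# hypothetical head output and clip-level targets
seq_logits = torch.randn(4, 32, 397)                 # (batch, time, n_classes)
clip_targets = torch.randint(0, 2, (4, 397)).float()

clip_logits = lse_pool(seq_logits, temperature=5.0)  # soft max over time, matched to clip labels
min_logits = lse_pool(seq_logits, temperature=-1.0)  # soft min over time
loss_clip = F.binary_cross_entropy_with_logits(clip_logits, clip_targets)
loss_min = F.binary_cross_entropy_with_logits(min_logits, torch.zeros_like(min_logits))
loss = loss_clip + loss_min
</code></pre>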
BirdCLEF 2021 - Birdcall Identification
Secondary labels are effective for bronze medal (76th solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for the interesting competition, and congratulations to the winning teams!</p> <p>This is our 76th (Private: 0.6109, Public: 0.6861) solution.</p> <h3>Our solution summary</h3> <p>(1) The baseline framework is kkiller’s public notebook (thanks to <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a>!)<br> <a href="https://www.kaggle.com/kneroma/clean-fast-simple-bird-identifier-training-colab" target="_blank">https://www.kaggle.com/kneroma/clean-fast-simple-bird-identifier-training-colab</a></p> <p>(2) Just adding secondary label information</p> <ul> <li>primary_label: 0.995</li> <li>secondary_labels: 0.5 (using 0.7 or 0.995 gives the same performance)</li> <li>others: 0.0025 </li> </ul> <p>(3) Choosing the threshold that is optimal for the train soundscapes</p> <p>We also tried data augmentation (noise, mixup … ), test time augmentation, and so on. However, these things did not work in our environment.</p> <p>Thank you!</p>
BirdCLEF 2021 - Birdcall Identification
[1st Place] Quick Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2021 - Birdcall Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!<br> We thought we couldn't beat you, so we were surprised to see the results.</p> <p>Thank you. It was our goal to become masters. I'm happy!</p> <p>We are planning to share the code for all stages. We are working on cleaning it up, so please wait!</p> <p>In the 2nd stage, we got F1 scores of 0.73~0.76 on train soundscapes (on 1 fold). The number of epochs was 20~80.<br> The training process is almost the same as kkiller's notebook, but we did StratifiedGroupKFold grouped by authors, and some data augmentation (mixup) worked.<br> I will share the detailed training process later!</p> <p>In the 2nd and 3rd stages, train_soundscapes was used as validation.</p> <p>We were worried that our model might be overfitting the train_soundscapes because the score was not good enough on the public LB, but it was top on the private LB!</p> <p>Thanks!<br> The pipeline is complicated and it's taking a while to clean up…</p> <p>Thanks, and congratulations on becoming a Competitions Master!</p> <p>Since we had converted the audios to melspectrograms beforehand, the computational cost for CNN training was quite low.<br> I think Colab Pro is a sufficient computational resource for this competition.<br> I will share the details later!</p> <p>Thank you!<br> Your team entered this contest at about the same time as us and submitted about the same number of times, so I was watching you.<br> Congratulations on winning 2nd place!</p> <p>Thanks to all the hosts, participants, and my teammates <a href="https://www.kaggle.com/startjapan" target="_blank">@startjapan</a> , <a href="https://www.kaggle.com/namakemono" target="_blank">@namakemono</a> .<br> We are also surprised by this result. We think it is because our solution was quite different from the other teams'.</p> <p>I'm not confident in my English, but I hope you all can understand.</p> <p>We didn't expect to get the top spot, so we weren't prepared to share our solution.<br> Here is a brief explanation of the solution. There are three stages.</p> <h2>1st stage</h2> <p>We used freefield1010 to train a classifier that predicts, from the melspectrogram, whether a clip is nocall or not.<br> <a href="https://academictorrents.com/details/d247b92fa7b606e0914367c0839365499dd20121" target="_blank">https://academictorrents.com/details/d247b92fa7b606e0914367c0839365499dd20121</a></p> <h2>2nd stage</h2> <p>We used short audio to predict which bird is singing.<br> kkiller( <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a> )'s notebook helped us a lot. 
I think it would have been difficult for us to win without it.<br> The short audio has noisy labels, so we used the results of the 1st stage to weight the labels.<br> We used train sound scapes as validation.<br> The number of models used in the final submission was 10.</p> <h2>3rd stage</h2> <p>We extracted 5 candidates from the results of the 2nd stage, and together with the information from meta data, we trained lightgbm to predict whether the bird would be included in the answer.<br> Only short audio was used for training, and train sound scapes were used for validation.</p> <h2>Post-processing</h2> <p>The optimal threshold is determined using Ternary search.<br> Since the threshold varies depending on the percentage of nocall, we used both the case where the percentage of nocall is not changed and the case where the percentage is reduced to 54% as the final submission. However, it was better not to change it.</p> <h2>Machine</h2> <p>All three of us used Colab Pro as our main machine.</p> <p>There are a few other tricks.<br> I'll share the detailed solution later. Please look forward to it.</p> <h2>6/4</h2> <p>My teammate <a href="https://www.kaggle.com/startjapan" target="_blank">@startjapan</a> wrote about our detailed solution.<br> If you're interested in our detailed solution, please check it out !</p> <p><a href="https://www.kaggle.com/c/birdclef-2021/discussion/243927" target="_blank">https://www.kaggle.com/c/birdclef-2021/discussion/243927</a></p> <p>Thanks for your great notebook! Throughout the long period of the competition, you have been our goal !</p>
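<p>To make the post-processing step concrete: the write-up says the optimal threshold was found with ternary search on the train soundscapes. Below is a minimal, hedged sketch of such a search, using sample-averaged F1 from scikit-learn as a stand-in for the competition metric. The metric-versus-threshold curve is only approximately unimodal, so this is a heuristic rather than an exact optimiser, and all names are placeholders rather than the team's code.</p> <pre><code>import numpy as np
from sklearn.metrics import f1_score

def score_at(th, probs, labels):
    """Sample-averaged F1 at a given global threshold."""
    return f1_score(labels, (probs &gt;= th).astype(int),
                    average="samples", zero_division=0)

def ternary_search_threshold(probs, labels, lo=0.0, hi=1.0, iters=50):
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if score_at(m1, probs, labels) &lt; score_at(m2, probs, labels):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

# toy example: 4 clips x 3 species
probs = np.array([[0.9, 0.1, 0.4], [0.2, 0.8, 0.3], [0.7, 0.6, 0.1], [0.1, 0.2, 0.9]])
labels = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
print(round(ternary_search_threshold(probs, labels), 3))
</code></pre>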
BirdCLEF 2022
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We experimented quite a bit with EfficientNet(V2) as these indeed typically do very well on all tasks, but the results were much lower than the resnext backbones which is why we didn't really include it in the blend. It was also hard to tune weights for a blend as there was no validation set of soundscapes available.</p> <p>Closing time! Very painful to see us finish 1 spot shy from a gold medal, but you win some you lose some. <strong>EDIT:</strong> due to one team being disqualified, we managed to actually get the last gold spot!</p> <p>Our final solution is an ensemble of five different models trained with different backbones or spectrogram configurations. Each of those models are heavily inspired by the <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">second place solution of last year</a> which was re-implemented on this dataset by <a href="https://www.kaggle.com/julian3833" target="_blank">@julian3833</a> in <a href="https://www.kaggle.com/code/julian3833/birdclef-21-2nd-place-model-submit-0-66" target="_blank">this notebook</a>.</p> <h1>Phase I: tuning the threshold</h1> <p>Early in the competition, we were still kind of doubting whether to commit or not, due to the lack of a true validation set. I decided to just start from the notebook mentioned above but apply the same thresholding technique as the 2nd place solution of last year in which they predicted the top-K percentile of probabilities to be True. Last year, the threshold was extremely high at around <code>0.9987</code>, this year, a threshold of <code>0.69</code> (nice) gave us a score of around 0.73-0.75 on the LB. In the end, the threshold of our ensemble was set to be <code>0.78</code>, it turned out <code>0.75</code> even worked slightly better on the private.</p> <h1>Phase II: improving the pipeline</h1> <p>We decided to try and add different things to the pipeline in order to improve it. Most of the things we tried ended op in section 4 of this write-up unfortunately. But one thing that worked really well was simple augmentation on spectrogram-level (<a href="https://arxiv.org/abs/1904.08779" target="_blank">SpecAugment</a>). We just masked time &amp; frequency bands.</p> <h1>Phase III: the ensemble</h1> <p>As mentioned, two different spectrogram configurations and different backbones were used. 
The spectrogram configurations were:</p> <p>A:</p> <pre><code>cfg.window_size = 1024 cfg.hop_size = 320 cfg.sample_rate = 32000 cfg.fmin = 50 cfg.fmax = 14000 cfg.power = 2 cfg.mel_bins = 64 cfg.top_db = None </code></pre> <p>and</p> <p>B:</p> <pre><code>cfg.window_size = 1024 cfg.hop_size = 512 cfg.sample_rate = 32000 cfg.fmin = 16 cfg.fmax = 16386 cfg.power = 2 cfg.mel_bins = 128 cfg.top_db = 80.0 </code></pre> <p>The five models in our ensemble were:</p> <ul> <li><code>seresnext26t_32x4d</code> with A and B (all scored 0.81 on public)</li> <li><code>eca_nfnet_l0</code> on A and B (scored 0.81 on public)</li> <li><code>seresnext50_32x4d</code> with B (scored 0.81 on public)</li> </ul> <h1>Phase IV: post-processing</h1> <p><a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243343" target="_blank">One post-processing trick</a> proposed by <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> worked rather well here too. Here, the audio was slided by some offset (in our case 1.5 seconds forward and backward) and then aggregated as follows: <code>p = 0.5*p0 + 0.25*pr + 0.25*pl</code>. Another trick that worked marginally well is increasing the probabilities of a bird in all clips of a soundscape if we were very confident (90-percentile within that soundscape) of that bird appearing in another clip within that same soundscape.</p> <h1>Things that did not work</h1> <ul> <li>waveform augmentation</li> <li>adding background noise from ff1010 dataset</li> <li>including audio clips with rating &lt;= 2</li> <li>balanced class weights as the metric was something close to macro F1</li> <li>post-processing based on co-occurrences</li> <li>using YamNET as a bird/no-bird classifier and using it to sample crops from our clips</li> </ul> <h1>Our team name</h1> <p>When we had to think of a team name, <a href="https://www.kaggle.com/moeflon" target="_blank">@moeflon</a> googled <code>the dumbest bird in the world</code> and found Kakapoo. The name of that bird contains both 💩 <code>poo</code> 💩 and <code>kaka</code> (which is Dutch for poo) so it really was a no-brainer.</p> <p><img src="https://i.imgur.com/3MVG3Vk.jpg" alt="Kakapoo bird"></p> <h1>Credits</h1> <p>A huge thanks goes out to my teammates <a href="https://www.kaggle.com/moeflon" target="_blank">@moeflon</a> <a href="https://www.kaggle.com/gertjandemulder" target="_blank">@gertjandemulder</a> <a href="https://www.kaggle.com/emield" target="_blank">@emield</a> <a href="https://www.kaggle.com/jeroenvdd" target="_blank">@jeroenvdd</a> </p>
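<p>The sliding post-processing in Phase IV can be written in a few lines. The sketch below (assuming 32 kHz audio, 5-second windows, and a <code>predict_fn</code> placeholder standing in for the ensemble) implements the stated blend <code>p = 0.5*p0 + 0.25*pr + 0.25*pl</code> with shifts of 1.5 seconds; it is an illustration, not the team's code.</p> <pre><code>import numpy as np

SR = 32000                    # sample rate assumed for the competition audio
WIN = 5 * SR                  # 5-second prediction window
SHIFT = int(1.5 * SR)         # 1.5-second slide used in the write-up

def smoothed_window_probs(clip, start, predict_fn):
    """Blend the centred window with windows shifted 1.5 s left and right."""
    def window(s):
        s = int(np.clip(s, 0, max(len(clip) - WIN, 0)))
        return clip[s:s + WIN]
    p0 = predict_fn(window(start))
    pl = predict_fn(window(start - SHIFT))
    pr = predict_fn(window(start + SHIFT))
    return 0.5 * p0 + 0.25 * pr + 0.25 * pl

# toy usage with a dummy "model" returning 3 class probabilities
rng = np.random.default_rng(0)
dummy_clip = rng.normal(size=60 * SR).astype(np.float32)
dummy_model = lambda wav: rng.random(3)
print(smoothed_window_probs(dummy_clip, start=10 * SR, predict_fn=dummy_model))
</code></pre>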
BirdCLEF 2022
15th place summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to team member and all participants.<br> I enjoyed the speech recognition competition very much,<br> but I couldn’t grasp the behavior of the metric and LB until the end.<br> Also, crinoid ’s EDA was very helpful in formulating a strategy.<br> The detailed solution is shown below:</p> <p><strong>Melspectrogram preprocessing parameter</strong></p> <pre><code>sr = 32000 n_mels = 128 fmin = 20 fmax = 16000 n_fft = 32000//10 hop_len = 3200//4 power = 2 top_db = 80 </code></pre> <p><strong>Model Architecture</strong><br> we was using very simple CNN architecture.</p> <pre><code>class Model(nn.Module): def __init__(self,name,pretrained=False): super(Model, self).__init__() self.model = timm.create_model(name,pretrained=pretrained, in_chans=1) self.model.reset_classifier(num_classes=0) in_features = self.model.num_features self.fc = nn.Linear(in_features, cfg.CLASS_NUM) def forward(self, x): x = self.model(x) x = self.fc(x) return x </code></pre> <p>As a backbones we have used:<br> ・eca_nfnet_l0<br> ・convnext_tiny<br> ・resnest50d<br> Model Training<br> we have trained models on 5 sec and 7sec chunks and with secondary labels.<br> At First, we made pretrain models by 2020, 2021, 2022 without scored birds competition data.<br> Next, we have finetunned pretrain models the scored birds datas.</p> <p><strong>Ensemble and TTA</strong><br> We ensemble the single trained model (head 30s train tail5s validation 7s segment) and 5foldmodel (5s segment) and apply TTA:</p> <pre><code>if i &lt;= 2: pred = model(images).sigmoid().detach().cpu().numpy() else: pred1 = model(images[:,:,:,0:201]).sigmoid().detach().cpu().numpy() pred2 = model(images[:,:,:,40:241]).sigmoid().detach().cpu().numpy() pred3 = model(images[:,:,:,80:281]).sigmoid().detach().cpu().numpy() pred = 0.25*pred1 + 0.5*pred2 + 0.25*pred3 </code></pre> <p><strong>Post Processing</strong></p> <p>Model weighted<br> Models with high LB are weighted larger and models with lower LB are weighted smaller.</p> <p>and then we apply voting method:<br> After weight averaging the model’s predictions, we conducted pred re-weighting by voting score. Voting score was calculated by judging ,for each species, whether each model’s pred exceeds the given threshold or not. By doing this, we aimed to grab more rare bird’s TP.</p> <p>how to decide threshold<br> we apply percentaile approach(2021 2nd).</p>
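<p>The voting-based re-weighting is described only briefly, so the sketch below is one plausible reading rather than the team's implementation: the LB-weighted average of the model predictions is scaled by the fraction of models whose individual prediction clears a voting threshold. All names and constants are assumptions.</p> <pre><code>import numpy as np

def vote_reweight(model_preds, model_weights, vote_th=0.1):
    """model_preds: (n_models, n_clips, n_classes) probabilities."""
    model_preds = np.asarray(model_preds, dtype=np.float32)
    w = np.asarray(model_weights, dtype=np.float32)
    w = w / w.sum()
    blended = np.tensordot(w, model_preds, axes=1)      # LB-weighted average
    votes = (model_preds &gt; vote_th).mean(axis=0)        # fraction of models voting "present"
    return blended * (0.5 + 0.5 * votes)                # soften rather than zero out

preds = np.random.rand(3, 4, 5)                         # 3 models, 4 clips, 5 species
print(vote_reweight(preds, model_weights=[0.5, 0.3, 0.2]).shape)
</code></pre>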
BirdCLEF 2022
19th place solution, single CNN model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your question. </p> <p>During the pre-training stage, I was trying to focus on the generalization ability of the model. <br> Firstly, I saved the model's checkpoints based on LRAP value for all birds in the validation fold. <br> After, I evaluated these checkpoints on BirdCLEF2021 (public + private lb). Finally, the last decision was made considering only a weighted sum of public and private scores (w_public=0.35 and w_private=0.65, which are similar to the public-private distribution in the BirdCLEF2021)</p> <p>To begin with, thanks to the Kaggle, group of organizers and other participants. </p> <p>Here you could find my solution based on the single ResNeSt50d model trained on 5 folds. One of the hardest parts in this competitions was validation. In <a href="https://www.kaggle.com/competitions/birdsong-recognition/" target="_blank">Cornell Birdcall Identification competition</a> our team suffered badly from the shakeup thus I have tried to prevent such painful experience at all costs in this competition. </p> <p>I have cut the description of my solution as it was possible, so if you have any questions feel free to ask :)</p> <h4><strong>Data:</strong></h4> <ul> <li><strong>Pre-training:</strong> <ul> <li><strong>BirdCLEF2021 (train_short_audio) + BirdCLEF2022 (train_audio)</strong></li></ul></li> <li><strong>Finetuning data:</strong> <ul> <li><strong>BirdCLEF2022 (train_audio)</strong></li></ul></li> <li><strong>Validation data:</strong><ul> <li><strong>BirdCLEF2021 (public + private lb scores)</strong> were used to compare models after the pre-training stage + to check some of the post-processing tricks;</li> <li><strong>BirdCLEF2022 (validation folds)</strong> were used for model selection based on OOF scores calculated only for scored birds (metric: label ranking average precision);</li> <li><strong>BirdCLEF2022 (public lb)</strong> was used to tune threshold;</li> <li><strong>Artificially created soundscapes</strong> were used for the model selection.<br> This dataset was created using mixtures (with different SNRs) of scored birds from the BirdCLEF2022 (train_audio) and nocalls from the part of BirdCLEF2021 train soundscapes, which weren't used as train augmentations. The segments with scored birds were filtered using nocall detector by the 0.9 percentile. I have tried hard to make this data similar to BirdCLEF2022 (public lb) data in order to tune threshold and try post-processing tricks properly, but all attempts were far from the reality. 
However, I found this created dataset slightly useful for the model selection (in addition to the validation folds)</li></ul></li> </ul> <h4><strong>Pre-processing:</strong></h4> <ul> <li>128-dim melspec + normalization, duration: 7 seconds (I have modified <a href="https://www.kaggle.com/code/kneroma/birdclef-mels-computer-public" target="_blank">notebook</a> from <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a>)</li> </ul> <h4><strong>Models:</strong></h4> <ul> <li>The only one ResNeSt50-based CNN, 5 folds based on StratifiedGroupKFold (grouped by author);</li> <li>Resnet-based nocall detector (used only for pre-training, training losses and for the creation of artificial soundscapes).</li> </ul> <h4><strong>Loss:</strong></h4> <ul> <li>BCE loss on classification head for bird classes:<ul> <li>Weights for scored birds were multiplied by 1.5 (slight boost)</li> <li>Weight for each item was multiplied by the call probability from nocall detector <a href="https://www.kaggle.com/c/birdclef-2021/discussion/243304" target="_blank">idea from top1 BirdCLEF2021 solution</a> (medium boost)</li> <li>Weight=0.3 for secondary classes;</li> <li>Label smoothing;</li></ul></li> <li>Two BCE losses (weighted 0.15 for each) on classification heads for two levels of a species taxonomy: family and order (<a href="https://arxiv.org/pdf/2110.03209.pdf" target="_blank">idea from Mixit paper</a>)</li> </ul> <h4><strong>Augmentations:</strong></h4> <ul> <li>Background noise from ff1010 and from the part of BirdCLEF2021 train soundscapes (this augmentation was used only for pre-training);</li> <li>MixUp;</li> <li>Pink noise;</li> <li>Bandpass noise; </li> <li><a href="https://arxiv.org/pdf/2103.16858v3.pdf" target="_blank">SpecAug based on the mixture masking</a> (thanks to <a href="https://www.kaggle.com/shinmurashinmura" target="_blank">@shinmurashinmura</a> for sharing)</li> </ul> <h4><strong>Post-processing:</strong></h4> <ul> <li>Constant threshold;</li> <li>The probabilities of 6 underrepresented birds (based on the total duration of each bird and on the outputs of model) were multiplied by 2.0 (it has boosted my both public and private scores by 0.03);</li> <li>If the bird is found in the 5-second segment, the probabilities will be increased for this bird in two neighboring segments from each side (public and private scores were increased by ~0.02);</li> <li>Nocall threshold.</li> </ul> <h4><strong>The methods that didn’t work or worked worse:</strong></h4> <p>To be honest, there is gonna be a very long list of things, but I will reduce it to the most promising methods (based on my opinion, in descending order)</p> <ul> <li><a href="https://bird-mixit.github.io" target="_blank">Mixit</a> (Their separation model + my classification model have given 0.75 on private lb);</li> <li><a href="https://arxiv.org/abs/2110.05069" target="_blank">PASST</a> (and other transformer-based models);</li> <li>[PANNs];</li> <li>Finetuning on 79 classes only (scored birds + all birds which occur in the training audio with scored birds);</li> <li>Finetuning on 21 scored classes only;</li> <li>Quantile-based threshold for each soundscape;</li> <li>Dynamic duration for the audio during training (for each batch the duration is sampled from [5, 10] interval for example);</li> <li>Train each model for a specific daytime (for example, one model for calls in the morning, second - for calls at night, etc..);</li> <li>Using the nocall detector for the submission directly.</li> </ul>
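<p>A minimal sketch of the neighbouring-segment boost from the post-processing list is given below. The write-up states that a detection in one 5-second segment increases the probabilities of that bird in the two segments on each side; the boost factor, clipping and variable names here are illustrative assumptions, not the author's code.</p> <pre><code>import numpy as np

def boost_neighbours(probs, threshold=0.3, factor=1.5, reach=2):
    """probs: (n_segments, n_classes) for one soundscape, in time order."""
    boosted = probs.copy()
    hits = probs &gt;= threshold
    for seg, cls in zip(*np.where(hits)):
        lo, hi = max(0, seg - reach), min(len(probs), seg + reach + 1)
        neighbours = list(range(lo, seg)) + list(range(seg + 1, hi))
        boosted[neighbours, cls] = np.clip(probs[neighbours, cls] * factor, 0, 1)
    return boosted

probs = np.random.rand(12, 21) * 0.5       # 12 five-second segments, 21 scored birds
print(boost_neighbours(probs).shape)
</code></pre>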
BirdCLEF 2022
1st place solution models (it’s not all BirdNet)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Kaggle Team, <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a> and all participants </p> <p>As you may know, in our final blend we have used BirdNet along with other models provided by me, <a href="https://www.kaggle.com/ivanpan" target="_blank">@ivanpan</a>, <a href="https://www.kaggle.com/realsleim" target="_blank">@realsleim</a> and <a href="https://www.kaggle.com/selimsef" target="_blank">@selimsef</a></p> <p>In this post I will cover my models and approach</p> <p><strong>Model Architecture</strong></p> <p>I was using SED architecture, proposed and used by <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a> - <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243293" target="_blank">post</a></p> <p>As a backbones I have used:</p> <ul> <li>tf_efficientnet_b3_ns</li> <li>eca_nfnet_l0</li> </ul> <p>I have changed the first stride from (2,2) to (1,1) in order to have larger (in terms of length and number of frequencies ) output of CNN encoder ( I have taken this trick from <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a> <a href="https://www.kaggle.com/competitions/seti-breakthrough-listen/discussion/266385" target="_blank">SETI</a>)</p> <p><strong>Model Training</strong></p> <p>I have trained model on 15 sec chunks and with secondary labels</p> <p>I have used following augmentations:</p> <ul> <li>GaussianNoise</li> <li>PinkNoise</li> <li>OR Mixup on waveforms</li> <li>BackgroundNoise. For training - <a href="https://www.kaggle.com/datasets/mmoreaux/environmental-sound-classification-50" target="_blank">this dataset</a>. For finetuning - esc50 + nocall from soundscapes of 2021 BirdClef Comp</li> </ul> <p>Proposed by <a href="https://www.kaggle.com/selimsef" target="_blank">@selimsef</a> I have used weights (computed by <code>primary_label</code>) for Dataloader and Loss in order to cope with unbalanced dataset (especially for <code>scored_birds</code>) </p> <p>As for Loss I have used simple BCE on <code>clipwise</code> logits</p> <p>I was tracking 3 best checkpoints by LB metric and Validation loss and then averaged 3 model weights (kind of naive SWA)</p> <p><strong>Training stages</strong> </p> <p>For training I have used 2 stage training:</p> <ol> <li>Pretrain on 2021 and 2022 comp data </li> <li>Finetune on data from pretrain BUT filtered by the next rule - <code>Take samples which contain scored_bird in primary_label OR secondary_labels</code></li> </ol> <p><strong>Inference</strong></p> <p>Having SED model I have tried 2 options for inference:</p> <ol> <li>Proposed by <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a> - using AND rule for <code>long</code> and <code>short</code> clipwise predictions. short prediction - 5 sec, long - 15 sec</li> <li>Feed model 15 sec chunk BUT apply head only on centered 5 sec reduced CNN image and use max(framewise, dim=time)</li> </ol> <p>Overall second option worked better for me</p> <p>Choosing threshold. Here I have tried also 2 options:</p> <ol> <li>Use ordinary threshold. 
Optimal values varied for me from 0.2 - 0.3</li> <li>Use quantile threshold, originally proposed by <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">post</a>. Optimal value for <code>quantile_tresh</code> was 0.25</li> </ol> <pre><code>tresh = np.quantile( test_model_probs[:, scored_bird_ids].flatten(), 1 - quantile_tresh, ) </code></pre> <p>For solo model (5 folds) second worked better and for ensembles first worked better</p> <p><strong>Validation</strong></p> <p>I have used 5 CV Stratified training (also <code>maupar</code> sample was splitted on 5 samples in order to have consistent OOF score)<br> Compute LB metric using next prediction scheme:<br> For each validation sample - slice it on pieces -&gt; predict each piece -&gt; max(sample_predictions, dim=pieces)<br> And then compute metric using <code>all_labels=[primary_label] + secondary_labels</code><br> Also I have computed this metric only on samples which contain scored_bird in all_labels and taking into account only scored_birds<br> And finally optimize threshold with step 0.1 </p> <p><strong>Results</strong></p> <ul> <li>tf_efficientnet_b3_ns: Val = 0.87894692; Public = 0.82; Private = 0.78</li> <li>eca_nfnet_l0: Val = 0.88640918; Public = 0.82; Private = 0.78</li> </ul> <p><strong>Inference Kernel</strong> - <a href="https://www.kaggle.com/code/ivanpan/fork-of-fork-of-cls-exp-1-870246-021187-967146/notebook?scriptVersionId=96433080" target="_blank">https://www.kaggle.com/code/ivanpan/fork-of-fork-of-cls-exp-1-870246-021187-967146/notebook?scriptVersionId=96433080</a><br> <strong>GitHub Repo</strong> - <a href="https://github.com/Selimonder/birdclef-2022" target="_blank">https://github.com/Selimonder/birdclef-2022</a></p>
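<p>The "naive SWA" step (averaging the weights of the three best checkpoints) can be sketched as below. This is not the author's code; the checkpoint paths and model constructor are placeholders, and it assumes the checkpoints are plain <code>state_dict</code> files.</p> <pre><code>import torch

def average_checkpoints(model, ckpt_paths):
    """Average several saved state dicts parameter-wise and load the result."""
    state_dicts = [torch.load(p, map_location="cpu") for p in ckpt_paths]
    avg = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts], dim=0)
        avg[key] = stacked.mean(dim=0).to(state_dicts[0][key].dtype)
    model.load_state_dict(avg)
    return model

# hypothetical usage:
# model = average_checkpoints(build_sed_model(), ["best1.pth", "best2.pth", "best3.pth"])
</code></pre>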
BirdCLEF 2022
23rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>It was nice to be on this competition with (agaisnt) you!</p> </blockquote> <p>Thank you for saying so. See you in another competitions.</p> <p>Thresholding strategy is very delicate problem on this competition.<br> I can't exactly tell on which strategy is the best, but I think the one by <a href="https://www.kaggle.com/shinmurashinmura" target="_blank">@shinmurashinmura</a> 's (threshold down)[1] is impressive. This strategy is based on the prior knowledge: "a bird call appears in a segment of a clip, then this bird call is more probable to appear on other segments in the same clip".<br> I think some other strategy including mine are just empirical, but this strategy makes more sense.<br> The author also conducted an ablation study, and find out this strategy substantially improves public LB score by +0.02. The author also made effort to clear label noise by human-in-the-loop, I guess it also affected the score about +0.02-0.03.</p> <p>According to shared solutions, they say using 2021 data as extended dataset sometimes improves the score and sometimes not, so it's not clear this strategy truly does it better.</p> <ul> <li>[1] <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/327187" target="_blank">https://www.kaggle.com/competitions/birdclef-2022/discussion/327187</a></li> </ul> <p><a href="https://www.kaggle.com/hinepo" target="_blank">@hinepo</a> Thanks :)</p> <p>The understanding of the competition metric was definitely one of the reasons to push me up to silver medal.<br> However, it wasn’t enough to get gold or higher rank. I am very impressed with some top rankers solutions where actively decrease threshold on a clip when a bird detected. I think post-processing techniques like this were key to the gold medals.</p> <p>As for PaSST, I on the first place expected to perform better than PANNs, but actually, it wasn’t. In fact, it perform slightly worse than PANNs in a single model performance. However it seems to contribute somewhat in ensembling. (I didn’t tried but I wonder only ensembling with PANNs are enough to score the same as with PaSST.)</p> <p>Finally, thanks to encouraging me with upvoting my discussions. That definitely supported my motivation.</p> <p>Updated: the result of late submission (sub10) shows ensemble without PaSST is slightly better than that includes PaSST (0.7626-&gt;0.7645). 
So, the PaSST have no positive effect on my experiments.</p> <p>Tips: seeing ROC AUC on each species group helps to visualize the model's performance well.<br> However, the CV/LB correlation is poor excluding <code>mid_top5</code>.</p> <p><a href="https://ibb.co/NyRqXx7"><img src="https://i.ibb.co/x7Zbyh3/Screen-Shot-2022-05-25-at-19-54-33.png" alt="Screen-Shot-2022-05-25-at-19-54-33"></a></p> <h2>Appendix: Why Score Drop on Public/Private LBs?</h2> <p>Using LB probe tip II[1], I estimated the score contribution of each species groups.<br> Since the competition is ended, we can know on both public and private LB.</p> <p>As the result, it shows the model's performance for <code>mid_top5</code> is extremely differed between public and private test set.<br> The assumed cause is:</p> <ul> <li>the ratio of ground truth is different</li> <li>feature of the audio is different</li> </ul> <p><strong>Table 1. comparison of estimated LB score per species group</strong></p> <table> <thead> <tr> <th></th> <th>public LB</th> <th>private LB</th> <th>diff</th> </tr> </thead> <tbody> <tr> <td>top5</td> <td>0.684</td> <td>0.684</td> <td>0</td> </tr> <tr> <td>mid_top5</td> <td>0.978</td> <td>0.789</td> <td>-0.189</td> </tr> <tr> <td>mid_low5</td> <td>0.852</td> <td>0.852</td> <td>0</td> </tr> <tr> <td>low6</td> <td>0.7225</td> <td>0.74</td> <td>0.0175</td> </tr> </tbody> </table> <h2>Reference</h2> <ul> <li>[1] <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/322419" target="_blank">https://www.kaggle.com/competitions/birdclef-2022/discussion/322419</a></li> </ul> <h1>TL;DR</h1> <p>My solution features the following.</p> <ul> <li>Local validation:<ul> <li>train with all un-scored 131 species + 66% of scored 21 species, validate with 33% of scored 21 species</li> <li>StratifiedGroupKFold (Group by <code>author</code>)</li></ul></li> <li>Ensemble of PANNs + PaSSTs (GeM pooling (p=3))</li> <li>Thresholding per group (fixed quantile)</li> </ul> <h2>Local Validation Strategy</h2> <ul> <li>All 131 un-scored species are used for training</li> <li>21 scored species were split into 3-folds with <code>StratifiedGroupKFold</code></li> </ul> <p>To see the effect of domain shift, I made sure that the same Author did not appear in both training and evaluation data [1].</p> <p><a href="https://ibb.co/VQrSQdP"><img src="https://i.ibb.co/Tbp8bZF/Screen-Shot-2022-05-25-at-19-36-58.png" alt="Screen-Shot-2022-05-25-at-19-36-58"></a></p> <h2>Model</h2> <p>The model consists of an ensemble of PANNs and PaSSTs.</p> <h3>PANNs</h3> <p>The code for PANNs was adapted from the 2020 6th rank solution [2].<br> The changes are as follows.</p> <ul> <li>time window is set to 20 seconds</li> <li>mixup + cutmix (adapted from a public notebook [3])</li> <li>backbone: ResNet-34</li> <li>use FocalLoss</li> <li>use AdamW</li> <li>train 40-100epoch</li> </ul> <h3>PaSST</h3> <p>The source code was adapted from the official implementation [4]. 
The changes are as follows.</p> <ul> <li>audio-based augmentation such as Gauss noise</li> <li>Knowledge distillation using pseudo-labels from learned PANNs (ResNet-34)</li> <li>Use of FocalLoss</li> <li>Using AdamW</li> <li>train 40 epochs</li> </ul> <h3>About Knowledge Distillation</h3> <p>The loss function is the average of the loss with the pseudo-label as the label and the loss calculated with the original correct label.</p> <p><code>loss(pred, y, y_pseudo_label) = 0.5 * (loss(pred, y) + loss(pred, y_pseudo_label))</code></p> <h2>Ensemble Strategy</h2> <p>Predictions from a total of 8 models (PANNs x4 + PaSST x4) were aggregated by GeM pooling (p=3).</p> <h2>Thresholding Strategy</h2> <p>Following the 2021 second-order solution[5], I fixed the quantile of the predictions and set the threshold [6].<br> Where I changed is that I divide the scored species into the following four groups, and set threshold per each group.</p> <ul> <li>top5: ['skylar', 'houfin', 'jabwar', 'warwhe1', 'yefcan']</li> <li>mid_top5: ['apapan', 'iiwi', 'omao', 'hawama', 'hawcre']</li> <li>mid_low5: ['barpet', 'akiapo', 'elepai', 'aniani', 'hawgoo']</li> <li>low6: ['ercfra', 'hawpet1', 'puaioh', 'hawhaw', 'crehon', 'maupar'])</li> </ul> <p>The thresholds for each group of the final submitted model are as follows. These were optimized for Public LB.</p> <p>top5, mid_top5, mid_low5, low6 = [0.100, 0.550, 0.350, 0.334]</p> <h1>Evaluation Results</h1> <p>The evaluation results for the top submissions, including the final submission, are shared below.</p> <p>Updated: the result of late submission (sub10) shows ensemble without PaSST is slightly better than that includes PaSST (0.7626-&gt;0.7645). So, the PaSST have no positive effect on my experiments.</p> <p><a href="https://ibb.co/vX1ymnS"><img src="https://i.ibb.co/TWcjR3s/Screen-Shot-2022-05-27-at-22-37-59.png" alt="Screen-Shot-2022-05-27-at-22-37-59"></a></p> <h1>What didn't worked</h1> <ul> <li>soft balanced accuracy loss</li> <li>binary classifier</li> </ul> <h2>References.</h2> <ul> <li>[1] <a href="https://www.kaggle.com/code/tatamikenn/birdclef22-meta-sub-clip-60sec-group-by-author/notebook" target="_blank">https://www.kaggle.com/code/tatamikenn/birdclef22-meta-sub-clip-60sec-group-by-author/notebook</a></li> <li>[2] <a href="https://www.kaggle.com/competitions/birdsong-recognition/discussion/183204" target="_blank">https://www.kaggle.com/competitions/birdsong-recognition/discussion/183204</a></li> <li>[3] <a href="https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0" target="_blank">https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0</a></li> <li>[4] <a href="https://github.com/kkoutini/PaSST" target="_blank">https://github.com/kkoutini/PaSST</a></li> <li>[5] <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">https://www.kaggle.com/competitions/birdclef-2021/discussion/243463</a></li> <li>[6] <a href="https://www.kaggle.com/code/tatamikenn/birdclef22-sub7-1-3-6-8-9-10-passtx4-panns-x4/notebook" target="_blank">https://www.kaggle.com/code/tatamikenn/birdclef22-sub7-1-3-6-8-9-10-passtx4-panns-x4/notebook</a></li> </ul> <blockquote> <p>I expected PaSST was used in many of the top solutions, but surprisingly it was not.</p> </blockquote> <p>Yep. As for me, the single best model was PANNs. 
(PaSST, contrary to my expectations, performed only about as well as PANNs).<br> I suspect the model choice and training method were not the critical factors; heavy augmentation strategies and the use of extended data contributed much more in this competition.</p> <p>Thank you for commenting.</p>
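<p>For reference, the GeM (p=3) aggregation of the eight models' predictions described in the ensemble strategy can be written as below. This is a minimal sketch with illustrative shapes, not the author's code.</p> <pre><code>import numpy as np

def gem_pool(preds, p=3.0, eps=1e-8):
    """Generalised-mean pooling over the model axis.

    preds has shape (n_models, n_clips, n_classes); the result is (n_clips, n_classes).
    """
    preds = np.clip(np.asarray(preds, dtype=np.float64), eps, 1.0)
    return (preds ** p).mean(axis=0) ** (1.0 / p)

preds = np.random.rand(8, 10, 21)       # e.g. PANNs x4 + PaSST x4 on 10 clips
print(gem_pool(preds, p=3).shape)       # (10, 21)
</code></pre>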
BirdCLEF 2022
35th Experiment
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi. I am disappointed that I could not keep the top position until the end of the competition.<br> I would like to leave some notes about the experiments I did during the competition.<br> (日本語版は<a href="https://sqrt4kaido.hatenablog.com/entry/2022/05/29/034449?_ga=2.139101755.906933634.1654160869-182752224.1648474566" target="_blank">こちら</a>)</p> <h2>Feature</h2> <p>I considered methods more robust to noise than the melspectrogram, or other conversion methods from the melspectrogram.</p> <ul> <li><a href="https://www.kaggle.com/code/tatamikenn/birdclef22-per-channel-energy-normalization" target="_blank">pcen</a></li> </ul> <p>I experimented with the image with low noise and high expectations, but the score did not improve.</p> <ul> <li>linear spectrogram</li> </ul> <p>Thinking that birdcall features might exist in frequency bands that are difficult for the human ear to hear, I created an image using an unbiased moving average (which I call a linear spectrogram). However, the score did not improve either.</p> <p><img src="https://raw.githubusercontent.com/root4kaido/kaggle-discussion-image/main/BIRDCLEF2022/mel.png" alt="image"></p> <p>This is just a tip, but the melspectrogram with torchaudio looks slightly different from the mel spectrogram with librosa.<br> The same thing happens when taking logs. Please wait to upload the consistent code on kaggle notebook.</p> <p>In the end, the best scores were obtained using the regular melspectrogram. pcen and linear spectrograms were tried in combination with the melspectrogram and by shifting the frequency band used for each channel, but to no avail.</p> <h2>Model</h2> <p>I have considered/implemented three models.</p> <ul> <li>PANNs-based model</li> </ul> <p>This is a <a href="https://www.kaggle.com/code/hidehisaarai1213/introduction-to-sound-event-detection" target="_blank">familiar model</a> for bird competitions. The Linear layer after passing through the backbone was changed to LSTM in order to capture more time-series features. This led to a slight improvement in score.</p> <ul> <li><a href="https://github.com/jfpuget/STFT_Transformer" target="_blank">STFT Transformer</a></li> </ul> <p>CPMP invented this in last year's bird competition.<br> He scored highly using this method in that competition, but in my experiments, I could not surpass PANNs' score.</p> <ul> <li>STFT Swin SED model</li> </ul> <p>This is a model that I wanted to implement from the beginning of the competition. 
It is based on the above STFT Transformer, but is capable of outputting class classification results for each frame.<br> Since the above STFT Transformer cannot output frame-by-frame classification results, I thought that adding a mechanism more suitable for SED tasks with a similar architecture to the STFT Transformer would lead to higher scores.<br> I changed the method of embedding patches to the STFT Transformer, based on <a href="https://arxiv.org/abs/2202.00874" target="_blank">the paper</a> on the SED model using the Swin Transformer.<br> However, I could not exceed the PANNs score.</p> <h2>Class imbalance</h2> <ul> <li>over sampling</li> </ul> <p>There are two methods that have improved scores.</p> <p>Oversampling was applied to wavs of species with fewer than 20 wavs. To prevent overfitting of the oversampled wavs, augmentation was strongly applied only to the oversampled wavs. This improved the score slightly.</p> <p>I used A method called Context-rich Minority Oversampling (<a href="https://arxiv.org/abs/2112.00412" target="_blank">CMO</a>). This method oversamples a small number of classes at the time of CutMix, and was effective when applied to the mel-spectrogram. I tried different cutting methods, such as CutMixing only in the time direction, but the simple CutMixing was the best.</p> <ul> <li>loss</li> </ul> <p>In addition to focal loss, I also examined several effective losses for class imbalance. I implemented <a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf" target="_blank">CBLoss</a>, <a href="https://arxiv.org/abs/2009.14119" target="_blank">ASL</a>, and <a href="https://openaccess.thecvf.com/content/ICCV2021/html/Park_Influence-Balanced_Loss_for_Imbalanced_Visual_Classification_ICCV_2021_paper.html" target="_blank">IBLoss</a>, but in the end I could not achieve an accuracy higher than that of focal loss. Among them, the method called IBloss, which reduces the weight of samples near the decision boundary from half of the total epoch, was implemented with high expectations. However, although the CV score increased, the Public score did not improve.</p> <h2>Local metric</h2> <p>I used the following code to calculate CV scores, but in the end I could not correlate them with LB at all. 
Therefore, my strategy was to calculate this metric score for the 21 classes to be evaluated and the f1 scores (samples) for all 152 classes at various thresholds, and submit the results if they were above the majority.</p> <pre><code>def calc_tpr(true, pred, threshold): true = (true &gt; 0.5) * 1 pred = (pred &gt; threshold) * 1 tp = np.sum(true * pred) fn = np.sum(true * (1-pred)) tpr = tp / (tp + fn + 1e-6) return tpr def calc_tnr(true, pred, threshold): true = (true &gt; 0.5) * 1 pred = (pred &gt; threshold) * 1 tn = np.sum((1-true) * (1-pred)) fp = np.sum((1-true) * pred) tnr = tn / (tn + fp + 1e-6) return tnr class Metric(): def __init__(self): self.tgt_index = list(map(lambda x: CFG.target_columns.index(x), CFG.test_target_columns)) def calc_tgt_f1_score(self, gt_arr, pred_arr, th, epsilon=1e-9): scores = [] for i in self.tgt_index: a = list(gt_arr[:, i]) uni_, counts = np.unique(a, return_counts=True) # posi_weight = counts[0] / len(a) # nega_weight = counts[1] / len(a) posi_weight = 0.5 nega_weight = 0.5 tpr = calc_tpr(gt_arr[:, i], pred_arr[:, i], th) tnr = calc_tnr(gt_arr[:, i], pred_arr[:, i], th) score = posi_weight * tpr + nega_weight * tnr scores.append(score) return np.mean(scores) </code></pre> <h2>Inference</h2> <p>A much lower threshold seemed to be appropriate for this competition, I thought there are two reasons for this.</p> <ul> <li>It is important to increase the TPR even if the FPR is increased since the number of positive classes is quite small in the test data.</li> <li>The decision boundaries are quite narrow, especially for the few classes, and the threshold must be lowered considerably in order to be judged as positive.</li> </ul> <p>Because of the second reason, I thought the majority class would be able to detect a positive class even with a higher threshold value. Therefore, I determined the threshold based on the number of data per class. For example, the threshold for the class with the most training data was set at 0.3, the threshold for the class with the least training data was set at 0.05, and the rest were scaled linearly based on the number of data.</p> <p>There is one more innovation. Initially, when a bird was found in a wav, post-processing was performed by adding 0.2 to the prediction probability of that class for the entire wav. However, for example, the threshold is set to 0.05, adding 0.2 would mean that a bird of the target class is detected in a total of that wav. Therefore, in my reasoning, I only made predictions on a per-sound-file basis for the 19 bird species for which there was little training data (i.e., the threshold value did not exceed 0.2) of the 21 species to be evaluated.</p> <p>Various other post-processing methods were also tried, but they did not improve the score. The following are some examples.</p> <ul> <li>Detecting the presence of something like a monophonic sound in the mel-spectrogram using image processing. 
If no birds are detected, it is assumed that no birds are singing.</li> <li>Use co-occurrence matrices to increase the probability of predicting birds that may be singing with the bird with the highest probability of prediction.</li> </ul> <h2>Other</h2> <p>I created a model targeting latitude and longitude, and for species found only in Hawaii, I determined that the target bird was not singing when the sound was determined to be from outside of Hawaii.<br> The model by itself produced a public score of about 0.6, but combining it with an existing model did not improve the accuracy.</p> <p>For pseudo labels, I tried adding additional secondary_labels using 5fold oof, and conversely, tried removing suspicious ones.<br> None of them worked. I have seen some reports of improved accuracy with manual labeling, so perhaps the original model was not accurate enough.</p> <p>That's all. Thank you very much for reading.</p> <p>A notebook matching librosa and torchaudio's melspectrogram is now available.</p> <p><a href="https://www.kaggle.com/code/nomorevotch/create-the-same-mel-from-librosa-and-torchaudio" target="_blank">https://www.kaggle.com/code/nomorevotch/create-the-same-mel-from-librosa-and-torchaudio</a></p>
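<p>A small sketch of the count-scaled thresholds described in the Inference section: 0.3 for the class with the most training data, 0.05 for the class with the least, and linear interpolation in between. The <code>counts</code> mapping and species codes are hypothetical, not the author's code.</p> <pre><code>import numpy as np

def per_class_thresholds(counts, t_min=0.05, t_max=0.3):
    """Linearly interpolate thresholds between t_min (rarest) and t_max (most common)."""
    species = list(counts)
    n = np.array([counts[s] for s in species], dtype=np.float64)
    scale = (n - n.min()) / max(n.max() - n.min(), 1e-9)
    return {s: t_min + (t_max - t_min) * v for s, v in zip(species, scale)}

# toy counts (hypothetical numbers of training clips)
print(per_class_thresholds({"skylar": 500, "akiapo": 60, "maupar": 1}))
</code></pre>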
BirdCLEF 2022
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, actually bird split was <a href="https://www.kaggle.com/asaliquid1011" target="_blank">@asaliquid1011</a>'s idea</p> <p>We trained all our models using 152 classes, <br> but some models which belong to Group1 made predictions only for birds included in Group1<br> and other models made predictions for birds from Group2</p> <p>Then for each group we simply average the probabilities obtained from the models and apply thresholds as described in the solution (0.05 for birds from Group1 except for "skylar", for it we set 0.35, for birds from Group2 thresholds were selected based on OOFs distribution)</p> <p>Here are raw values obtained for SED w/ focal-loss<br> <code> {'crehon': 0.0610, 'ercfra': 0.0303, 'hawhaw': 0.0386, 'hawpet1': 0.0491, 'maupar': 0.0724, 'puaioh': 0.0340} </code></p> <p>First of all, thanks to Kaggle and Cornell Lab of Ornithology for organizing this competition.</p> <p>I was lucky enough to team-up with <a href="https://www.kaggle.com/asaliquid1011" target="_blank">UEMU</a>, it was a fruitful team-up with lots of ideas coming from the both sides. <br> We've worked hard until the end and thanks to that we managed to secure 3rd place, congratz to all of the winners!</p> <p>Considering the situation, it seems necessary to mention that we didn't use BirdNet model.</p> <p>External data we used:<br> freefield1010, aicrowd2020 and nocall part of BirdCLEF 2021 soundscapes to mix 2022 samples with background noise and impove robustness.</p> <p>Now for our approach, the key points to build strong and reliable pipeline are the following:</p> <ul> <li>Use SED model &amp; training scheme proposed in <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243293" target="_blank">tattaka's 4th place solution</a></li> <li>Use pseudo-labels &amp; hand-labels for small classes in SED model training</li> <li>Use CNN proposed in <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">2nd place BirdCLEF 2021 solution</a></li> <li>Divide scored birds into two sets and use different loss functions to train models for each one</li> <li>Augmentations</li> </ul> <p>Let's get down to our best finding during competition</p> <h3>The bird split</h3> <p>When we merged together, we looked on OOFs predictions produced by our models and found out that SED w/ focal-loss performs very differently compared to mentioned CNN trained w/ BCELoss depending on number of training samples</p> <p>From our observations, SED models w/ focal-loss tend to make more conservative predictions, and due to the loss design they don't miss small classes:</p> <p>Here in blue you can see SED model w/ focal-loss, and in pink CNN model trained w/ BCE loss<br> <img src="https://user-images.githubusercontent.com/57013219/170329125-532a0640-cb54-4a81-9d8a-fadd4721d6ae.png" alt="image"></p> <p>The same can be said about advantage of models which used BCE loss during training over models with focal-loss for the large classes <br> <img src="https://user-images.githubusercontent.com/57013219/170329065-9a9d4da1-1660-46d4-b25e-9419f451f63d.png" alt="image"></p> <p>Therefore, we divided the birds into two groups 
according to the number of data and manual inspection of distribution plots of target-data as above, and used different models for them.</p> <p>It appears that it's optimal to include birds with number of training samples &gt;= 10 to the Group1, and all other birds into Group2.</p> <p>Group1: 14birds ['jabwar', 'yefcan', 'skylar', 'akiapo', 'apapan', 'barpet', 'elepai', 'iiwi', 'houfin', 'omao', 'warwhe1', 'aniani', 'hawama', 'hawcre'], <br> for them we ended up using CNN + SED models which were trained using BCE loss.</p> <p>Group2: 7birds, ['crehon', 'ercfra', 'hawgoo', 'hawhaw', 'hawpet1', 'maupar', 'puaioh'], <br> for this group we chose to use SED w/ focal-loss.</p> <h3>CNN model training, Group1 birds (slime part)</h3> <p>For the details of architecture of CNN model, please refer to <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">2nd place BirdCLEF 2021 solution</a><br> It's worth to note, that we didn't use temporal mix-up mentioned in the solution.</p> <p>Since the CNN model was used only for inference on large enough classes, it allowed us to build reliable validation and monitor metrics for Group1 birds only, for these models we used BCE loss to select best models on validation, however with mix-up augmentation model converged on the last epoch, - this fact allowed us to include some models which were trained on full data in the final ensemble.</p> <h4>Training strategy</h4> <ul> <li>Use two front-ends:<ul> <li>sr: 32000, window_size: 2048, hop_size: 512, fmin: 0, fmax: 16000, mel_bins: 256, power: 2, top_db=None</li> <li>sr: 32000, window_size: 1024, hop_size: 320, fmin: 50, fmax: 14000, mel_bins: 64, power: 2, top_db=None</li></ul></li> <li>Epochs: 40</li> <li>backbone: tf_efficnetnet_b0_ns, tf_efficinetnetv2_s_in21k, resnet34, eca_nfnet_l0 </li> <li>Optimizer: Adam, lr=3e-4, wd=0</li> <li>Scheduler: CosineAnnealing w/o warm-up</li> <li>Labels: use union of primary and secondary labels</li> <li>Startify data: by primary label</li> </ul> <h4>Augmentations</h4> <p>The ones that definitely helped</p> <ul> <li>mix-up (the most impactful one)</li> <li>add background noise (same as 2nd place solution 2021)</li> <li>spec-augment</li> <li>cut-mix (helped, but just a little)</li> </ul> <h4>Didn't work</h4> <ul> <li>augment only scored birds</li> <li>multiply loss for scored bird by 10 </li> <li>use weighted BCE w/ weights proportionally to number of class appearance in dataset</li> <li>use PCEN</li> <li>random power as in <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243351" target="_blank">vlomme's 2021 solution</a>, pitch-shift</li> <li>coord-conv as in <a href="https://www.kaggle.com/competitions/rfcx-species-audio-detection/discussion/220760" target="_blank">2nd place rainforest solution</a></li> <li>"rating" data didn't introduce much of a difference</li> </ul> <h3>SED model training, Group1&amp;2 birds (UEMU part)</h3> <p>The SED model hasn't changed much from the previous 4th solution.<br> The main difference is the addition of some augmentations. </p> <p>As a result, we think that the score has improved by 0.06 or more in Public LB. 
<br> For the Group1 model and the Group2 model, we changed the loss function, and the other settings were not changed.</p> <p>We couldn't find a good CV strategy, so most of the settings are decided by watching at public LB.</p> <h4>Training strategy</h4> <ul> <li>Use two front-ends:<ul> <li>sr: 32000, window_size: 2048, hop_size: 1024, fmin: 200, fmax: 14000, mel_bins: 224</li> <li>sr: 32000, window_size: 1024, hop_size: 512, fmin: 50, fmax: 14000, mel_bins: 128</li></ul></li> <li>Epochs: 30-40</li> <li>Cropsize: 10-15s</li> <li>backbone: seresnext26t_32x4d, resnet34, resnest50, tf_efficientnetv2s</li> <li>loss_functiuon: BCE2wayloss(Group1), BCEFocal2WayLoss(Group2) as in <a href="https://www.kaggle.com/code/kaerunantoka/birdclef2022-ex005-f0-infer" target="_blank">kaeruru's public note</a></li> <li>Optimizer: Adam, lr=1e-3, wd=1e-5</li> <li>Scheduler: CosineAnnealing w/o warm-up</li> <li>Labels: primary label=0.9995, secondary label=0.5000, other=0.0025</li> <li>Startify data: by primary label</li> </ul> <h4>Augmentations</h4> <ul> <li>GaussianSNR</li> <li>Nocall Data of trainsoundscape data in the 2021 comp</li> <li>Spec-augment</li> <li>Random_CUTMIX as in <a href="https://www.kaggle.com/code/kaerunantoka/birdclef2022-ex005-f0-infer" target="_blank">kaeruru's public note</a></li> <li>random power as in <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243351" target="_blank">vlomme's 2021 solution</a></li> </ul> <h4>Other</h4> <ul> <li>How to crop data: <ul> <li>Use Pseudo labeling. We decided the time to crop from the probability distribution estimated by pretrained SED model.</li></ul></li> <li>Oversampling for small samples class: <ul> <li>We split the training files by hand and increase data like <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/327044" target="_blank">5th place solution</a> for some classes('crehon', 'ercfra', 'hawgoo', 'hawhaw', 'hawpet1', 'maupar', 'puaioh',….)</li></ul></li> </ul> <h4>Didn't work</h4> <ul> <li>some augmentations(mixup, Randomlowpassfilter, pitch shift, etc)</li> <li>use PCEN</li> <li>use weighted BCE w/ weights proportionally to number of class appearance in dataset</li> <li>use 'rating' data</li> <li>use 'eBird_Taxonomy_v2021' data</li> </ul> <h3>How to choose thresholds:</h3> <ul> <li><p>The default threshold for every of 21 birds was decided by watching LB (in the end we used 0.05 for Group1 birds, but 0.04 gives &gt;0.82 in private LB).</p></li> <li><p>We manually set the threshold for "skylar" bird as 0.35, since our models trained with BCE loss predict it reliably.</p></li> <li><p>The probability distribution of OOFs prediction for non-targetdata when using focal-loss differs depending on the bird (see below figure), so we set the threshold for each bird from Group2 depending on the distribution. 
<br> We adopted the value of 91 percentile of the distribution for these birds.</p> <p><img src="https://user-images.githubusercontent.com/57013219/170404672-c95e0539-21cd-4378-a4cd-75a8c756bdf4.png" alt="image"></p></li> </ul> <h3>Final results</h3> <p>For our best public LB submission we used 8 CNN models, 8 SED models for Group1 birds &amp; 12 SED models for Group2 birds (single model here is either model trained on one fold or on the whole data)</p> <p>We've also applied time-smoothing post-processing to Group1 birds as in <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">2nd place BirdCLEF 2021 solution</a>, but it didn't help on private LB, though boosted +0.01~ on public LB</p> <table> <thead> <tr> <th>Name</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>CNN model (no augs, single fold)</td> <td>0.7715</td> <td>0.7278</td> </tr> <tr> <td>CNN model (augs, single fold)</td> <td>0.7761</td> <td>0.7359</td> </tr> <tr> <td>CNN ensemble (CNN only w/ BCE loss)</td> <td>0.8327</td> <td>0.7898</td> </tr> <tr> <td>SED model (4fold average w/ BCEFocal2wayloss)</td> <td>0.8339</td> <td>0.7823</td> </tr> <tr> <td>Combine UEMU's SED w/ focal-loss &amp; slime's CNN w/ BCE Loss using bird split mentioned above</td> <td>0.8532</td> <td>0.8052</td> </tr> <tr> <td>Same as above, but add more CNNs w/ BCE loss and SED w/ BCE loss to group1 birds (best public LB, sub1)</td> <td>0.8750</td> <td>0.8126</td> </tr> <tr> <td>Safe submission (lower thresholds, sub2)</td> <td>0.8556</td> <td>0.8071</td> </tr> <tr> <td>Best private LB (add some models which didn't work on public LB)</td> <td>0.8707</td> <td>0.8274</td> </tr> </tbody> </table> <p>We also had around 20 subs which score &gt; 0.82 on private LB and &gt; 0.87 on public LB, but we didn't select them since we chose two subs in the following manner, - one is the best public LB and the other one with much lower thresholds to prevent the shake-up, this sub also happend to be placed 3rd, we're happy :)</p> <p>Ask your questions! :)</p> <p>UPD: we further checked the performance of some nets</p> <table> <thead> <tr> <th>Name</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>ensemble SED only w/ BCE loss (models fine-tuned for Group1)</td> <td>0.8139</td> <td>0.7563</td> </tr> <tr> <td>ensemble SED only w/ focal-loss</td> <td>0.8507</td> <td>0.8135</td> </tr> <tr> <td>ensemble CNN only w/ BCE loss (models fine-tuned for Group1)</td> <td>0.8360</td> <td>0.7678</td> </tr> </tbody> </table> <p>The conclusion is that we need some way to determine proper thresholds for predictions in order to see the true perfomance of the models.</p> <p>We're sure that bird split can bring 0.02-0.03 perfomance boost w/ proper thresholds.</p> <p>UPD: 06.08.2022<br> We published the inference kernel: <a href="https://www.kaggle.com/code/asaliquid1011/birdclef2022-3rd-place-inference" target="_blank">https://www.kaggle.com/code/asaliquid1011/birdclef2022-3rd-place-inference</a><br> My part can be found in the following github repository: <a href="https://github.com/dazzle-me/birdclef-2022-3rd-place-solution" target="_blank">https://github.com/dazzle-me/birdclef-2022-3rd-place-solution</a></p>
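<p>To make the inference-time bird split concrete, here is a minimal sketch of how the two groups could be routed to their respective ensembles and thresholds. The Group2 values follow the raw per-bird thresholds quoted above where available (the <code>hawgoo</code> value is a placeholder, as it was not listed), and the probability dictionaries are illustrative, not the team's code.</p> <pre><code>import numpy as np

GROUP1 = ['jabwar', 'yefcan', 'skylar', 'akiapo', 'apapan', 'barpet', 'elepai',
          'iiwi', 'houfin', 'omao', 'warwhe1', 'aniani', 'hawama', 'hawcre']
# raw per-bird thresholds quoted above for the focal-loss SED models;
# 'hawgoo' was not listed, so its value here is a placeholder
GROUP2_TH = {'crehon': 0.0610, 'ercfra': 0.0303, 'hawgoo': 0.05, 'hawhaw': 0.0386,
             'hawpet1': 0.0491, 'maupar': 0.0724, 'puaioh': 0.0340}

def decide(bird, p_bce, p_focal):
    """p_bce / p_focal: dicts of probabilities from the two ensembles."""
    if bird in GROUP1:
        th = 0.35 if bird == 'skylar' else 0.05   # skylar gets its own threshold
        return p_bce[bird] &gt;= th
    return p_focal[bird] &gt;= GROUP2_TH[bird]

# toy usage with random probabilities
p_bce = {b: np.random.rand() for b in GROUP1}
p_focal = {b: np.random.rand() * 0.1 for b in GROUP2_TH}
print({b: decide(b, p_bce, p_focal) for b in GROUP1[:2] + list(GROUP2_TH)[:2]})
</code></pre>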
BirdCLEF 2022
44th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your question. To be honest I was not able to create a trusted cv this time, so it was just a gut feeling. However, it is generally believed that bagging improves generalization performance, and I trusted it.</p> <p>Thank you for organizing such an interesting competition. It was a great learning experience for me.</p> <h2>Short Summary</h2> <p>My solution is SED, based on the one published by <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> in BirdCLEF2021.<br> This ranking was achieved without modifying the model, but by devising a new data set and changing the backborn.<br> A big thank you to hidehisaarai1213 for publishing this wonderful notebook!</p> <p>train : <a href="https://www.kaggle.com/code/hidehisaarai1213/pytorch-training-birdclef2021-starter" target="_blank">https://www.kaggle.com/code/hidehisaarai1213/pytorch-training-birdclef2021-starter</a><br> simple inference : <a href="https://www.kaggle.com/code/hidehisaarai1213/pytorch-inference-birdclef2021-starter" target="_blank">https://www.kaggle.com/code/hidehisaarai1213/pytorch-inference-birdclef2021-starter</a><br> infer between chunk : <a href="https://www.kaggle.com/code/hidehisaarai1213/birdclef2021-infer-between-chunk" target="_blank">https://www.kaggle.com/code/hidehisaarai1213/birdclef2021-infer-between-chunk</a></p> <h2>Preprocessing</h2> <p>As you all know, the lable this time was Imbalanced, so I adopted the approach of acquiring the sound source multiple times if the length of the sound source was short. <br> I have expressed the number of labels on three levels as follows. <br> Few tripled and VERY FEW quadrupled their data and studied. <br> As I will explain later, the data used for training is a section of the sound source(20sec), so I hypothesized that it could be used multiple times by changing the starting point. </p> <p><img src="https://user-images.githubusercontent.com/55369709/170328390-b056fc7e-a580-4497-9b77-71b6334f90b3.png" alt=""></p> <p>MANY : 'skylar', 'houfin', 'jabwar', 'warwhe1', 'yefcan', 'apapan','iiwi', 'hawcre' <br> FEW : 'hawama', 'omao', 'barpet', 'akiapo', 'elepai', 'aniani' <br> VERY FEW : 'hawgoo', 'ercfra', 'hawhaw', 'hawpet1', 'puaioh', 'crehon','maupar' </p> <h2>Augmentation</h2> <p>Weak augmentation for MANY and strong augmetation for FEW. Strong means that it transforms with a high probability. 
</p> <pre><code>augment_strong = Compose([ AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5), AddGaussianSNR(p=0.5), Gain(min_gain_in_db=-12, max_gain_in_db=12, p=0.5), TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5), AddBackgroundNoise(sounds_path=noise_dir, min_snr_in_db=3, max_snr_in_db=30, p=0.5), AddShortNoises(noise_dir, p=0.5), PitchShift(min_semitones=-4, max_semitones=4, p=0.5), Shift(min_fraction=-0.5, max_fraction=0.5, p=0.5), ]) augment_weak = Compose([ AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.2), AddGaussianSNR(p=0.2), Gain(min_gain_in_db=-12, max_gain_in_db=12, p=0.2), TimeStretch(min_rate=0.8, max_rate=1.25, p=0.2), AddBackgroundNoise(sounds_path=noise_dir, min_snr_in_db=3, max_snr_in_db=30, p=0.2), AddShortNoises(noise_dir, p=0.2), PitchShift(min_semitones=-4, max_semitones=4, p=0.2), Shift(min_fraction=-0.5, max_fraction=0.5, p=0.2), ]) </code></pre> <h2>Training</h2> <p>First of all, if I run the above hidehisaarai1213's notebook without preprocess with backbone as tf_efficientnet_b0_ns, I get 0.6406 for public and 0.6843 for private. </p> <ul> <li>epoch -&gt; 30</li> <li>loss -&gt; BCEFocal2WayLoss</li> <li>backbone -&gt; tf_efficientnet_b0_ns</li> <li>inf threshold -&gt; 0.02</li> </ul> <p>From here, adding the above preprocess and running it, public was 0.7025 and private was 0.6916. Since public went up significantly, I determined that the strategy of increasing FEW and VERY FEW was so effective. </p> <p>Adding a second label further increased the score from this point. 0.7053 for public, 0.6871 for private.</p> <p>That I tried but it did not lead to an increase in PUBLIC scores. but I did use it in an ensemble.</p> <ul> <li>more increase FEW and VERY FEW. Increased FEW by 4x and VERY FEW by 7x. </li> <li>Increased train sound source from 20 sec to 40 sec.<ul> <li>I thought the longer section would contain more call, but it didn't WORK.</li></ul></li> <li>Change backbone to tf_efficientnet_b3_ns.</li> </ul> <p>What worked best. </p> <ul> <li>"nocall" was added to label and increased from 152 to 153 classes.<ul> <li>nocall data found from here <a href="https://www.kaggle.com/datasets/kami634/ff1010bird-duration10" target="_blank">https://www.kaggle.com/datasets/kami634/ff1010bird-duration10</a></li> <li>This brings the score up to private 0.7473 public 0.7572</li></ul></li> </ul> <h2>Choosing subs</h2> <p>All scores above are obtained from fold0 only. The combination of fold and weight, I reached public 0.7730 private 0.7600, but I could not choose this sub because I considered that this was overfitting to public, using only fold0 and fold2 out of 5 unfolds. <br> I chose a sub that mixed all folds equally to try to reduce overfitting to the public. </p>
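<p>The tier-based oversampling can be sketched as below: FEW species are repeated 3x, VERY FEW species 4x, and those tiers are flagged so that the strong augmentation chain (<code>augment_strong</code> above) is applied to them while MANY species get <code>augment_weak</code>. The dataframe columns and helper are assumptions, not the author's code.</p> <pre><code>import pandas as pd

FEW = {'hawama', 'omao', 'barpet', 'akiapo', 'elepai', 'aniani'}
VERY_FEW = {'hawgoo', 'ercfra', 'hawhaw', 'hawpet1', 'puaioh', 'crehon', 'maupar'}

def oversample(df):
    """Repeat FEW rows 3x and VERY FEW rows 4x, flagging them for strong augmentation."""
    rows = []
    for _, r in df.iterrows():
        if r.primary_label in VERY_FEW:
            repeats, strong = 4, True
        elif r.primary_label in FEW:
            repeats, strong = 3, True
        else:
            repeats, strong = 1, False
        for _ in range(repeats):
            rows.append({**r.to_dict(), "strong_aug": strong})
    return pd.DataFrame(rows)

toy = pd.DataFrame({"filename": ["a.ogg", "b.ogg"], "primary_label": ["skylar", "maupar"]})
print(oversample(toy)["primary_label"].value_counts().to_dict())   # {'maupar': 4, 'skylar': 1}
</code></pre>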
BirdCLEF 2022
4th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for another sound competition. My Kaggle journey started with your bird competitions, and I am becoming a Grandmaster in a bird competition as well. It's sad that a model trained on non-public data won; because of that, it simply couldn't be beaten. </p> <p>I didn't have much time to participate, so my solution is basically an overfit of my models from previous years: <a href="https://www.kaggle.com/competitions/birdsong-recognition/discussion/183269" target="_blank">2020</a><br> <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243351" target="_blank">2021</a></p> <p>Differences:</p> <ol> <li>First I train on all birds, then I finish training on the 21 scored birds.</li> <li>I select a different LB threshold for each species.</li> </ol> <p>Otherwise, my solution repeats my public solutions of previous years.<br> 1 fold = 0.78 private LB</p> <p>No, I trained all the layers. I tried replacing the head with a 21-class head, but it didn't help much.<br> I also balanced the number of examples in each epoch.</p> <p>Thank you. This victory was very important, because I became a solo Grandmaster)</p> <p>I wrote simple code:</p> <pre><code>for bird in birds:
    tmp = all_bird_list[all_bird_list.bird == bird]
    if len(tmp) &lt; 10:
        a = 10
    elif len(tmp) &lt; 30:
        a = 30
    else:
        a = 60
    bird_lists.append(tmp.sample(a, replace=True))
bird_list = pd.concat(bird_lists).reset_index(drop=True)
</code></pre> <p>I didn't use weights since I removed them)<br> But in the first step, I trained on all birds, which is the same.<br> For me, the ImageNet pretraining allowed the model to converge faster, and pretraining on all birds reduced overfitting because of the small dataset of Hawaiian birds.</p>
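<p>The second difference above (a different LB threshold per species) can be applied at inference time roughly as follows. This is a hedged sketch, not the author's code: the threshold values and the <code>probs</code> DataFrame layout (rows = 5-second chunks, columns = species) are assumptions.</p> <pre><code>import pandas as pd

# illustrative per-species thresholds, picked by probing the leaderboard
per_bird_threshold = {'skylar': 0.30, 'houfin': 0.25, 'maupar': 0.05}
DEFAULT_THRESHOLD = 0.10

def apply_per_species_thresholds(probs: pd.DataFrame) -&gt; pd.DataFrame:
    """Turn per-chunk probabilities into boolean predictions with a per-bird cutoff."""
    preds = {}
    for bird in probs.columns:
        cutoff = per_bird_threshold.get(bird, DEFAULT_THRESHOLD)
        preds[bird] = probs[bird] &gt; cutoff
    return pd.DataFrame(preds, index=probs.index)
</code></pre>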
BirdCLEF 2022
50th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to every competitor and host that held this competition.<br> This is my solution and what I tried.</p> <h2>Things that I watched out</h2> <p>As I have posted in <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/321668" target="_blank">this discussion</a>, I believe that three aspects of this competition needed to be considered.</p> <ol> <li><p>Get high CV score<br> This is a given, but a better model should be used to improve cross-validation scores.<br> I have the impression that many people used the improved PANNS model used in the Notebook published by <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> in the 2021 Bird Competition. I also used this model and reached CV:0.789 (f1 Score), Public:0.739, and Private:0.711.<br> Then, in the middle of the competition, I switched to use another model (that I'll explain below.) and improved to CV:0.805(f1 Score), Public:0.7634, Private:0.7210 (best with a single model).</p></li> <li><p>Measures against Domain Shift <br> There were several models with high CV but low LB. This is probably due to the existence of a domain shift between training data and test data. As a countermeasure, Augmentation, which inserts noise, was effective. In particular, Gaussian Noise and Pink Noise worked effectively.<br> Models trained with these augmentations got relatively low CV scores, but the Public scores improved.</p></li> <li><p>Selecting the Appropriate Threshold<br> In this competition, since there were no Test Soundscapes (data collected under the same conditions as the test data), the threshold had to be set manually. This appropriate threshold was usually far from the value that was optimal during the Validation phase of the model, so tuning was an important step. Since the Public Score was the only available factor to choose threshold, we had no choice but to overfit the Public Score (despite this, LB did not shake as much as we expected).</p></li> </ol> <h2>Solution</h2> <p><img src="https://user-images.githubusercontent.com/69585273/170442235-ef15296d-33dd-475c-80fc-9e3482ca1278.png" alt="solution"></p> <h3>Single model</h3> <p>As figure shows, the sounddata was converted into a 20-second segmented mel-spectrogram, and then Augmentation (Normalize, Pink Noise) was added.<br> Augmentation (model1: Normalize&amp;PinkNoise, model2: Normalize&amp;GaussianNoise&amp;RandomVolume) was added to the input, and the features were extracted using ResNet101d(model1) and ResNeXt50d_32x4d(model2).<br> The output is a batch size × channel × frequency × time (four-dimensional).<br> Then, the features are compressed in the frequency and time directions using GeM pooling, and the result was returned to a two-dimensional tensor of batch size × channel.<br> Then fc layer is applied to obtain a 152-dimensional output. <br> I also applied MixUp as augmentation with a probability of 0.25. I believe that the robustness of the model has been improved, albeit only by a small margin. 
<br> As a loss function, I used Focal loss based on BCEWithLogitsLoss (<a href="https://www.kaggle.com/competitions/rfcx-species-audio-detection/discussion/213075" target="_blank">https://www.kaggle.com/competitions/rfcx-species-audio-detection/discussion/213075</a> ). <br> I also used the SAM optimizer based on Adam (<a href="https://github.com/davda54/sam" target="_blank">https://github.com/davda54/sam</a> ).<br> Changing the optimizer to SAM improved the public score of the Sound Event Detection model by about 0.02, with all other experimental conditions held fixed.</p> <h3>Ensemble</h3> <p>Instead of passing the model-by-model average of the output predictions through the threshold, the logical OR or logical AND of the true/false values obtained after thresholding was<br> submitted as the final prediction result. By applying an ensemble that takes the logical OR, I obtained an output with a private score of 0.7292 from predictions whose private scores are 0.7176 and 0.7008 (we believe this worked effectively).</p> <h2>What did not work</h2> <ul> <li>Using complex models as the backbone (e.g. Swin Transformer) </li> <li>PaSST (<a href="https://github.com/kkoutini/PaSST" target="_blank">https://github.com/kkoutini/PaSST</a>) </li> <li>Forcibly applying multi-head attention to the obtained features </li> <li>Using pseudo labels</li> </ul>
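<p>A small sketch of the thresholded OR/AND ensemble described in the Ensemble section (array shapes and the threshold value are assumptions; the OR variant trades precision for recall, the AND variant does the opposite):</p> <pre><code>import numpy as np

def or_ensemble(prob_list, threshold=0.3):
    """Threshold each model's probabilities first, then take the element-wise logical OR."""
    return np.logical_or.reduce([p &gt; threshold for p in prob_list])

def and_ensemble(prob_list, threshold=0.3):
    """Same, but a bird must pass the threshold in every model."""
    return np.logical_and.reduce([p &gt; threshold for p in prob_list])
</code></pre>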
BirdCLEF 2022
571st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! Ah ok, that makes sense—I had heard that term thrown around but wasn’t sure what it meant exactly.</p> <p>It's probably a bit silly for me to make this note given how low my ranking was. But this was my first real kaggle competition, and I personally learned a lot, so I thought I'd write down some reflections—if for nothing else than for my present and future self. :)</p> <p>First: thanks to everyone who competed and shared their thoughts and questions, and especially to the hosts of this competition for putting so much effort in fostering community focused on using AI/ML for scientific research. It was a lot of fun to participate, and I'm looking forward to doing more kaggle competitions in the future.</p> <h2>Dataset</h2> <p>I first preprocessed the dataset into log magnitude spectrograms, with linear frequency bins from 1-10 kHz . The final resolution was a 128x256 spectrogram per 5 seconds of audio. For augmentation during training, I randomly masked out portions of the frequency and time axis, inspired by <a href="https://arxiv.org/abs/1904.08779" target="_blank">SpecAugment</a>.</p> <h2>Model</h2> <p>I saw that <a href="https://www.kaggle.com/kaerururu" target="_blank">@kaerururu</a> shared a <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/318081" target="_blank">well performing solution</a>, and that there was a ton of work that might be reusable from previous competitions. But instead of reusing their approaches, I decided to first try doing my own thing, originally planning to go back and try their ideas if I had time (but of course I ran out of time!).</p> <p>My first baseline model (which ended up being the best) was a simple 1D convolution over the time axis of the spectrograms as a feature extractor, followed by max pooling over time, then flattening the channels and frequency axis into a linear layer that projects onto the # of classes. </p> <pre><code>class DumbBirdClassifier(nn.Module): def __init__(self): super(DumbBirdClassifier, self).__init__() self.conv = nn.Sequential( nn.Conv1d(kernel_size=4, stride=2, padding=1, in_channels=128, out_channels=128), nn.ReLU(), nn.Conv1d(kernel_size=4, stride=2, padding=1, in_channels=128, out_channels=128), nn.ReLU(), nn.Conv1d(kernel_size=4, stride=2, padding=1, in_channels=128, out_channels=128), nn.ReLU(), nn.Conv1d(kernel_size=4, stride=2, padding=1, in_channels=128, out_channels=128), nn.ReLU(), ) self.pool = lambda x: x.max(-1).values self.linear = nn.Sequential( nn.Linear(in_features=128, out_features=128), nn.ReLU(), nn.Linear(in_features=128, out_features=NUM_CLASSES), nn.Sigmoid(), ) def forward(self, x): return self.linear(self.pool(self.conv(x))) </code></pre> <p>I reasoned that collapsing the time axis with max pooling would make the model robust to different lengths of audio. The model must make evaluations on 5 sec-long clips, but it could be useful to be able to give it longer clips of audio during training, since only a 5 sec clip from a training recording would likely not include all or any of the labeled birds, and therefore add noise to the learning signal.</p> <p>That was my theory at least. 
In practice, I found that training this model on 30 sec clips did not perform better on the leaderboard than training it directly on 5 sec clips. </p> <p>It seems many other solution reused <a href="https://www.kaggle.com/code/hidehisaarai1213/introduction-to-sound-event-detection/notebook" target="_blank">SED models</a>, which sounds somewhat similar to this idea, except that it actually works--so I will be reading a lot more about that before BirdCLEF 2023 :)</p> <p>I also tried replacing my spectrograms and 1D convolutional feature extractor that I trained from scratch with the feature extractor of <a href="https://pytorch.org/audio/0.10.0/models.html#wav2vec2model" target="_blank">a pretrained Wav2Vec 2.0 model</a>. But finetuning didn't seem to perform better than my original baseline model.</p> <h2>Training</h2> <p>One question I struggled with was how to use the 90% of training data that didn't include any of the scored birds. I tried training a model on the full 152 classes, then finetuning on the just the 21 scored birds. But that didn't perform better on the leaderboard than just training directly on the 21 scored birds.</p> <p>The other huge issue I had was dealing with class imbalance. I read quite a bit about this problem, and tried several things: reweighing loss based on class frequency, <a href="https://www.kaggle.com/code/residentmario/undersampling-and-oversampling-imbalanced-data/notebook" target="_blank">oversampling</a> based on average or max class frequency, using <a href="https://arxiv.org/abs/1708.02002" target="_blank">focal loss</a> instead of BCE -- and combinations of those things. Oversampling seemed to perform the best, but in all cases the models generally performed well on frequent birds, and mostly ignored the less frequent ones.</p> <p>All models I trained overfit. I split the data randomly 80/20 and took the model that had the highest validation score (computed by replicating <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a>'s <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/314999" target="_blank">fantastic description of the score</a>). </p> <h2>Evaluation</h2> <p>I'm still a bit confused by how the evaluation data is formatted. Is each 1 minute clip just a concatenation of random 5 second clips, or are they continuous recordings? Are there birds in the clips that aren't present in one of the 21 scored birds? Is it possible that a 5 sec clip has no bird vocalizing at all? Is the distribution of birds at all similar to training, or are they somehow evenly distributed, or forced to be more even by just ignoring labels of frequent birds? So many questions that I probably should have asked in the discussion forum--though I'm not sure if the hosts could answer.</p> <h2>Surprises</h2> <p>Many solutions operated on Mel Spectrograms. I find this odd, since <a href="https://en.wikipedia.org/wiki/Mel_scale" target="_blank">the melscale</a> is (roughly) based on what humans can hear, not birds. I do regret not using a log frequency axis, such as <a href="https://en.wikipedia.org/wiki/Constant-Q_transform" target="_blank">Constant-Q</a> (<a href="https://librosa.org/doc/latest/generated/librosa.cqt.html" target="_blank">librosa</a>).</p> <p>Also, most solutions I've read about so far used 2D convolutions for feature extraction, often taking backbones pretrained on image tasks. 
I'm surprised by this for two reasons: (1) I'm sure there's probably <em>some</em> out-of-tune birds out there, but I don't think there's that much of a need for translational invariance on the frequency axis and (2) I wouldn't expect a model pretrained on images of real things to transfer well to spectrograms. It seems my intuition is off on both of these things.</p> <p>I was baffled to read <a href="https://www.kaggle.com/shinmurashinmura" target="_blank">@shinmurashinmura</a>'s <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/318999" target="_blank">note on how changing the threshold</a> from 0.5 -&gt; 0.01 increased their leaderboard score from 0.56 -&gt; 0.71. For my models, I found that there was a moderate increase on my score when a threshold change from 0.5 -&gt; 0.3 (0.51 -&gt; 0.57), but none beyond that. I think my confusion here might be due to how little I understood the evaluation data. </p> <h2>Conclusions</h2> <p>This competition was overwhelming, but quite fun for me, and a huge learning experience! I look forward to learning even more from reading carefully through the other solutions, and applying these learnings next time.</p>
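<p>For reference, the preprocessing described in the Dataset section (log-magnitude spectrogram with linear frequency bins from 1-10 kHz, resized to 128x256 per 5 seconds) could look roughly like this. The FFT/hop parameters and the use of OpenCV for resizing are my assumptions, not the author's exact settings:</p> <pre><code>import cv2
import librosa
import numpy as np

def clip_to_spectrogram(y, sr=32000, n_fft=1024, hop_length=313,
                        fmin=1000, fmax=10000, out_shape=(128, 256)):
    """Log-magnitude STFT restricted to ~1-10 kHz, resized to (128, 256)."""
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
    band = (freqs &gt;= fmin) &amp; (freqs &lt;= fmax)          # keep only the 1-10 kHz rows
    S_db = librosa.amplitude_to_db(S[band], ref=np.max).astype(np.float32)
    return cv2.resize(S_db, (out_shape[1], out_shape[0]))  # cv2 expects (width, height)
</code></pre>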
BirdCLEF 2022
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>This is my implementation, but I don't think I did anything special other than following <a href="https://rwightman.github.io/pytorch-image-models/feature_extraction/" target="_blank">timm's style</a>.</p> <pre><code>import timm import torch # create convnet backbone = timm.create_model( "convnext_tiny", pretrained=True, num_classes=0, global_pool="", in_chans=1, drop_path_rate=0.1, ) num_features = backbone.num_features # inference with torch.inference_mode(): mel = torch.rand((1,1,64,500)) # B C H W y_pred = backbone(mel) # =&gt; (1, 768, 2, 15) </code></pre> <p>Thanks <a href="https://www.kaggle.com/shinmurashinmura" target="_blank">@shinmurashinmura</a>, and congratulations on becoming a Competitions Master too!</p> <p>Many thanks to Kaggle and Cornell Lab of Ornithology for hosting such an interesting competition.</p> <p>My solution is a reimplementation of the BirdCLEF 2021 <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">2nd place solution</a>.<br> Special thanks to the new baseline team for publishing the amazing solution. And thanks to <a href="https://www.kaggle.com/julian3833" target="_blank">@julian3833</a> for sharing <a href="https://www.kaggle.com/code/julian3833/birdclef-21-2nd-place-model-submit-0-66" target="_blank">the great baseline!</a></p> <h1>Model</h1> <p>My final submission is an ensemble of 9 models of different seed/fold/backbone, each model using last year's 2nd place solution method.</p> <p>The backbones were:</p> <ul> <li>4x <code>eca_nfnet_l0</code></li> <li>2x <code>tf_efficientnetv2_s_in21k</code></li> <li>2x <code>resnet34</code></li> <li>1x <code>convnext_tiny</code></li> </ul> <h1>Oversampling</h1> <p>To increase the number of files for minority classes(N&lt;20), I split the training files by hand.<br> For example, <code>maupar</code> appears only in one file, but the file contains many different types of its songs and calls.<br> I cut it into segments of 10-30 seconds using a waveform editor (<a href="https://www.audacityteam.org/" target="_blank">Audacity</a>) and created multiple training data from a single file.</p> <p>The cut audio files were further augmented by applying effects such as noise reduction/reverb/gain/etc. to each segment in the waveform editor. 
Ultimately, I created 5-20 additional sample files per target minority class.</p> <h1>Training</h1> <ul> <li>Melspec<ol> <li>window_size=1024, hop_size=320, fmin=50, fmax=14000, power=2, mel_bins=64, top_db=80 (nfnet, effnet, convnext)</li> <li>window_size=2048, hop_size=512, fmin=16, fmax=16386, power=2, mel_bins=256, top_db=80 (resnet)</li></ol></li> <li>Data Augmentation(Waveform)<ul> <li>Background Noise (2020 nocall, 2021 nocall, freefield1010)</li> <li>GaussianNoise</li> <li>PinkNoise</li> <li>NoiseInjection</li> <li>RandomVolume</li> <li>TimeShift</li></ul></li> <li>Data Augmentation(Image)<ul> <li>SpecAug</li> <li>CutOut</li> <li>Lowpass</li> <li>TranslateY (shift in freq dim, simulated pitch shift)</li></ul></li> <li>Optimizer<ul> <li>AdamW, LR=1e-3, weight decay=1e-5, Cosine Anearling with warmup</li></ul></li> <li>Loss<ul> <li>BCEWithLogitsLoss</li> <li><a href="https://www.kaggle.com/code/julian3833/birdclef-21-2nd-place-model-train-0-66?scriptVersionId=88985374" target="_blank">Weighted loss by rating</a></li></ul></li> <li>25-30 epochs</li> <li>Training target is the union of primary and secondary labels.</li> <li>All backbones were <strong>pretrained on BirdCLEF2021</strong> data.</li> <li><strong>micro-f1@0.1</strong> for <code>scored_birds</code> was used for validation. Although a bit odd, the correlation between this score and Public/Private LB was not bad.</li> </ul> <h1>Post Processing</h1> <ul> <li>Probability averaged over previous and next chunks.</li> <li>Correct thresholds using <a href="https://github.com/ChristofHenkel/kaggle-birdclef2021-2nd-place/blob/main/configs/pp_binary_ext3_1.py" target="_blank">call/nocall binary classifier</a> probabilities.</li> <li>Threshold optimization by percentile per clesses.</li> </ul> <pre><code># thresholds per class threshold = pd.Series(np.percentile(test_df[SCORED_BIRDS].values, 90, axis=0), index=SCORED_BIRDS) </code></pre> <h1>What Did Not Work</h1> <ul> <li>SED models - could not make it past 0.72 for me</li> <li>CoordConv</li> <li>MC Dropout</li> <li>Label Smoothing</li> <li>Undersampling for majority classes</li> <li>ViT / Swin backbone</li> </ul>
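<p>A minimal sketch of the first post-processing step ("probability averaged over previous and next chunks"). The DataFrame layout (a <code>file_id</code> column, chunks already sorted in time within each file) and the 0.25/0.5/0.25 weights are assumptions for illustration:</p> <pre><code>import numpy as np
import pandas as pd

def smooth_over_neighbour_chunks(df, prob_cols, w_prev=0.25, w_self=0.5, w_next=0.25):
    """Blend each 5s chunk's probabilities with its previous/next chunk in the same file."""
    out = df.copy()
    for _, idx in df.groupby('file_id').groups.items():
        p = df.loc[idx, prob_cols].to_numpy()
        prev_p = np.vstack([p[:1], p[:-1]])   # previous chunk (first chunk repeats itself)
        next_p = np.vstack([p[1:], p[-1:]])   # next chunk (last chunk repeats itself)
        out.loc[idx, prob_cols] = w_prev * prev_p + w_self * p + w_next * next_p
    return out
</code></pre>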
BirdCLEF 2022
65th place writeup - bronze
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was an awesome competition and I learned a lot from it. This is due to the hosts, to my teammates, and the kagglers who shared ideas!</p> <p>A few things about our solution:</p> <ul> <li>Ensemble of SED models with two different backbones: seresnext26tn_32x4d and tf_efficientnet_b0_ns, both from timm, 5 folds each, 30 epochs each run</li> <li>Each backbone used a different seed</li> <li>Both backbones used primary and secondary labels in training</li> <li>Everything was trained on google colab notebooks</li> <li>The optimal threshold for the ensemble was 0.25</li> <li>Audio augmentations and image augmentations</li> <li>Cutmix and mixup</li> <li>CosineAnnealingLR (Tmax=7 for seresnext and Tmax=4 for effnet). Couldn't find any improvements using other schedulers</li> <li>seresnext was trained with the first 5 seconds of each audio file</li> <li>tf_efficientnet_b0_ns was trained with seconds 5 to 10 when audio file has more than 10 seconds and with first 5 seconds otherwise</li> <li>We used <a href="https://arxiv.org/pdf/1708.02002.pdf" target="_blank">focal loss</a>, tuned its parameters, and found that using alpha = 0.75 and gamma = 5 gave a huge boost on our models</li> <li>From the moment we started using gamma = 5 in focal loss we had good CV/LB correlation for most cases, using <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a> <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/314999" target="_blank">method</a></li> <li>We also monitored another computation of f1 score, similar to the one above but only for scored birds. So we could track the learning of scored birds, which was slower than the other birds, but in the end of training both CV strategies and calculated f1 were quite similar</li> </ul> <p>Things that didn't work:</p> <ul> <li>Noise reduction</li> <li>PCEN (Per-Channel Energy Normalization)</li> <li><a href="https://arxiv.org/abs/1711.02512" target="_blank">GeM</a></li> <li>AdamW</li> <li>Pseudo labels via oof predictions (we tried this a lot but couldn't make it work)</li> <li>Reducing dataset size or selecting samples from it</li> </ul> <p>Again, thanks to my teammates who contributed a lot to this score:<br> <a href="https://www.kaggle.com/gabrielvinicius" target="_blank">@gabrielvinicius</a> <br> <a href="https://www.kaggle.com/lucasdmr" target="_blank">@lucasdmr</a> <br> <a href="https://www.kaggle.com/paulojunqueira" target="_blank">@paulojunqueira</a> <br> <a href="https://www.kaggle.com/felipemandrade" target="_blank">@felipemandrade</a></p> <p>We put a lot of time, effort and dedication to achieve this result.</p> <p>In my opinion, what made this problem really hard was the weak labels. We tried to deal with this using pseudo-labels but we weren't successful on this.</p>
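<p>Since the tuned focal loss was the single biggest boost mentioned above, here is a minimal BCE-based focal loss sketch with those alpha/gamma values. This is my own illustrative implementation, not necessarily the exact one the team used:</p> <pre><code>import torch
import torch.nn.functional as F

class BCEFocalLoss(torch.nn.Module):
    """Binary focal loss on logits (alpha=0.75, gamma=5, as quoted in the write-up)."""
    def __init__(self, alpha: float = 0.75, gamma: float = 5.0):
        super().__init__()
        self.alpha = alpha
        self.gamma = gamma

    def forward(self, logits, targets):
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
        p = torch.sigmoid(logits)
        p_t = p * targets + (1 - p) * (1 - targets)            # prob assigned to the true label
        alpha_t = self.alpha * targets + (1 - self.alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** self.gamma * bce).mean()
</code></pre>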
BirdCLEF 2022
6th place solution (human-in-the-loop)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's trial and error.<br> I ran one line at a time like below.</p> <pre><code>thresh[label_dic["skylar"]] = thresh[label_dic["skylar"]] + 0.2
thresh[label_dic["hawcre"]] = thresh[label_dic["hawcre"]] + 0.2
thresh[label_dic["aniani"]] = thresh[label_dic["aniani"]] + 0.2
thresh[label_dic["hawama"]] = thresh[label_dic["hawama"]] + 0.2
#thresh[label_dic["warwhe1"]] = thresh[label_dic["warwhe1"]] + 0.2
thresh[label_dic["yefcan"]] = thresh[label_dic["yefcan"]] + 0.2
#thresh[label_dic["jabwar"]] = thresh[label_dic["jabwar"]] + 0.2
thresh[label_dic["houfin"]] = thresh[label_dic["houfin"]] + 0.2
thresh[label_dic["apapan"]] = thresh[label_dic["apapan"]] + 0.2
</code></pre> <p>Mainly, the thresholds for the majority species were raised.</p> <p>First I want to thank the competition hosts and the Kaggle team for organizing such an interesting competition. And thank you to all the Kagglers.</p> <h2>Overview</h2> <ul> <li>Only using a SED model</li> <li>Make clean data with <a href="https://www.kaggle.com/competitions/happy-whale-and-dolphin/discussion/319813" target="_blank">human-in-the-loop</a> for the weak labels</li> <li>Heavy post-processing (public LB 0.79 → 0.84)</li> </ul> <p>Perhaps my SED is the same model as yours.<br> <strong>Clean data</strong> and <strong>post-processing</strong> are important in my solution.</p> <h2>Clean data (important)</h2> <p>I made <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/239911" target="_blank">annotations</a> in BirdCLEF 2021, and this was effective. But hand labeling is time consuming.<br> Instead of hand labeling, I used a human-in-the-loop method in this competition.</p> <ul> <li>First, train SED with the primary labels</li> <li>Second, extract high-confidence SED predictions on the training data</li> <li>I listened to these predictions and judged whether they were correct</li> <li>Human verification (I answer yes or no)</li> </ul> <p>I made 2,000 clean samples with this human-in-the-loop process. 
These samples contain only scored birds.</p> <h2>Training data</h2> <p>I used 3 types of data.</p> <ul> <li>HIL_data (= human-in-the-loop data)<ul> <li>contains <a href="https://www.kaggle.com/code/amandanavine/hawaiian-bird-species" target="_blank">external data</a></li></ul></li> <li>other_data (non-scored bird data, about 130 species)<ul> <li>first 3 sec of audio</li> <li>The label is primary_label</li></ul></li> <li>pseudo_data (scored birds, not containing HIL audio)<ul> <li>first 3 sec of audio</li> <li>pseudo_label = 0.25 × primary_label + 0.25 × 1st_generation_model + 0.5 × 2nd_generation_model </li></ul></li> </ul> <p>The training data was mixed in the ratio below.<br> This was the best ratio.</p> <p>HIL_data : other_data : pseudo_data = 1 : 4 : 1</p> <h2>SED</h2> <ul> <li>Backbone: eca_nfnet_l0, dm_nfnet_f0</li> <li>Only using "clipwise_output" (training &amp; inference)</li> <li><a href="https://www.kaggle.com/code/shinmurashinmura/birdclef2022-basic-augmentation/notebook" target="_blank">Basic augmentation</a><ul> <li>Time shift</li> <li>Add pink noise and brown noise</li> <li>Mix in other audio datasets (ESC-50: frog, rain, airplane, crackling_fire)</li></ul></li> <li><a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/307880" target="_blank">SpecAugment++</a> (the mixed mel-spec comes from ESC-50)</li> <li>Label smoothing (alpha=0.1)</li> <li>Optimize with Adam<ul> <li>lr=0.0001</li> <li>CosineAnnealing (T=10)</li></ul></li> <li>Epochs: 30 (about 14 hours on Colaboratory)</li> <li>Loss: BCEWithLogitsLoss</li> <li>Input (train &amp; inference): 5 sec</li> <li>STFT resolution: 250x254<ul> <li>window_size: 1024</li> <li>hop_size: 630</li> <li>mel_bins: 250</li> <li>fmin: 50</li> <li>fmax: 14000</li></ul></li> </ul> <h2>Ensemble</h2> <ul> <li>Ensembling had only a small impact in this competition</li> <li>I compared voting vs averaging.<ul> <li>Voting scored slightly better.</li></ul></li> <li>Finally, I used voting with 10 models.</li> </ul> <h2>Post-Processing (important)</h2> <p>My post-processing is similar to the <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/326979" target="_blank">12th place solution</a>.</p> <h4>prediction time shift (=prediction_TA)</h4> <p>prediction_now = now + 0.5 × next_5sec + 0.25 × next_10sec + 0.5 × previous_5sec + 0.25 × previous_10sec</p> <h4>TTA</h4> <p>Let t be the target time. I used 3 types of inputs.</p> <ul> <li>[t, t+5]</li> <li>[t-1, t+4]</li> <li>[t+1, t+6]</li> </ul> <p>These outputs are used in voting.</p> <h4>Threshold optimization (=ThreshO)</h4> <ul> <li>First, I tuned the threshold like <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/318999" target="_blank">this</a>. This threshold is constant across classes.</li> <li>Second, I tuned the threshold for each species.</li> </ul> <h4>Threshold down (=ThreshD)</h4> <p>If the models detect a certain bird once in the 1-minute audio, I lower its threshold for that audio and run inference once more, as in the snippet below. 
</p> <pre><code>score = prediction(thresh)  # 1min prediction
down_species = ["hawhaw", "hawpet1", "maupar", "ercfra", "crehon", "puaioh", "hawcre"]
for bird in down_species:
    if np.max(score[bird]) &gt; thresh[bird]:
        thresh[bird] = thresh[bird] - decrease
score = prediction(thresh)  # 1min prediction once more
</code></pre> <h2>Ablation study</h2> <table> <thead> <tr> <th>post-processing</th> <th><strong>Public</strong>/Private LB</th> </tr> </thead> <tbody> <tr> <td>none</td> <td><strong>0.79</strong>/0.76</td> </tr> <tr> <td>prediction_TA</td> <td><strong>0.78</strong>/0.75</td> </tr> <tr> <td>prediction_TA + TTA</td> <td><strong>0.78</strong>/0.74</td> </tr> <tr> <td>prediction_TA + TTA + ThreshO</td> <td><strong>0.82</strong>/0.77</td> </tr> <tr> <td>prediction_TA + TTA + ThreshO + ThreshD</td> <td><strong>0.84</strong>/0.80</td> </tr> </tbody> </table> <h2>Not working</h2> <ul> <li>AST (performed about the same as SED)</li> <li>PCEN (performed about the same as mel-spec)</li> <li><a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/307880" target="_blank">ImportantAug</a></li> <li>ArcFace as a few-shot learning method</li> <li><a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243360" target="_blank">STFT Transformer</a></li> <li>Co-occurring species using <a href="https://www.iucnredlist.org/species/22708583/128101101" target="_blank">these maps</a></li> <li>Using the BirdCLEF 2021 dataset</li> </ul>
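<p>For completeness, a sketch of the prediction time shift (prediction_TA) formula above, applied to one 1-minute file whose chunk probabilities are stacked in time order (the zero-padding at the edges is my assumption):</p> <pre><code>import numpy as np

def prediction_time_shift(probs, w1=0.5, w2=0.25):
    """probs: (n_chunks, n_classes). Adds 0.5x the +-5s neighbours and 0.25x the +-10s neighbours."""
    out = probs.copy()
    for shift, weight in [(1, w1), (2, w2), (-1, w1), (-2, w2)]:
        shifted = np.roll(probs, shift, axis=0)
        if shift &gt; 0:
            shifted[:shift] = 0    # rows that wrapped around from the end
        else:
            shifted[shift:] = 0    # rows that wrapped around from the start
        out = out + weight * shifted
    return out
</code></pre>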
BirdCLEF 2022
70th place solution.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a good competition for me and I learned a lot from it. Thanks to the hosts, my teammates, and the Kagglers who shared ideas!</p> <p><a href="https://www.kaggle.com/code/yoyobar/birdclef-2022-submit-70th-pubilc-0-77-private-0-71/settings?scriptVersionId=95983080" target="_blank">BirdCLEF 2022 submission, 70th place, public: 0.77, private: 0.71</a></p> <p>Here are the techniques we used.</p> <p>Useful:</p> <ol> <li>Cut the input into 4 pieces, predict on each, and take the mean.</li> <li>Used 5-second clips for training.</li> <li>Mix two audio clips into one.</li> <li>Gaussian noise and pink noise.</li> <li>Using a percentage-based threshold method (it helped us avoid testing on the LB so many times). [1]</li> </ol> <p>Used, but unsure whether it helped:</p> <ol> <li>PCEN.</li> <li>AdamW.</li> <li>Sample equalization.</li> </ol> <p>Useless:</p> <ol> <li>Focal loss (both CV and LB dropped).</li> </ol> <p>Our final submission got CV: 0.82, public: 0.77, private: 0.7124.<br> All training was done on my 3080 Ti.</p> <h6>#</h6> <p>Non-technical things</p> <p>This was the first competition of my life, and I made some friends through it.</p> <p>I met some obstacles: for example, many of my submissions got a 49~51 public score [2] and I did not know why, and I joined someone else's team because of some accidents [3]. <br> But it all worked out in the end.</p> <p>[1] <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">2nd place solution.</a><br> [2] <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/318999" target="_blank">The lower threshold has a good score.</a><br> [3] <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/321584" target="_blank">Communication within the team.</a></p>
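<p>The percentage-based threshold in point 5 can be sketched as follows (the percentile value and the DataFrame layout, rows = chunks and columns = species, are assumptions; the idea follows the 2nd place solution referenced as [1]):</p> <pre><code>import numpy as np
import pandas as pd

def percentile_thresholds(probs: pd.DataFrame, pct: float = 90) -&gt; pd.DataFrame:
    """Mark, per species, only the most confident (100 - pct)% of chunks as True."""
    cutoffs = np.percentile(probs.to_numpy(), pct, axis=0)   # one cutoff per species column
    return probs &gt; cutoffs                                   # broadcasts column-wise
</code></pre>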
BirdCLEF 2022
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>If the provided labels were stronger, i.e. include time tags for the beginning and end of the bird call, everything would be super easy: just do a sound segmentation model assuming that at each moment of time only a single bird may be present at the most.</p> <p>The weakness of labels screws this simple picture. Ideally, it would be preferable to set an approach dealing with the provided annotation… One thing, as you mentioned, may be using an attention mechanism to pool a single vector out of generated sequence of vectors. The issue here, though, is how to incorporate a multi-label paradigm into it. I'm not sure at the moment… The nature of AF loss is the maximization of class separation, and expecting two labels as a prediction doesn't give a reasonable convergence(</p> <p>Another way is computing the loss in the same way as for strong labels, but using multi-head <strong>attention to weight the loss across the generated sequence</strong>. It can be seen as the following. Let's assume that there are two birds present in the record, and AF is low within these S1 and S2 segments if corresponding labels are considered as GT (S1-L1, S2-L2), and is high outside. The attention should perform a soft selection of segments S1 and S2 as considered concepts and assign them to labels L1 and L2. It will minimize the total loss. This concept is somewhat similar to Maskformer, and you can use it as a starting point for thinking, but it is not exactly the same.</p> <p>The thing I thought of is a way to reweight the loss along the sequence, i.e. the loss is computed for parts where the corresponding birds are expected to be, while other parts are having a small contribution…</p> <p>Thanks so much, <a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a>. I'm happy that I got it this time. I quite remember our hard work at the end of BirdCLEF 2021 and how close we were to the gold… I expected that you would be interested in BirdCLEF 2022.</p> <p>Thanks. <strong>In this competition CV strategy for threshold identification doesn't exist</strong>. Let me explain why. <br> (1) In BirdCLEF competitions there is a tradition to have a huge domain gap between train and test data. Test data is of poor quality with large background noise and 5s chunk annotation. At the same time, train data is often of exceptional quality with a sequence-level annotation (chunks selected during evaluation may not have bird calls). One may try to emulate test data by adding a high level of noise to val and using PL to identify nocall parts to ensure that a specific 5s chunk includes a bird call. Also if secondary birds are percent, it becomes more sophisticated… Though this procedure should be calibrated with LB to set the level of noise at least. Such CV strategy could work in BirdCLEF 2020 competition, but not here because of the following.<br> (2) <strong>The host doesn't score all chunks and all species</strong> but does cherry-picking, i.e. ignore nocall cases and considers only a few options for each chunk. It is like if one heard a birdcall and is asking if it is A or B species. 
Since <strong>this procedure is unknown</strong>, the threshold should be calibrated based on LB( Though it is a kind of cheating, and the host is getting an overestimated model performance because we "overfit" the procedure used to select scored species both in public and private test set.</p> <p>During training I was computing some F1 score, but to assess the model performance I mostly paid attention to the model output at the test data, like one in my post, because of the issue (1): we do not need a model that performs well without noise, we need a model that works on test data. So listening to the test audio and visual inspection of the model output on it is not a bad thing here. After participating in competitions like BirdCLEF or PANDA, where validation is quite illusional, one gets instincts on model assessment and selection… <strong>The key is following the methodology, i.e. focusing on the things that make sense.</strong> Though one should understand when LB or CV or both should be ignored, it's another key.</p> <p>Thank you <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> </p> <h3>Summary</h3> <ul> <li><strong>Constant Q-transform</strong></li> <li>Multistep training with PL</li> <li>Pretraining on 2022+2021 data with noise from Rainforest</li> <li>Finetuning on 2022 data with weighted sampling (w~N^0.5)</li> <li>Sequence-based model</li> <li>Single model performance <strong>0.7971/0.7859</strong> private/public LB, while two models give <strong>0.7964/0.7936</strong></li> </ul> <h3>Introduction</h3> <p>Congratulations to all participants and thanks to the organizers for making this competition possible. It is my 3rd BirdCLEF competition, and the first gold medal I got in it. Though, this time I again followed the tradition of joining the competition close to the end… with the first sub 5-6 days before the deadline( Something should be changed in this life.</p> <p>I'll take this opportunity and describe some of the ideas I came up with while working on the challenge. I hope they may be interesting for other participants and organizers as well. My work, to a large extent, is based on my <a href="http://ceur-ws.org/Vol-2936/paper-141.pdf" target="_blank">BirdCLEF 2021 paper</a>. Also, some details may be found in my <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243343" target="_blank">BirdCLEF 2021</a> and <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183258" target="_blank">2020 Cornell Birdcall</a> writeups.</p> <h3>Data</h3> <p>In this year's competition there were two main challenges to address: <strong>(1) considerable domain mismatch with very noisy test data</strong> and (2) <strong>long-tail class distribution with a few examples of rare classes</strong>. </p> <p>To address the first challenge, I used a traditional <strong>pseudo label (PL) 2-step training</strong>: in the first pass the model learns where the signal from birds is located, and on the second pass a heavy noise is applied. With segmentation labels the model knows where to look for the true signal under a severe noise. As a noise source <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/data" target="_blank">rainforest data</a> is used in addition to pink noise and signal weakening.</p> <p>To address the second challenge and make the model robust to different bird calls I pretrained it on a large <strong>2021+2022 BirdCLEF dataset</strong> (the size of 2022 data is ~ 5 times smaller). 
During fine-tuning I kept the backbone frozen and trained only the head on 2022 data. I assigned all unscored classes to "unknown" label and performed weighted sampling with probability conditioned on the number of samples of each class as w~N^0.5.</p> <p>One of the novel things I used in my solution, in comparison to traditional audio models, is <strong>Constant Q-transform</strong> instead of mel-scale spectrograms and even pcen. The quality of the produced spectrograms is much better, as shown in the figure below: lower impact of noise, high level of details in the high-frequency domain, high contrast, and easy mapping for sequence models (without the need to care about padding added in FFT). However, I didn't have time for a quantitative comparison of the model performance. CQT1992v2 is a very effective Pytorch (GPU) implementation of CQT from nnAudio.Spectrogram library and can be easily integrated into a model (make sure to disable gradients for CQT). During training I used log transformation of transform output: <code>x = (64*x + 1).log()</code>.<br> <img src="https://i.ibb.co/Trv7Qkb/spectrogram.png" alt=""></p> <pre><code>self.qtransform = CQT1992v2(sr=32000, fmin=256, n_bins=160, hop_length=250, output_format='Magnitude', norm=1, window='tukey',bins_per_octave=27,) </code></pre> <h3>Model</h3> <p>I used <a href="https://github.com/facebookresearch/semi-supervised-ImageNet1K-models" target="_blank">semisupervised pretrained</a> ResNeXt50 and MiT-B2 transformer, <a href="https://github.com/NVlabs/SegFormer" target="_blank">Segformer </a> backbone. The last conv layer is followed by a convolution, collapsing the feature map into a sequence of vectors vector (nemb=1024), with a Transformer layer, and a head, producing sequence and clip outputs using the attention mechanism (somewhat similar to SED models):</p> <pre><code>class AttHead(nn.Module): def __init__(self, n_in, n_out): super().__init__() self.attn = nn.Conv1d(n_in,n_out,1) self.cla = nn.Conv1d(n_in,n_out,1) def forward(self, x): if len(x.shape) == 4: x = x.flatten(1,2) attn = self.attn(x) cla = self.cla(x) x = (torch.softmax(attn,-1)*cla).sum(-1) return x, attn, cla </code></pre> <p>Initially, I believed that using MiT transformer backbone with a large receptive field is important: it is able to look into the entire frequency domain at one (instead of a small window for conv net) and compare bird calls in the clip across the time. But it appeared to be not really true, and my favorite ResNeXt50 performed nearly the same as MiT B2… both CV and LB. Though, in both cases using a transformer head quite helps based on my initial tests. </p> <h3>Training</h3> <p>I used multistep training: (1) pretrain the model on 2022+2021 data and generate PL; (2) train the model on 2022+2021 data with a high level of noise added and additional segmentation loss based on PL (global labels for selected chunks are also adjusted based on PL in case if there is no a bird call); (3) take the model from step 2 and freeze the backbone, finetune on 2022 data with weighted sampling considering 21+1 classes, and generate new PL; (4) the same as 3 but noise and segmentation losses are added. </p> <p>At all stages MixUp augmentation (with label max and alpha of 2, i.e. the probabilities are close to 0.5) is applied to waves. Each stage is started with using 5s chunks (32 epochs) and is finished with a few epochs with 10-15s chunks. 
I use Focal loss with gamma=1 (with a bug fix to accept smooth labels).</p> <h3>Inference</h3> <p>The inference is performed on entire audio files (while training is done on 5s clips followed with fine-tuning on 10-15s clips). I use sequence level output, illustrated in the image below for the test audio clip. It is split into 5s chinks, and the predictions are selected if the maximum output of the particular class reaches the selected threshold. If one compares the plot with similar plots from my previous reports, the model performance improvement is quite clear. Now the model is able to clearly distinguish bird calls even in very noisy audio. For postprocessing the model output I was using the following: <code>x = (torch.softmax(3*attn,-2)*torch.sigmoid(3*cla))</code>. Pay attention to the dimension, which is the class rather than the sequence dimension: if the model is paying more attention to a specific class at a particular moment, the signal should be enhanced accordingly. Also, I'm using temperature rescaling.<br> <img src="https://i.ibb.co/sKXxh8J/pred-test.png" alt=""></p> <p>The final submission is composed of 2 models (MiT and ResNeXt50) and is scored as 0.7964/0.7936 at public/private LB. The best single model is scored as 0.7971/0.7859. In this competition, the main focus is bird classification, and no credit is given for call/nocall separation, in contrast to 2020 and 2021 competitions. So the best scores could be achieved at the selection of very low thresholds. If more reasonable thresholds, capable of nocall separation, are used, the performance drops by ~0.05.</p> <h3>Things I didn't have time to make work</h3> <ul> <li>Incorporation of ArcFace loss into attention pooling head (dealing with sequence and global labels).</li> <li>Performing metric learning and clustering to address the issue with a few training examples for rare classes</li> </ul> <p>I think those two things might be the key to getting to the top of the LB.</p> <p>Thanks <a href="https://www.kaggle.com/muhammadabbasshareef" target="_blank">@muhammadabbasshareef</a> </p> <p>You are very welcome, <a href="https://www.kaggle.com/tjamali" target="_blank">@tjamali</a> </p> <p>Thank you so much <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> </p>
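<p>As a usage note for the CQT front-end above, a minimal way to wrap CQT1992v2 inside a model is sketched below. This is my own illustration under the parameters quoted in the write-up; nnAudio is assumed to be installed, and the gradient handling and log scaling follow the description, x = (64*x + 1).log():</p> <pre><code>import torch
from nnAudio.Spectrogram import CQT1992v2

class CQTFrontEnd(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.qtransform = CQT1992v2(sr=32000, fmin=256, n_bins=160, hop_length=250,
                                    output_format='Magnitude', norm=1,
                                    window='tukey', bins_per_octave=27)
        for p in self.qtransform.parameters():   # make sure the CQT kernels are not trained
            p.requires_grad_(False)

    def forward(self, wave):                     # wave: (batch, samples)
        with torch.no_grad():
            x = self.qtransform(wave)            # (batch, n_bins, time) magnitude spectrogram
        return (64 * x + 1).log().unsqueeze(1)   # log scaling + channel dim for the backbone
</code></pre>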
BirdCLEF 2022
8th place solution. And gold on last attempt
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It was my first Kaggle competition on which I spent a lot of time. I'm honor to be part of it, big thanks to hosts, Kaggle and all participants.</p> <h2>First step</h2> <ol> <li><p>I started with <a href="https://www.kaggle.com/kaerunantoka" target="_blank">@kaerunantoka</a> public <a href="https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0" target="_blank">notebook</a> with change mel spectrogram hop length and input shape [224, 512] and add <strong>class weights</strong> to a submission.<br> Weights array is 500 divided to amount of each birds species, and clamp it to max value 10. Then I multiply model output to weights array. <strong>0.77</strong> Public LB.</p></li> <li><p>I tried this approach with secondary labels with 0.3, 0.4 and 0.5 labels. And with ensembling I'v got <strong>0.79</strong> on LB</p></li> </ol> <h5>Augmentations</h5> <p>For waveform:</p> <pre><code>Compose([OneOf( [AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.011, p=1), NoiseInjection(p=1, max_noise_level=0.04)], p=0.4), PitchShift(min_semitones=-4, max_semitones=4, p=0.1), Shift(min_fraction=-0.5, max_fraction=0.5, p=0.1), Gain(min_gain_in_db=-12, max_gain_in_db=12, p=0.2), Normalize(p=1.) ]) </code></pre> <p>For spectrogram:</p> <pre><code>torchaudio.transforms.FrequencyMasking(24) torchaudio.transforms.TimeMasking(96) </code></pre> <h5>Model</h5> <p>SED with tf_efficientnet_b0_ns backbone</p> <h5>Training</h5> <ul> <li>Epochs = 30</li> <li>LR = 0.001</li> <li>weight_decay = 0.0001</li> <li>dropout = 0.4</li> <li>Loss: Focal BCE Loss</li> <li>Optimizer: Adam (betas=(0.95, 0.999))</li> <li>Scheduler: CosineAnnealingLR</li> </ul> <h5>Validation</h5> <p>I validate models on 7 folds CV with f1-macro metric. Thresholds for f1: [0.5, 0.1, 0.15, 0.20, 0.25, 0.30, 0.35, 0.4, 0.45, 0.5]. Best checkpoints chosen by 0.3 and 0.5 thresholds.</p> <h5>What didn't work on this step</h5> <ol> <li>Noise reduction</li> <li>Oversampling rare birds</li> <li>Trim long silence with noise reduction</li> <li>Weighted loss</li> <li>3 channels input, where 1st channel is power_to_db, 2nd melspectrogram, 3rd normalized melspectrogram</li> </ol> <h2>Second step</h2> <ol> <li>I manually trim all segments without bird's sounds from the scored audios, except skylar and houfin, on them I processed only 70 records per class. </li> <li>Splited data to 15 seconds chunks for audios with length less than 1 minute and 30 seconds chunks for more than 1 minute. 
Got ~40.000 records.</li> <li>Used weights arrays with 0.25-0.75 power to reduce their impact to inference.</li> <li>Training on this data with previous pipeline and ensemble with previous models gives me <strong>0.81</strong> on LB</li> </ol> <h5>Data preprocessing</h5> <p>Same</p> <h5>Augmentations</h5> <p>Same</p> <h5>Model</h5> <p>SED with tf_efficientnet_b0_ns backbone</p> <h5>Training</h5> <ul> <li>Epochs = 60</li> <li>LR = 0.0013</li> <li>weight_decay = 0.0001</li> <li>dropout = 0.4</li> <li>Loss: Focal BCE Loss</li> <li>Optimizer: Adam (betas=(0.95, 0.999))</li> <li>Scheduler: CosineAnnealingLR</li> </ul> <h5>What didn't work on this step</h5> <ol> <li>PaSST model</li> <li>AST model</li> <li>Linear head models</li> <li>PaSST preprocessing</li> </ol> <h2>Third step</h2> <ol> <li>I used similar with AST preprocessing.</li> <li>Every epoch from all data randomly choose up to 300 records for every class.</li> <li>Train on random 5 seconds crop, validate on first 5 seconds.</li> <li>Secondary labels 0.4</li> <li>Class weights array clamped with max value 8. And used with power 0.6. </li> <li>Used mixup 0.4 for first 15 epochs, mixup 0.07 for 16-22 epochs, and no mixup for others.</li> <li>Ensembling of this approach with previous gives me <strong>0.82</strong> LB</li> </ol> <h5>Data preprocessing</h5> <pre><code>waveform, sr = ta.load(filename) waveform = crop_or_pad(waveform, sr=SR, mode=self.mode) waveform = waveform - waveform.mean() waveform = torch.tensor(self.wave_transforms(samples=waveform[0].numpy(), sample_rate=SR)).unsqueeze(0) fbank = ta.compliance.kaldi.fbank(waveform, htk_compat=True, sample_frequency=SR, use_energy=False, window_type='hanning', num_mel_bins=self.melbins, dither=0.0, frame_shift=9.7) fbank = (fbank - self.norm_mean) / (self.norm_std * 2) </code></pre> <p>Mean and std calculated for all train dataset.</p> <h5>Model</h5> <p>SED with tf_efficientnet_b0_ns backbone</p> <h5>Augmentations</h5> <p>Same</p> <h5>Training</h5> <ul> <li>Epochs = 85</li> <li>mixup 0.4, when epochs &lt; 15</li> <li>LR = 0.0008</li> <li>weight_decay = 0.0001</li> <li>dropout = 0.4</li> <li>Loss: Focal BCE Loss</li> <li>Optimizer: Adam (betas=(0.95, 0.999))</li> <li>Scheduler: CosineAnnealingLR</li> </ul> <h5>Postprocessing</h5> <ul> <li>Mean-median averaging of predictions</li> </ul> <pre><code> full_med = np.median(full_events, axis=1) full_mean = np.mean(full_events, axis=1) full_events = np.mean(np.array([full_med, full_mean]), axis=0) </code></pre> <ul> <li>Max adder.</li> </ul> <pre><code> logits_max = full_events.max(0) for jk in range(full_events.shape[1]): if logits_max[jk] &gt; threshold * 2.5: full_events[:, jk] += threshold * 0.5 </code></pre> <h2>Story of gold medal</h2> <p>When there were 2 days left until the end of the competition, I thought about using a pseudo labeling. And separate my data by 0.8 threshold to remove all noisy records. Only ~15.000 left from ~40.000. Then I trained a new model on this data. Because there were not enough time I trained model on all train data and validate it on some random part and random part of only scored birds. <br> All training and preprocessing parameters was the same with previous.</p> <p>It was last day of competition and only 5 submissions. 
So I lost the first 3 because I forgot to call model.eval() :)<br> For the 4th attempt I used the single new model and got <strong>0.79</strong> on the LB.<br> The last one was the mean of my best previous attempt and this new pseudo-labeling model.</p> <p>It gave me <strong>0.79</strong> private, while my previous best private was <strong>0.78</strong>.</p> <p>Thanks!</p> <p>Weighted loss didn't work for me here. Multiplying by class weights is something like the threshold tuning other participants did, except that instead of lowering the thresholds I made the predictions higher.</p> <p>With this method I didn't tune thresholds manually; the weight is always derived directly from the number of training samples for each class.</p>
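<p>The class-weight trick from the first step (500 divided by the per-species count, clamped at 10, later softened with a 0.25-0.75 power) can be sketched like this; the column name and the exact power value are assumptions:</p> <pre><code>import numpy as np
import pandas as pd

def class_weight_vector(train_df: pd.DataFrame, power: float = 0.75, clamp: float = 10.0):
    """Inverse-frequency weights to multiply into the model output before thresholding."""
    counts = train_df['primary_label'].value_counts().sort_index()  # order matches sorted class names
    weights = (500.0 / counts).clip(upper=clamp) ** power
    return weights.to_numpy(dtype=np.float32)
</code></pre>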
BirdCLEF 2022
9th place summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Cornell Lab of Ornithology and Kaggle for hosting this interesting competition. I also would like to thank my teammates <a href="https://www.kaggle.com/truonghoang" target="_blank">@truonghoang</a> and <a href="https://www.kaggle.com/gwanghan" target="_blank">@gwanghan</a> for the great collaborative teamwork during the competition.</p> <p>Our solution is an ensemble of 1 SED model and 1 melspectrogram classification model, which score 0.76 and 0.77 respectively on the private LB. The ensemble gave us 0.79.</p> <h2>SED</h2> <ul> <li>Backbone: tf_efficientnet_b5_ns.</li> <li>Train only on the first 5s</li> <li>Inference on 5s</li> <li>Since the backbone is powerful, we use quite a lot of augmentation in the hope that it will help close the domain gap between the training data and the test data. The augmentations include audio augmentation, melspec augmentation, mixup of 2-3 samples and cutout.</li> </ul> <h2>Melspectrogram classification</h2> <ul> <li>Backbone: resnest50d_1s4x24d</li> <li>Train on 7s random crops.</li> <li>Inference on 7s</li> <li>Augmentation: we use simple augmentation</li> </ul> <pre><code> NoiseInjection(max_noise_level=0.04, sr=SAMPLE_RATE),
 PitchShift(max_range=3, sr=SAMPLE_RATE),
 RandomVolume(limit=4),
</code></pre> <ul> <li>Training: <ul> <li>Round 1: We train several models and ensemble them to create an oof prediction for each 7s crop.</li> <li>Round 2: We combine the ground-truth label and the oof prediction to modify the target of each 7s crop, then train on the modified target.</li></ul></li> </ul> <pre><code>if oof_prob[primary_bird] &gt; 0.5:
    target[primary_bird] = 1.0
elif prob[primary_bird] &gt; 0.1:
    target[primary_bird] = 0.9975
elif prob[primary_bird] &gt; 0.01:
    target[primary_bird] = 0.5
else:
    target[primary_bird] = 0.2

if oof_prob[secondary_bird] &gt; 0.5:
    target[secondary_bird] = 0.9975
elif oof_prob[secondary_bird] &gt; 0.1:
    target[secondary_bird] = 0.8
else:
    target[secondary_bird] = 0.0025
</code></pre> <h2>Ensemble</h2> <p>We found that the public LB score is very sensitive to the threshold, and the optimal thresholds for the two models are very different, so a weighted average does not bring much improvement in our ensemble. We ended up multiplying their probabilities and setting the top 33% highest confidence scores in the test set to True, the rest to False.</p> <p>P/S: Congrats to my hard-working teammate <a href="https://www.kaggle.com/truonghoang" target="_blank">@truonghoang</a> on becoming a Competitions Master.</p> <ul> <li>So I add another column (conf) to submission.csv, then the top 33% highest-conf rows get target=True.<br> <img src="https://i.ibb.co/dPxwghm/Screenshot-from-2022-05-25-14-40-05.png" alt=""></li> <li>I tried b0, b4, b5, b7, and b5 gave the best CV-LB. So I used it; I had no time to test other backbones.</li> </ul>
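<p>A sketch of the final ensemble rule (multiply the two models' probabilities and keep the top 33% most confident entries of the whole test set as True); treating the cutoff as a single global quantile over all chunk-bird entries is my reading of the description, not confirmed by the authors:</p> <pre><code>import numpy as np
import pandas as pd

def top_fraction_ensemble(prob_sed: pd.DataFrame, prob_cls: pd.DataFrame, keep_frac: float = 0.33):
    """Element-wise product of the two models' probabilities, thresholded at a global quantile."""
    conf = prob_sed * prob_cls                                    # aligned on index/columns
    cutoff = np.quantile(conf.to_numpy().ravel(), 1 - keep_frac)  # keep the top 33%
    return conf &gt; cutoff
</code></pre>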
BirdCLEF 2022
Few words about 14th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to everybody who organized this competition. Congratulations to all who received medals.<br> Seems that results of this competition aren't make a lot of sense because the host solution is the best.</p> <p>Few words about my trying, hope you find it interesting or even useful.</p> <p><strong>Metrics</strong><br> I start with macro Fb-score which seems are suitable for case of noisy labels. I chose quite big beta - 16 to boost recall.<br> After reading some comments of hosts about the metric I switched to:<br> <code>M = (1/N) * (1/21) * ∑∑( (TP/(TP+FN) + TN/(TN+FP))/2 )</code></p> <p>For any amount of training classes metric was calculated just over scored:<br> <code>y_true = tf.gather(y_true, indices=self.mask, axis=1)</code><br> <code>y_pred = tf.gather(y_pred, indices=self.mask, axis=1)</code></p> <p><strong>External data</strong><br> 2 recent recordings from Hawaii which can be found on Xeno-Canto.<br> ff1010bird records without birdcalls to mix with.</p> <p><strong>Input</strong><br> As input pre-computed mel-spectrogram (mainly) or PCEN (in few cases) were used. Both calculated by Librosa with default parameters, nmels=128. So for 5s shape is 128x313.<br> For mel-spectrogram primitive denoising strategy was used to reduce stationary noise over whole file. On inference same denoising approach was applied but on 5s chunk level instead whole file.</p> <p>In case of mel-spec, after augmentations and before mixup it was log-scaled and normed from 0 to 1.<br> In case of PCEN after tiny augmentations and before mixup it was normed from 0 to 1.</p> <p><em>Idea to beat noisy labels a little bit (was implemented for few training attempts):</em><br> Drop time bins with low std (can be done for PCEN or log-melspectrogram):</p> <p><code>std_over_f = pcen.std(axis=0)</code><br> <code>pcen_f = pcen[:, std_over_f &gt; std_over_f.mean()]</code></p> <p>Some mask smoothing can be applied before resampling by scipy.ndimage.convolve.</p> <p><strong>Approaches</strong><br> 2 different approaches were implemented:</p> <ol> <li>Multiclass classifier for all (152) species. CategoricalCrossentropy loss with label_smoothing=0.1 was used. Sigmoid is used as activation function of output layer (instead of softmax). On inference predictions were normalized from 0 to 1 (divided by max value).</li> <li>Multilabel classifier for chosen species. In addition to scored labels the species which have high overlap with scored (in secondary labels) or which have big amount of recordings are chosen. 
Records for unchosen species are used for random mixing with the initial input.<br> tf.nn.weighted_cross_entropy_with_logits with added label smoothing was used (weight=4.0, label_smoothing=0.1).</li> </ol> <p>The ff1010bird dataset was also tried as initial input, as a nocall class in the 1st approach and as the absence of all classes in the 2nd, instead of using it for mixing.</p> <p>Mainly the efficientnets (B0, B2, B3) were used with noisy-student weights; a seresnet101 was also trained for the mel-spec multiclass classifier.<br> Instead of simple global pooling, in most attempts AutoPool over one axis followed by mean pooling was used.</p> <p><strong>Augmentations</strong></p> <ol> <li>Random crop of a 5s window inside the whole file (or a restricted part in case of oversampling).</li> <li>Time stretching and compression by resizing over the time axis.</li> <li>Random padding if the file is shorter than 5 s.</li> <li>Random low-pass filtering done approximately, directly on the mel-spec.</li> <li>Mix with nobird records from ff1010bird.</li> <li>Mix over time: up to five consecutive 5s parts can be taken and mixed with random gains. </li> <li>Mixup: beta=2, alpha=3*beta. The mixing lambda was forced to be &gt;=0.5 for the scored label in case of mixing scored with non-scored.</li> </ol> <p><strong>Thresholds</strong><br> Trained models were evaluated over whole files from the validation set and the best thresholds were estimated. The obtained thresholds for the different approaches were averaged and tuned using knowledge about the amount of data per bird in the dataset.<br> A min value of 0.25 was used for 7 rare classes, and just two species (skylar and houfin) have thresholds above 0.5.</p> <p><strong>Ensemble</strong><br> First, average predictions are estimated per approach (mel-spec multiclass, pcen multiclass, mel-spec mixin multilabel), and then a final averaging was done with coefficients selected from LB results.</p> <p><strong>Post-processing</strong><br> To reduce FPs and find nocall parts, a few checks were implemented:</p> <ul> <li>std behavior for mel-spec and pcen;</li> <li>mean/max relations between the whole file and the current chunk;</li> <li>check the scored labels' max prediction over the file.</li> </ul> <h1>standwithukraine</h1> <h1>stoprussianaggression</h1>
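<p>As referenced above, here is a minimal sketch of the "drop time bins with low std" denoising idea, including the optional mask smoothing via scipy.ndimage.convolve. The kernel size and the 0.5 cut on the smoothed mask are assumptions for illustration, not values given by the author.</p> <pre><code>import numpy as np
from scipy.ndimage import convolve

def drop_quiet_time_bins(spec, kernel_size=5):
    """spec is a (freq, time) PCEN or log-mel spectrogram."""
    std_over_f = spec.std(axis=0)                    # per-time-bin std over frequency
    mask = (std_over_f &gt; std_over_f.mean()).astype(float)
    kernel = np.ones(kernel_size) / kernel_size      # simple box filter for smoothing
    mask = convolve(mask, kernel, mode="nearest")    # smooth the binary mask
    return spec[:, mask &gt; 0.5]                       # keep only the "active" time bins
</code></pre>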
BirdCLEF 2022
Simple 17th Place Solution [0.81 Public, 0.77 Private]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! Your notebook helped us a lot.</p> <p>No we didn't change anything in terms of architecture though we did mess around a little with the type of pooling but found the best pooling to be the BC21 2nd place pooling.</p> <p>Thanks to Kaggle, competition hosts, and fellow competitors for this very interesting competition. We joined this competition in the last month and had to work hard to understand the competition as neither <a href="https://www.kaggle.com/neomaoro" target="_blank">@neomaoro</a> or I have done anything with audio before this. Both <a href="https://www.kaggle.com/neomaoro" target="_blank">@neomaoro</a> and I worked equally hard on this competition.</p> <p><strong>TLDR</strong></p> <p>Our solution is based heavily on <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> and <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>'s <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">2nd place solution </a> from last year's birdclef competition. We made some modifications to this pipeline, the most significant being resizing spectrograms to 256 * 512 which gave a boost of 0.01 in public and private leaderboard.</p> <p><strong>Submission Notebook</strong><br> <a href="https://www.kaggle.com/code/vexxingbanana/18th-place-solution-0-77-private-0-81-public" target="_blank">https://www.kaggle.com/code/vexxingbanana/18th-place-solution-0-77-private-0-81-public</a></p> <p><strong>Code Pipeline and Data Setup</strong></p> <p>We heavily based our notebooks and python files for training and submitting on <a href="https://www.kaggle.com/julian3833" target="_blank">@julian3833</a>'s <a href="https://www.kaggle.com/code/julian3833/birdclef-21-2nd-place-model-train-0-66" target="_blank">training notebook</a> and <a href="https://www.kaggle.com/code/julian3833/birdclef-21-2nd-place-model-submit-0-66" target="_blank">inference notebook</a>. We setup training in the cloud on 2 A40 GPUs.</p> <p><strong>Bird Classifier</strong></p> <p>Our models were very similar and trained the same as last year's 2nd place solution with 30 second clips of train_audio resized into 6 x 5 second parts. We also used primary and secondary labels as targets. For inference, we fed in 6 x 5 second snippets into the model.</p> <p>We used the following backbones: eca_nfnet_l0, eca_nfnet_l1, tf_efficientnetv2_m_in21k, seresnext50_32x4d, and resnest50d_4s2x40d. Eca_nfnet_l0 and eca_nfnet_l1 achieved the best private leaderboard scores with 5 fold scores of 0.76 and 0.75 respectively. </p> <p>During training, we started with the 2nd place solution's training strategy then added some slight modifications. 
The modifications are as follows:</p> <ul> <li>Epochs: 20 -&gt; 25</li> <li>Optimizer: Adam -&gt; AdamW with learning rate 1e-3 and weight decay 1e-6</li> <li><strong>Resizing: After making a spectrogram, the spectrogram was resized to 256 x 512 using torchvision.</strong></li> <li>Background noise, label smoothing removed.</li> </ul> <p><strong>Validation</strong></p> <p>We couldn't find a way to make a good validation pipeline based on soundscapes since there were no soundscapes with labels given for this competition. Therefore, we simply used validation loss and public leaderboard as validation. </p> <p><strong>Ensembling</strong></p> <p>The ensembling of our models was pretty straightforward. We trained 5 fold models of each backbone and then averaged all of the models' predictions together. We tried out a few other ensembling methods such as purely voting and a combination of voting and averaging but found averaging all models to be the best performing. </p> <p><strong>What Did Not Work</strong></p> <p>Throughout the competition, we had ups and downs, many successes and some things that didn't work well. Here are some experiments which didn't work out for us:</p> <ul> <li>Applying PCEN</li> <li>Using 2021 Data </li> <li>Attention head</li> <li>Adding augmentations such as pink noise, gaussian noise, etc.</li> <li>SED models</li> <li>Adjusting mel spectrograms values to increase their size (window_size 1024, hop_size 320)</li> <li>Using recordings with either rating 0 or ratings &gt;= 2 and label smoothing of 0.01</li> <li>More epochs</li> </ul> <p>Thanks! </p> <p>So basically the model takes in 6 * batch size 5 second snippets. Then during inference, it takes duration / 5 5 second snippets. Therefore in our model, during training the model takes in a tensor shaped [96, 1, 512, 256] and during inference, it takes in a tensor shape [12, 1, 512, 256]. </p>
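<p>As a rough illustration of the resizing modification described above (the one credited with the ~0.01 boost), here is a minimal sketch that splits a 30 s clip into 6 x 5 s parts, builds mel spectrograms, and resizes each to 256 x 512 with torchvision. The mel parameters, the log scaling, and the height/width ordering are assumptions for illustration, not the team's exact settings.</p> <pre><code>import torch
import torchaudio
import torchvision.transforms.functional as TF

mel = torchaudio.transforms.MelSpectrogram(sample_rate=32000, n_mels=128)  # placeholder params

def thirty_sec_clip_to_images(wave_30s):
    """wave_30s: 1-D tensor with 30 s of audio at 32 kHz."""
    chunks = wave_30s.reshape(6, -1)                   # 6 chunks of 5 s each
    specs = torch.log(mel(chunks) + 1e-6)              # (6, n_mels, time), log-scaled (assumed)
    specs = TF.resize(specs.unsqueeze(1), [256, 512])  # resize each spectrogram
    return specs                                       # (6, 1, 256, 512)
</code></pre>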
BirdCLEF 2022
[Public #1 Private #2] + [Private #7/8 (potential)] solutions. The host wins.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2022 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As I promised, I will publish my solution right after the competition. This will be a long topic as the two solutions are totally different. As kaggle says &gt; You've been sharing this link too often. We prevent redundant posts to reduce spam. So I put some links in code blocks.</p> <p>Before the solutions, thanks to my teammate <a href="https://www.kaggle.com/jionie" target="_blank">@jionie</a>, our second gold medal, and first-time money zone!!!!!!!!!! Cheers!!!!!!!!!!</p> <p>I will start with our [Private #7/8 (potential)] solution first, since I <strong>promise</strong> you will say the "F" word after seeing the other solution.</p> <p><strong>[Private #7/8 (potential)] Solution</strong></p> <p>Training:<br> Our first solution is based on <a href="https://www.kaggle.com/kaerunantoka" target="_blank">@kaerunantoka</a> 's <a href="https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0" target="_blank">public notebook</a> and last year's second-place solution. The training code is pretty much a mixture of the two pipelines mentioned above. <br> Some key points to mention:</p> <ol> <li><p>data augmentation:<br> <code>OneOf([
    Gain(min_gain_in_db=-15, max_gain_in_db=15, p=0.8),
    GainTransition(min_gain_in_db=-15, max_gain_in_db=15, p=0.8),
]),
OneOf([
    NoiseInjection(p=1, max_noise_level=0.04),
    GaussianNoise(p=1, min_snr=5, max_snr=20),
    PinkNoise(p=1, min_snr=5, max_snr=20),
    AddGaussianNoise(min_amplitude=0.0001, max_amplitude=0.03, p=0.5),
    AddGaussianSNR(min_snr_in_db=5, max_snr_in_db=15, p=0.5),
], p=0.3),
AddBackgroundNoise(sounds_path=self.config.BACKGROUND_PATH, min_snr_in_db=0, max_snr_in_db=2, p=0.5),
Normalize(p=1),</code></p></li> <li><p>cut mix + mix up</p></li> <li><p>loss function: BCEWithLogits + BCEFocal2WayLoss</p></li> <li><p>hyperparameter settings (n_fft, n_mels, hop_length): refer to our inference notebook.</p></li> </ol> <p>Our inference kernel and all trained models are [<a href="https://www.kaggle.com/code/leonshangguan/private-7-8-final-of-submission" target="_blank">publicly available</a>] <br> Only one key point to mention: if a bird is detected in the previous or next 5s, we rank the probs and add the top-5 bird species to the currently detected birds (a small sketch is given at the end of this post).</p> <p>Because this solution was not selected as our final submission, we don't know the actual rank, but as it scores 0.79 on private, it should rank 7-8.</p> <p><strong>[Public #1 Private #2] Solution</strong></p> <p>Actually, I am curious that no one else found it; the solution is provided by the host and was finally allowed by the host.</p> <ol> <li>If you look at the discussion <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/307941" target="_blank">Meet the host</a>, you will find the host mentioned the <a href="https://birdnet.cornell.edu/" target="_blank">BirdNet project</a> in the post.</li> <li>Click the link, and it is easy to find the host's <a href="https://github.com/kahst/BirdNET-Analyzer" target="_blank">github repo</a>.</li> <li>Then, find the overlapping classes between the <a href="https://github.com/kahst/BirdNET-Analyzer/blob/main/checkpoints/V2.1/BirdNET_GLOBAL_2K_V2.1_Labels.txt" target="_blank">host's model</a> and the scored 
birds; 20 out of 21 scored birds are the same (all except aniani).</li> <li>Modify species_list.txt under the example folder accordingly; I have uploaded the modified repo to Kaggle <a href="https://www.kaggle.com/datasets/leonshangguan/birdnet." target="_blank">here</a>.</li> <li>Do some post-processing as in our [<a href="https://www.kaggle.com/code/leonshangguan/birdnet-inference" target="_blank">inference notebook</a>]</li> <li>Thanks to <a href="https://www.kaggle.com/ivanpan" target="_blank">@ivanpan</a> who opened <a href="https://github.com/kahst/BirdNET-Analyzer/issues/40" target="_blank">this issue</a>, the host says <code>we won't enforce the "non-commercial" clause in the license for BirdCLEF, and you can use BirdNET in your submissions.</code> (I don't know if they also benefit from this repo, we can wait for their solution)</li> </ol> <p>Some Notes:<br> Honestly, we didn't originally intend to use this model as a final submission until we noticed the issue mentioned above, and actually, the ensemble of our own models could also reach the gold zone. The answer was provided by the host, so that's why I say it is a weird solution. I guess BirdNet was trained on the data for the public leaderboard but not the private one, since there's a huge drop from the public LB to private (0.91 --&gt; 0.84; 0.85 --&gt; 0.78)</p> <p>BirdNet runs on CPU in less than 2h of inference time, while ours costs about 8h to run on GPU. So the host wins; my hope is that the first place team doesn't benefit from that model and can beat the host. Looking forward to their solution.</p> <p><strong>Finally, a big thanks to <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a> for hosting this amazing competition and providing the answer as well xD.</strong></p> <p>Yes exactly, </p> <p>Someone asked me <code>why didn't you contact the host already those 20 days ago when you found it or at least 12 days ago when you wrote your comment?</code> -- Because, as I said, we didn't manage to select BirdNet as our final submission, so I didn't think it was an issue and did not contact the host. We kept training our own model until the last minute, and our own one ranks 9th, which is also in the gold zone. To be honest, we didn't think BirdNet was allowed until the host said it was allowed. I understand our solution is weird and in a gray area, but I don't understand why someone downvotes my comments (and downvotes the 1st place team members' comments). This situation happened in <a href="https://www.kaggle.com/competitions/cassava-leaf-disease-classification/overview" target="_blank">Cassava Leaf Disease Classification </a> as well, where someone used a pre-trained model and reached top places. Kaggle should develop clearer rules, and we should keep looking forward to help avoid this happening again.</p> <p>BirdNet in this form has only existed since well after last year's competition -- has anyone even tried <a href="https://github.com/kahst/BirdNET-Lite" target="_blank">this repo</a> and <a href="https://github.com/kahst/BirdNET-Demo" target="_blank">this one</a> for last year's competition? These are from before 2021 (even 3 years ago); I don't know how they perform, but it seems no one even tried them.<br> prepare better for the next competition -- I think this applies to both the host and us</p> <p>Guys, I think there's no need to quarrel over this topic. I think the main problem is that I found the solution too late, otherwise I would have made it publicly available, and no one asked about BirdNet. 
I don't know how many teams found the solution (perhaps only the top-1 team and us). If this solution impacted too many teams, those teams should speak up and say so. It's a learning process for us as well as the host; things beyond expectations always happen. The best option is just never to allow external data, and I believe the host will organize things better in the future.</p>
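<p>Below is a minimal sketch of the neighbor-window post-processing mentioned for the [Private #7/8] solution (adding the top-5 species of an adjacent 5 s window when a bird is detected there). This is one plausible reading of the rule; the array layout and function name are assumptions for illustration.</p> <pre><code>import numpy as np

def add_neighbor_top5(probs, detected, top_k=5):
    """probs: (n_windows, n_birds) probabilities per 5 s window;
    detected: boolean array of the same shape with the current detections."""
    out = detected.copy()
    for i in range(len(probs)):
        for j in (i - 1, i + 1):                        # previous and next 5 s window
            if 0 &lt;= j &lt; len(probs) and detected[j].any():
                top = np.argsort(probs[j])[-top_k:]     # that window's top-k species
                out[i, top] = True                      # add them to the current window
    return out
</code></pre>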
BirdCLEF 2023
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks and congrats for winning the Prize!</p> <h1>Training Data</h1> <p>I use 2023 competition data only, but 3 of my models have pre-trained weights on 2022 data.</p> <h1>Model Architecture</h1> <p>I use the SED architecture, almost the same as <a href="https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0" target="_blank">this notebook</a>, with very minor modifications (e.g., emb_size, num_layers).</p> <p>I tried 8 backbones in total, including 3 large and 5 small: <br> large backbones: efficientnetv2_m, eca_nfnet_l0, seresnext50<br> small backbones: efficientnet_v0, mnasnet_100, spnasnet_100, mobilenetv2_100, resnet34</p> <p>As seresnext50 and resnet34 did not perform well based on my CV, I dropped them from the final ensemble.</p> <p>I got the information about mnasnet_100, spnasnet_100, and mobilenetv2_100 by asking ChatGPT: <code>What are small models offered by timm</code>, and ChatGPT replied with these models along with their number of parameters, which was very helpful. </p> <h1>Code</h1> <p>One of my inference notebooks can be found <a href="https://www.kaggle.com/code/leonshangguan/fork-of-fork-of-onnx-final-combine-bs4-b9daf6?scriptVersionId=130783179" target="_blank">here</a></p> <h1>Training Strategy</h1> <ol> <li><p>all models are trained on 5s clips.</p></li> <li><p>soft label: in the one-hot ground truth, I give the primary label "1" and the secondary labels "0.3" (a small sketch is given at the end of this post).</p></li> <li><p>data augmentation: add GaussianNoise, Background noise, PinkNoise, adjust Gain, use cut mix and mix up (very helpful).</p></li> <li><p>Progressive pretraining: the hardest part of this competition is to find a good set of parameters when computing the melspectrogram, especially n_fft, hop_length, and n_mels, which affect the input image (melspectrogram) size. I tried many different params and, surprisingly, I found that if you have a model trained on params set A, then load it as a pre-trained model (except the first and last layer) and train it on params set B, the model will perform much better than when directly training on params set B; this is the key idea of my solution. All of my models were trained on many different parameters and finally finetuned with n_mels=128, n_fft=1024, and hop_length=320 to make the input the same size, so that they can be ensembled efficiently.</p></li> <li><p>loss function: BCEWithLogits + BCEFocal2WayLoss, same as previous years.</p></li> <li><p>cv strategy: I use 5-fold stratified cross-validation and select the best 1 or 2 folds according to CV. </p></li> <li><p>ensemble strategy: each model contributes 1-2 folds to the ensemble, and I have 6-7 models in total, with 6 different backbones as mentioned before. 
The reason I didn't use all folds is the computational limitation set by the host.</p></li> <li><p>cpu acceleration: I use ThreadPoolExecutor as I mentioned <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/401587" target="_blank">here</a>, and I also convert models to onnx, which makes use of model quantization and shows faster inference times.</p></li> </ol> <h1>Others</h1> <ol> <li>more than 70 of my submissions encountered timeout errors, and I would therefore strongly request that more information be released for submissions, for example the running time, <a href="https://www.kaggle.com/herbison" target="_blank">@herbison</a>. I suffered from this a lot, and many nearly identical submissions sometimes passed and sometimes did not, even with identical code. </li> </ol> <p>I previously made a request regarding the submission time function, as mentioned <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337343" target="_blank">here</a>, when <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> and <a href="https://www.kaggle.com/jaafarmahmoud1" target="_blank">@jaafarmahmoud1</a> found that 10 PM to 6 AM EST is a good time to make submissions. The reason behind this recommendation is that 10 PM EST is considered late for the American region, while it corresponds to 4 AM in Europe and 10 AM in Asia, which might be too early for many participants. Consequently, this particular timeframe becomes somewhat magical, as very few people tend to make submissions during this period. Therefore, if one chooses to make a submission during this time, they could potentially benefit from it.</p> <ol> <li><p>based on my experiments, I believe the model params have an impact on inference time when using the CPU, i.e., with the same model architecture and number of params but different models, some may be fast and some may not. <a href="https://www.kaggle.com/chrisqiu" target="_blank">@chrisqiu</a> 's <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/409331" target="_blank">post</a> verifies this situation. As the host is trying to build apps that can be deployed in the real world using the CPU, I think this issue is worth mentioning <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a>. </p></li> <li><p>I found that both my CV and LB vary based on the different folds, which indicates that a good data split may have a great impact on the result, and some luck is needed when setting the random seed for the data split.</p></li> </ol> <h1>Acknowledgement</h1> <p>I appreciate <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a> for hosting this amazing competition, <a href="https://www.kaggle.com/herbison" target="_blank">@herbison</a> for answering my questions regarding the submission timeout error, and thanks to the Kaggle community for sharing their insights, ideas, and thoughts. This is a wonderful learning experience and I hope everyone enjoys it!</p> <p>Thank you!!!</p>
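<p>As referenced in the training strategy above, here is a minimal sketch of the soft-label scheme (primary label set to 1.0, secondary labels set to 0.3). The function and variable names are assumptions for illustration.</p> <pre><code>import numpy as np

def make_soft_target(primary, secondary, classes, secondary_value=0.3):
    """Build the one-hot style target: the primary label gets 1.0,
    each secondary label gets `secondary_value`."""
    idx = {name: i for i, name in enumerate(classes)}
    target = np.zeros(len(classes), dtype=np.float32)
    target[idx[primary]] = 1.0
    for name in secondary:
        if name in idx:               # some secondary labels may not be scored species
            target[idx[name]] = secondary_value
    return target
</code></pre>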
BirdCLEF 2023
1st place solution: Correct Data is All You Need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>If you compute Out Of Fold predictions and compute the metric once, you will have only 5 100% correct (padded) rows for each class, BUT if you compute the metric on each fold separately, you will have 5 100% correct rows for each fold.<br> So in the first case you will have a lower metric compared to the second one.</p> <p>Hi, Kagglers!</p> <p>Let's start our journey into the tricky world of Audio Bird Data and Modelling, but before this, a few very important words:</p> <p><em>I would like to thank the Armed Forces of Ukraine, Security Service of Ukraine, Defence Intelligence of Ukraine, and the State Emergency Service of Ukraine for providing safety and security to participate in this great competition, complete this work, and help science, technology, and business not to stop but to move forward.</em></p> <h1>If You Only Knew the Power of A100s GPUs</h1> <p>I managed to run 294 experiments: half of them with 5 folds and half of them with full-data training. So, all in all, many hypotheses were checked and, of course, most of them were rejected :) So let's take a look.</p> <h1>Data, Data is Everywhere</h1> <h2>Let's Start from 2023 Training Data</h2> <p>If you take a look at <code>train_metadata["primary_label"].value_counts()</code>, you may notice some strange maximum magic number: </p> <pre><code> </code></pre> <p>Why do we have a maximum of 500 representatives of some species? I do not know the 100% answer, but I have a strong hypothesis - a bug in the <a href="https://github.com/ntivirikin/xeno-canto-py" target="_blank">XC API</a>. I cannot remember the exact place in the code, but the overall problem lies in the data loading pipeline. Here's how it works:</p> <ol> <li>Download meta files - json files.</li> <li>Iterate over all urls in the meta file(s) and download them.</li> </ol> <p>BUT if you have more than 500 files for one species, then at the first stage you will have several json files (the maximum number of files in one json metafile is 500), and here we have the problem! At the second stage, the API takes into account only one json for each species and ignores the next ones, so you will have a maximum of 500 files per species.</p> <p>NOTE: I am not sure whether it is fixed in the latest version of the API, but I used a commit from the previous year, and it was there.</p> <p>From this bug, we can clearly understand that using the fixed API can hugely enrich our training dataset.</p> <h2>Other more boring stuff</h2> <ul> <li>2023/2022/2021/2020 competition data</li> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/398318" target="_blank">2020 additional competition data</a></li> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/394358#2179605" target="_blank">Zenodo</a></li> <li>Xeno-Canto </li> </ul> <h2>Data Preparation</h2> <h3>Training Data</h3> <p>In order to make validation more robust:</p> <ul> <li>Split samples of species with only one representative into 2 splits. 
This is done in order to have at least one CV split with each species in both the train AND val splits.</li> <li>Remove some duplicates manually.</li> <li>Remove duplicates by the following rule: two samples are duplicates if they have the same duration, author, and primary_label.</li> </ul> <h3>Additional training data</h3> <p>From the 2023/2022/2021/2020 competition data plus Xeno-Canto data, I have selected only files with this year's primary labels and added them to the final stage of training.</p> <h3>Pretrained Dataset</h3> <p>When I was using only 2023 training data in the final stage of training, pretraining on 2022/2021/2020 competition data boosted the score a lot. But after adding additional training data, pretraining stopped working on the leaderboard (though it still increased local validation). In the last week, I decided to return to pretraining experiments. This granted me one position up in the public leaderboard and two positions up in the private leaderboard - so, Kagglers, don't forget to revisit even rejected hypotheses :) </p> <p>Why and when did it work? Compared to previous pretraining experiments, I have:</p> <ul> <li>Filtered out 2023 train data duplicates not only by id but also by 'author + primary_label', as was suggested <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/395843" target="_blank">here</a></li> <li>Taken only species that are present in the 2023/2022/2021/2020 competition data + 2020 additional competition data, and only if there are more than 10 representatives of the species. Overall, 822 species. </li> <li>Added additional files for selected species from Xeno Canto.</li> </ul> <h3>Zenodo</h3> <p>I have selected nocall regions and used them as background augmentation.</p> <h3>Data Experiments That Did Not Work</h3> <ul> <li>Massive pretraining on all Xeno Canto data.</li> <li><a href="https://www.kaggle.com/datasets/christofhenkel/birdclef2021-background-noise" target="_blank">Background noise from 2021 2nd place</a> as Background augmentation </li> <li><a href="https://www.kaggle.com/datasets/mmoreaux/environmental-sound-classification-50" target="_blank">ESC50</a> as Background augmentation </li> <li>Selecting only High Quality samples (&gt;=32kHz) from additional data </li> <li>Maybe some other ideas out of 200+ experiments that I have just forgotten</li> </ul> <h1>Validation: Be soft like cmAP, Do not be hard like F1</h1> <p>Finally! We do not have to select a threshold on completely different training data compared to soundscape data, come up with super <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">sophisticated schemes</a> or fall 19 places (as I did in 2021)</p> <p>I have used pretty much the same validation scheme as in previous years' competitions:</p> <ul> <li>Stratified CV on 5 Folds</li> <li>Take the max prob over all 5-second clips of the WHOLE sample</li> </ul> <p>IMPORTANT: For Padded cmAP it is pretty important to take the mean across folds, NOT to do Out Of Fold !!! </p> <p>Of course, absolute numbers of CV and LB are different:</p> <ul> <li>Best Public LB: 0.84444 (4 fours :) )</li> <li>Best Private LB: 0.76392</li> <li>Best CV: 0.9083368282233681 </li> </ul> <p>But the rank correlation was pretty good. A CV improvement of 0.0x (or more) resulted in an improvement on the LB. 
I have nearly all CV results for my experiments, so I hope I will have time to publish a paper with a detailed ablation study and a CV-LB correlation study.</p> <h1>Training</h1> <p>I have taken a look at <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>'s <a href="https://www.youtube.com/watch?v=NCGkBseUSdM" target="_blank">presentation</a> and understood how strongly I was overfitting all the time.</p> <p>Due to time and device constraints, I have chosen the following scheme:</p> <ol> <li>Validate the hypothesis on CV and submit the first 2-3 folds.</li> <li>For ensembling, retrain on the full train data, so you have one model for each setup</li> </ol> <p>Training Details:</p> <ul> <li>50 Epochs</li> <li>Adam</li> <li>CosineAnnealing from 1e-4 (or 1e-3) to 1e-6</li> <li>Focal loss </li> <li>64 BS</li> <li>5 second chunk </li> <li>SUPER IMPORTANT: Class sampling weights</li> </ul> <pre><code>sample_weights = (
    all_primary_labels.value_counts()
    / all_primary_labels.value_counts().sum()
) ** (-0.5)
</code></pre> <ul> <li>Same setup for pretraining and finetuning</li> </ul> <p>Stages:</p> <ol> <li>Pretrain - refer to <code>Pretrained Dataset</code></li> <li>Tune only on scored species </li> </ol> <h1>Model</h1> <p>Because of computational constraints, we couldn't use the golden rule of Deep Learning: Stack More Layers!</p> <p>So I dived a bit into inference optimization techniques:</p> <ul> <li>ONNX - this worked pretty well for me. It improved the inference time slightly and allowed me to reduce the number of custom dependencies in the inference notebook.</li> <li>Quantization - I spent more than a week experimenting with it, but unfortunately, I had no success :( </li> <li>openvino - I didn't use or try this, I just read about it in the <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/412707" target="_blank">2nd place description</a> and burnt my chair </li> </ul> <p>Overall, my final submission is an ensemble of 3 Sound Event Detection (SED) models with the following backbones:</p> <ul> <li>eca_nfnet_l0 (2-stage training; start LR 1e-3)</li> <li>convnext_small_fb_in22k_ft_in1k_384 (2-stage training; start LR 1e-4)</li> <li>convnextv2_tiny_fcmae_ft_in22k_in1k_384 (1-stage training; start LR 1e-4)</li> </ul> <p>It was pretty important to tweak the starting learning rate for different architectures!!!</p> <h1>Augmentations</h1> <p>I was pretty picky about augmentation selection, so my final models used the following ones:</p> <ul> <li>Mixup : Simply OR Mixup with Prob = 0.5</li> <li>BackgroundNoise with Zenodo nocall</li> <li>RandomFiltering - a custom augmentation: in simple terms, it's a simplified random Equalizer</li> <li>Spec Aug: <ul> <li>Freq: <ul> <li>Max length: 10</li> <li>Max lines: 3</li> <li>Probability: 0.3</li></ul></li> <li>Time:<ul> <li>Max length: 20</li> <li>Max lines: 3</li> <li>Probability: 0.3</li></ul></li></ul></li> </ul> <h1>Small inference tricks</h1> <ul> <li>Using temperature mean: <code>pred = (pred**2).mean(axis=0) ** 0.5</code></li> <li>Using Attention SED probs * 0.75 + Max Timewise probs * 0.25</li> </ul> <p>All of these gave marginal improvements, but it was a matter of the first 3 places :) (a small sketch is given at the end of this post)</p> <h1>Other stuff that created a carbon footprint but did not improve my LB score</h1> <p>This section will be far from complete but let's add something that I have in mind now:</p> <ul> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/412707" target="_blank">2021 2nd place model</a>. 
I tried it (like I did in 2022) but unfortunately it did not work for me </li> <li>Pretrain on whole Xeno Canto</li> <li>Train on larger chunks. The same result occurred whether I inferred on smaller chunks or on same-length chunks</li> <li>Colored Noise augmentations </li> <li>CQT or <a href="https://github.com/denfed/leaf-audio-pytorch" target="_blank">LEAF</a></li> <li>Specific finetuning: smaller LR, smaller number of epochs, freeze backbone, different LRs for backbone and head</li> <li>Loss on Attention SED probs + Loss on Max Timewise probs</li> <li>Deep Supervision </li> <li>Different <code>alpha</code> for MixUp</li> <li>Transformer architectures. For example <a href="https://speechbrain.readthedocs.io/en/latest/API/speechbrain.lobes.models.ECAPA_TDNN.html" target="_blank">ECAPA TDNN</a> </li> </ul> <h1>Closing words</h1> <p>I hope you have not fallen asleep while reading. Finally, I want to thank the entire Kaggle community and congratulate all participants and winners.<br> Special thanks to Cornell Lab of Ornithology, LifeCLEF, Google Research, Xeno-canto, <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a>, <a href="https://www.kaggle.com/tomdenton" target="_blank">@tomdenton</a>, <a href="https://www.kaggle.com/holgerklinck" target="_blank">@holgerklinck</a>. All of you were super active in discussions, shared datasets and interesting materials, answered all questions, and of course, prepared such a cool competition!</p> <h1>Resources</h1> <p><strong>Inference Kernel</strong> : <a href="https://www.kaggle.com/code/vladimirsydor/bird-clef-2023-inference-v1/notebook" target="_blank">https://www.kaggle.com/code/vladimirsydor/bird-clef-2023-inference-v1/notebook</a><br> <strong>GitHub</strong> : <a href="https://github.com/VSydorskyy/BirdCLEF_2023_1st_place" target="_blank">https://github.com/VSydorskyy/BirdCLEF_2023_1st_place</a><br> <strong>Paper</strong> : TBD</p> <p>Yep, not all training data samples are used in each epoch</p> <p>If we are talking technically about how to do it - it is pretty simple:</p> <ol> <li>compute <code>sample_weights</code>, as I showed, and you will have a <code>primary_label:weight</code> mapping</li> <li>apply this mapping to each train sample and you will have a list of weights - <code>weights_list</code></li> <li>and finally create the torch sampler: <code>torch.utils.data.WeightedRandomSampler(weights_list, len(weights_list))</code></li> </ol>
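<p>As promised above, here is a minimal sketch of the two small inference tricks (the "temperature mean" over chunks and the blend of attention-based and max-timewise SED probabilities). The array shapes and function names are assumptions for illustration.</p> <pre><code>import numpy as np

def aggregate_clip_predictions(chunk_probs):
    """Temperature mean over the 5 s chunks of one clip:
    pred = (pred**2).mean(axis=0) ** 0.5. chunk_probs: (n_chunks, n_classes)."""
    return (chunk_probs ** 2).mean(axis=0) ** 0.5

def blend_sed_outputs(att_probs, framewise_probs):
    """Attention SED probs * 0.75 + max-over-time framewise probs * 0.25.
    att_probs: (n_classes,); framewise_probs: (n_frames, n_classes)."""
    return 0.75 * att_probs + 0.25 * framewise_probs.max(axis=0)
</code></pre>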
BirdCLEF 2023
20th place solution: SED + CNN ensemble using onnx
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used paperspace gradient growth plan. Mainly used A6000.<br> <a href="https://www.paperspace.com/gradient/pricing" target="_blank">https://www.paperspace.com/gradient/pricing</a></p> <p>First, I would like to thank the competition hosts for organizing this competition and also the participants.</p> <p>I would especially like to express a huge thank you to my teammates <a href="https://www.kaggle.com/yokuyama" target="_blank">@yokuyama</a>, <a href="https://www.kaggle.com/yiiino" target="_blank">@yiiino</a>, <a href="https://www.kaggle.com/mobykjr" target="_blank">@mobykjr</a>, and <a href="https://www.kaggle.com/vupasama" target="_blank">@vupasama</a> !!!</p> <h2>Overview</h2> <p>My final submission is an ensemble of the following 5 models.</p> <ul> <li>eca_nfnet_l0 (SED)</li> <li>eca_nfnet_l0 (Simple CNN)</li> <li>tf_efficientnetv2_b0 (SED)</li> <li>tf_efficientnet_b0.ns (Simple CNN)</li> <li>tf_mobilenetv3_large_100 (Simple CNN)</li> </ul> <p>These models were accelerated using onnx.</p> <h2>Our Approach</h2> <ul> <li>Model<ul> <li>We used two main model structures: a SED and a simple CNN.</li> <li>For the SED model, We created it based on <a href="https://www.kaggle.com/kaerurururu" target="_blank">@kaerurururu</a>'s notebook from BirdCLEF 2022.<br> <a href="https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0" target="_blank">https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0</a></li></ul></li> <li>Audio<ul> <li>Torchaudio was used.</li> <li>Waveform<ul> <li>duration: 5s</li> <li>sample rate: 32000</li> <li>Secondary label was used (used with soft label at 0.5)</li> <li>If the audio was less than 5 seconds, the audio was repeated and concatenated until it exceeds 5 seconds</li> <li>Used the first 5s of the audio during validation.</li> <li>Normalized with audiomentations.Normalize</li></ul></li> <li>Mel Spectrogram<ul> <li>n_fft: 2048</li> <li>win_length: 2048</li> <li>hop_length: 320</li> <li>f_min: 50</li> <li>f_max: 14000</li> <li>n_mels: 128</li> <li>top_db: 80</li> <li>Standardized using the mean and standard deviation of the ImageNet.</li></ul></li></ul></li> <li>Model Parameters<ul> <li>drop_rate: 0.5</li> <li>drop_path_rate: 0.2</li> <li>criterion: BCEWithLogitsLoss (label smoothing: 0.0025)</li> <li>optimizer: AdamW (lr: 1.0e-3, weight_decay: 1.0e-2)</li> <li>scheduler: OneCycleLR (pct_start: 0.1, div_factor: 1.0e+3, max_lr: 1.0e-3)</li> <li>epoch: 30</li> <li>batch_size: 32</li></ul></li> <li>Data Augmentation<ul> <li>waveform<ul> <li>During training, we extracted random crops or the first 5s of the audio from the total audio. Each was done at a ratio of 50% each.</li> <li>audiomentations were used.<ul> <li>audiomentations.AddBackgroundNoise</li> <li>audiomentations.AddGaussianSNR</li> <li>audiomentations.AddGaussianNoise</li></ul></li> <li>BackgroundNoise was based on the first-place solution from Birdclef2022. 
We used this dataset.<br> <a href="https://www.kaggle.com/datasets/christofhenkel/birdclef2021-background-noise" target="_blank">https://www.kaggle.com/datasets/christofhenkel/birdclef2021-background-noise</a></li></ul></li> <li>melspectrogram<ul> <li>Torchaudio masking was used.<ul> <li>torchaudio.transforms.FrequencyMasking(<br> freq_mask_param=mel_specgram.shape[1] // 5<br> )</li> <li>torchaudio.transforms.TimeMasking(<br> time_mask_param=mel_specgram.shape[2] // 5<br> )</li></ul></li> <li>mixup (alpha = 0.5), cutmix (alpha = 0.5)</li></ul></li></ul></li> <li>Pretraining<ul> <li>Data from Birdclef 2021 and 2022 were used for pre-training.</li> <li>Pre-training was done with a duration of 15s and no oversampling.<ul> <li>A duration of 15s seemed to be better than 5s for pre-training.</li></ul></li></ul></li> <li>Dealing with Imbalanced Data<ul> <li>We oversampled so that each class had at least 50 samples.</li></ul></li> <li>Adding Nocall Data<ul> <li>The nocall data used the background noise audio used in augmentation. The nocall data was created with all labels set to 0.</li></ul></li> <li>post-processing<ul> <li>For the SED model, the output was averaged framewise and clipwise.</li></ul></li> <li>Inference Acceleration<ul> <li>We used onnx, which was considerably faster than torchscript.</li> <li>Data preprocessing was standardized so that the same preprocessed data could be input into multiple models.</li> <li>Data was preprocessed in advance and stored in a dict. Parallel processing was used for this preprocessing.</li></ul></li> <li>CV Strategy<ul> <li>StratifiedKFold (n=5, stratify=primary_label)</li> <li>All bird species were included in the training data.</li></ul></li> </ul> <h2>What Worked</h2> <ul> <li>oversampling</li> <li>Random cropping or extraction of the first 5s of audio from the total audio during training (each executed at a ratio of 50%).</li> <li>The following augmentations<ul> <li>Background noise, Gaussian noise, mask melspectrogram</li></ul></li> <li>SED model</li> <li>label smoothing</li> <li>Adding Nocall Data</li> <li>In pre-training, increasing the duration</li> <li>Ensemble<ul> <li>The SED model and simple CNN ensemble were good.</li></ul></li> <li>secondary label</li> <li>Set up many num_workers in the data loader → training speed is greatly increased!</li> </ul> <h2>What Didn’t Work</h2> <ul> <li>The following augmentations<ul> <li>Pink noise, Random volume</li></ul></li> <li>Focal Loss</li> <li>Long durations (15s, 30s, etc.) when training on 2023 data.</li> <li>Dealing with the previous and next seconds of audio data in the submission<ul> <li>Use 5 seconds before and after for a total of 15 seconds to create a mel spectrogram and predict → Notebook Timeout</li> <li>Moving average, moving weighted average</li></ul></li> <li>Large Models</li> <li>geometric mean</li> <li>LSTM head</li> <li>strong dropout</li> <li>rank ensemble</li> <li>hand labeling<ul> <li>It is thought to have led to overfitting…</li></ul></li> </ul> <h2>Code</h2> <ul> <li>GitHub (My training code) → <a href="https://github.com/moritake04/birdclef-2023" target="_blank">https://github.com/moritake04/birdclef-2023</a></li> </ul> <p>Thank you very much!<br> "the output was averaged framewise and clipwise" showed a slight improvement on the LB.</p>
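<p>To make the audio front end described above concrete, here is a minimal torchaudio sketch using the listed mel-spectrogram parameters. The exact calls, the min-max scaling, and the 3-channel replication before ImageNet standardization are assumptions for illustration, not the team's code.</p> <pre><code>import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=32000, n_fft=2048, win_length=2048, hop_length=320,
    f_min=50, f_max=14000, n_mels=128,
)
to_db = torchaudio.transforms.AmplitudeToDB(top_db=80)

def waveform_to_model_input(wave):
    """wave: 1-D tensor with 5 s of audio at 32 kHz."""
    spec = to_db(mel(wave))                                        # (n_mels, time), in dB
    spec = (spec - spec.min()) / (spec.max() - spec.min() + 1e-6)  # scale to 0..1 (assumed)
    spec = spec.unsqueeze(0).repeat(3, 1, 1)                       # replicate to 3 channels (assumed)
    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)       # ImageNet statistics
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
    return (spec - mean) / std
</code></pre>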
BirdCLEF 2023
24th place solution - pre-training & single model (5 folds ensemble with ONNX)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone!</p> <p>First, thanks to the Cornell Lab of Ornithology and the Kaggle team for hosting the competition. Also, congratulations to all winners and participants.</p> <h2>TL;DR</h2> <p>I joined about 2 weeks before the competition ended, and I had about 7 days to participate due to my busy daily life. Still, it was a great opportunity for me to challenge myself to build a model in a short time. So, I had to focus on which recipes would work best with a minimum of trials.</p> <p>Thankfully, there are many good-quality resources like the top solutions from the past competitions. I started to follow up on those solutions and figured out the working recipes in general (past BirdCLEF 2020 experience helped a lot too).</p> <p>As a consequence, and with some luck, I managed to build a decent model, I guess : )</p> <h2>Architecture</h2> <p>Here's the pipeline.</p> <ol> <li>pre-train on the 2020, 2021, 2022, xeno-canto datasets.</li> <li>fine-tune on the 2023 dataset (based on the pre-trained weights).<ul> <li>minor classes (&lt;= 5 samples) are included in all folds</li></ul></li> </ol> <p>I applied the same training recipes (e.g. augmentation, loss functions, …) at each step.</p> <h3>CV</h3> <p>(although based on only a few experiments) my CV score and LB/PB are kinda correlated.</p> <table> <thead> <tr> <th>Exp</th> <th>CV</th> <th>LB</th> <th>PB</th> <th>Note</th> </tr> </thead> <tbody> <tr> <td><code>effnetb0</code></td> <td>0.7720</td> <td>0.82438</td> <td>0.73641</td> <td>multiple losses, 5 folds</td> </tr> <tr> <td><code>effnetb0</code></td> <td>0.7693</td> <td>0.82402</td> <td>0.73604</td> <td>clipwise loss, 5 folds</td> </tr> <tr> <td><code>eca_nfnet_l0</code></td> <td>0.7753</td> <td>0.80731</td> <td>0.71845</td> <td>clipwise loss, single fold</td> </tr> </tbody> </table> <h3>Model</h3> <p>I used a SED architecture with the <code>efficientnet_b0</code> backbone. I also tested the <code>eca_nfnet_l0</code> backbone, and it has a better CV score, but I couldn't use it due to the latency.</p> <h3>Training recipe</h3> <ul> <li>[<strong>Important</strong>] pre-training</li> <li>[<strong>Important</strong>] augmentations<ul> <li>waveform-level<ul> <li>[Important] OR mixup on a raw waveform</li> <li>gaussian &amp; uniform noise</li> <li>pitch shift</li> <li>[Important] background noise</li></ul></li> <li>spectrogram-level<ul> <li>spec augment</li></ul></li></ul></li> <li>log-mel spectrogram<ul> <li>n_fft &amp; window size 1024, hop size 320, min/max freq 20/14000, num_mels 256, top_db 80. 
(actually, I wanted an n_fft of 2048, but I set it to 1024 by mistake)</li></ul></li> <li>trained on 5 sec clips</li> <li>stratified k-fold (5 folds, on primary_label)</li> <li>label smoothing 0.1</li> <li>multiple losses (from <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243351" target="_blank">BirdCLEF 2021 top 5</a>; a small sketch is given at the end of this post)<ul> <li>BCE loss on the clip-wise output w/ weight 1.0</li> <li>BCE loss on the max of segment-wise outputs w/ weight 0.5</li></ul></li> <li>fp32</li> <li>AdamW + cosine annealing (w/o warmup)<ul> <li>50 epochs (usually converged between 40 ~ 50)</li></ul></li> </ul> <h2>Inference</h2> <p>I could ensemble up to 4 models with PyTorch (it took nearly 2 hrs). To mix more models, I utilized ONNX and did graph optimization, which allowed one more model to be ensembled! Finally, I could ensemble 5 models (a single model, 5 folds). Also, to utilize the full CPU, I did some multi-processing.</p> <h2>What did not work (though I might be wrong)</h2> <ul> <li>secondary label (both hard label and soft label (e.g. 0.3, 0.5))</li> <li>focal loss</li> <li>longer clips (e.g. 15s)</li> <li>post-processing (proposed in the BirdCLEF 2021 and 2022 competitions)<ul> <li>aggregate the probs of the previous and next segments.</li> <li>if there's a bird above the threshold, multiply all segments of that bird by a constant.</li></ul></li> </ul> <p>I hope this could help!</p> <p>Thanks : )</p>
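<p>As referenced above, here is a minimal sketch of the "multiple losses" setup (BCE on the clip-wise output with weight 1.0 plus BCE on the max of the segment-wise outputs with weight 0.5). Tensor shapes and names are assumptions for illustration.</p> <pre><code>import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def multiple_losses(clipwise_logits, segmentwise_logits, target):
    """clipwise_logits: (batch, n_classes); segmentwise_logits: (batch, n_segments, n_classes);
    target: (batch, n_classes), possibly already label-smoothed."""
    loss_clip = bce(clipwise_logits, target)                       # weight 1.0
    loss_seg = bce(segmentwise_logits.max(dim=1).values, target)   # weight 0.5
    return 1.0 * loss_clip + 0.5 * loss_seg
</code></pre>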
BirdCLEF 2023
2nd place solution: SED + CNN with 7 models ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your reply.<br> I still haven't received it yet; maybe it takes some time….</p> <p>Congratulations to all the winners! Thanks to Kaggle and Cornell Lab of Ornithology for hosting this interesting competition.</p> <p>This is my first solo gold medal and I am glad to have this result.</p> <p>This competition shared a lot of similarities with the past BirdCLEF competitions (2020/2021/2022). Thus I spent a lot of time gathering the solutions shared by the top teams in past competitions. Special thanks to all of you for sharing such important information!</p> <p>Let me briefly introduce my solution. I will update it with more details in a couple of days.</p> <h1>Most important (7 models ensemble!)</h1> <p>Please see the notebook below.<br> <a href="https://www.kaggle.com/code/honglihang/openvino-is-all-you-need" target="_blank">openvino is all you need!!</a></p> <h1>Training data</h1> <p>Here is my training data.</p> <ul> <li>2023/2022/2021/2020 competition data</li> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/398318" target="_blank">2020 additional competition data</a></li> <li>additional training data from xeno-canto, including 2023 comp species in both foreground and background (records with 2023 comp species only in the background and shorter than 60 seconds are included). </li> </ul> <p>I intended to collect more records from the ebird site, but I realized that ebird data is not public and cannot be used. I <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/393023" target="_blank">asked the host</a> and confirmed that. Thanks <a href="https://www.kaggle.com/tomdenton" target="_blank">@tomdenton</a> for answering my questions.</p> <p>Thus, my training pipeline does not contain records from the ebird site.</p> <h1>Model Architecture</h1> <p>First, I used the SED architecture. The same as yours.</p> <p>backbones are:</p> <ul> <li>tf_efficientnetv2_s_in21k</li> <li>seresnext26t_32x4d</li> <li>tf_efficientnet_b3_ns</li> </ul> <p>All of them are trained on 10 sec clips.</p> <p>Second, I used the CNN proposed by the <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">2nd place of the 2021 competition</a></p> <p>backbones are:</p> <ul> <li>tf_efficientnetv2_s_in21k</li> <li>resnet34d</li> <li>tf_efficientnet_b3_ns</li> <li>tf_efficientnet_b0_ns</li> </ul> <p>All except b0 are trained on 15 sec clips. b0 is trained on 20 sec clips.</p> <h1>Pseudo Labeling and Hand Labeling</h1> <p>I used a SED model to generate pseudo labels and extracted the potential nocalls using a quantile threshold. Then, I hand-labeled the potential nocalls by listening to the records. I hand-labeled about 1800 records but did not see improvement. Maybe the pseudo labels contain more FPs than FNs. 
I did not have time to further investigate the predictions.</p> <h1>Model Training</h1> <p>augmentations:</p> <ul> <li>GaussianNoise</li> <li>PinkNoise</li> <li>Gain</li> <li>NoiseInjection</li> <li>Background Noise (nocall in 2020, 2021 comp + rainforest + environment sounds + nocall in freefield1010, warblrb, birdvox)</li> <li>PitchShift</li> <li>TimeShift</li> <li>FrequencyMasking</li> <li>TimeMasking</li> <li>OR Mixup on waveforms</li> <li>Mixup on spectrograms.</li> <li><a href="https://www.kaggle.com/competitions/birdsong-recognition/discussion/183269" target="_blank">With a probability of 0.5 lowered the upper frequencies</a></li> <li>self-mixup for records with 2023 species only in the background (60 sec waveform -&gt; split into 6 * 10 sec -&gt; np.sum(audios, axis=0) to get a 10 sec clip)</li> </ul> <p>I used weights (computed from primary_label and secondary_labels) for the DataLoader in order to cope with the unbalanced dataset.</p> <h1>Training stages</h1> <p>For training I used 2 stages:</p> <ol> <li>Pretrain on all data (834 species).</li> <li>Finetune on 2023 species (264 species).</li> </ol> <p>In both stages, I first train the model with CrossEntropyLoss, and then train with BCEWithLogitsLoss(reduction='sum'). The model converges faster with CrossEntropyLoss than with BCEWithLogitsLoss, but BCEWithLogitsLoss gives a better score.</p> <p>To give more diversity, models are trained on different windows and different mixup rates, and some of them are trained only with CrossEntropyLoss. Also, 3 of the models are finetuned on 30s clips.</p> <h1>CV strategy</h1> <ul> <li>For each validation sample - slice the first 60 seconds into pieces -&gt; predict each piece -&gt; max(sample_predictions, dim=pieces).</li> </ul> <p>CV does not show correlation with LB, but it seems that the right ways to improve the LB are those which do not significantly decrease CV. So I monitored the CV when tuning the pipeline.</p> <h1>Inference</h1> <p>For the SED model, feed the model a 10 sec chunk BUT apply the head only on the centered 5 sec of the reduced CNN feature map and use max(framewise, dim=time).</p> <p>Also, TTA (2s) is used for the SED model.</p> <p>Important: converting the PyTorch models to OpenVINO models significantly reduces inference time (by about 40%). (The eca_nfnet_l0 backbone ONNX cannot be converted to OpenVINO because the stdconv layer in timm uses the train mode of F.batch_norm in its forward method). That is the magic of ensembling 7 models.</p> <h1>Ensemble</h1> <p>I spent quite a lot of time understanding the metrics. The ensemble is as follows:</p> <ol> <li>(weighted average, 0.84 on LB, 0.76 on private) Apply a weighted average on the raw logits. This ensemble does not make sense to me because the output logits of the models differ and should not be simply added, otherwise the result is biased. But considering that the absolute value of the logits may also contribute to the score, and it does give the best LB, I chose it for the final submission as a gamble.</li> <li>(rank average, 0.83 on LB, 0.75 on private) Convert the logits to ranks and apply a weighted average on the ranks (a small sketch is given at the end of this post). I think this is the more reasonable way to ensemble, considering that I don't have a reliable CV. <strong>Be careful that in this comp the submission is padded with 5 rows of 1, thus the ranking should start from 0 to prevent the largest rank from becoming 1 after converting rankings to percentile form. (For example, 0, 0.333, 0.666 is good while 0.333, 0.666, 1.0 is bad, 
otherwise the score will decrease due to the potential false positive of those ranked 1.0) </strong></li> </ol> <p>Basically all of my single model's performance reaches about 0.81-0.82 on LB, so the weight of each model is similar to give diversity although higher public lb requires large weight of sed v2s and sed b3_ns.</p> <p>Looking at the private LB, weighted average seems to be better. emmm……Why?</p> <h1>What did not work</h1> <ul> <li><a href="https://www.kaggle.com/competitions/birdsong-recognition/discussion/183269" target="_blank">contrast</a></li> <li>EMA</li> <li>BirdNet</li> <li>adjusting logit according to previous and next 5s clip</li> <li>q transform</li> </ul> <p>Inference Notebook: <a href="https://www.kaggle.com/code/honglihang/2nd-place-solution-inference-kernel" target="_blank">https://www.kaggle.com/code/honglihang/2nd-place-solution-inference-kernel</a><br> github: <a href="https://github.com/LIHANG-HONG/birdclef2023-2nd-place-solution" target="_blank">https://github.com/LIHANG-HONG/birdclef2023-2nd-place-solution</a></p> <p>Thanks! It is really a long journey for me and I slept for two days to recover from lack of sleep </p> <p>Thanks you! We should all thanks to those who shared these amazing stuffs in the past few years.</p> <p>Thank you!</p> <p>Hi, thanks for your question.</p> <blockquote> <p>I hand labeled about 1800 records but did not see improvement.</p> </blockquote> <p>I used both pseudo label and hand label, although there is no improvement seen in public LB, I have not confirm whether it has improvement in private LB.</p> <blockquote> <p>self mixup for records with 2023 species only in background.</p> </blockquote> <p>The idea comes from that I found mixup in waveform is effective. So I think I can take use of the records with 2023 species only in background by applying the same idea because I no longer need to care about where the bird call appears in background by doing this mixup to fully use the record. I have not confirmed whether it is useful, but there was a marginal improvement in the score.</p> <blockquote> <p>For SED model, feed model 10 sec chunk BUT apply head only on centered 5 sec reduced CNN image and use max(framewise, dim=time).</p> </blockquote> <p>Yes, it is expensive which double the inference time. So I only use 10s, not any longer…</p> <blockquote> <p>Also, tta(2s) is used for SED model.</p> </blockquote> <p>TTA is only applied on attention head so it is super fast. The encoder extract the 10s feature and the target 5s is in the center of the matrix (2.5s to 7.5s). For tta, I take the feature from 1.5s to 6.5s and 3.5s to 8.5s.</p> <p>Thank you! It is also my first time to try openvino.</p> <p>Thanks! Happy kaggling!</p> <p>Thank you! The same to you. Your solution is super incredible and intersting. I will take a deep look into your solution after I get enougth sleep~</p> <p>By the way, have you already got the winner email from the kaggle team? I haven't…</p> <p>Thank you. I appreciate you sharing the data in 2020 comp and I took full use of it. I used <a href="https://github.com/asteroid-team/torch-audiomentations" target="_blank">torch-audiomentations</a> to perform pitch shift and time shift on GPU</p> <p>Thank you! happy kaggle</p> <p>Thank you and congratulations to you too!</p> <p>Thank you! Also congrats to you!<br> The other top solutions are very incredible to me. I will also read in detial after getting back to sleep..</p> <p>The same to your team! Congratulations! 
It has been a really a fulfilling(and short sleep) journey for me since the RSNA.<br> Looks that you have made birdnet work, I will read your solution in detail after getting enough sleep.</p> <p>Also congratulations to you! Thanks for sharing the information about onnx. I am also new to onnx and openvino and maybe it is lucky for me to try openvino at first because I learnt about openvino in a project at work</p> <p>Thank you! I started kaggle about 2 years ago and also had a hard time to understand the solutions. I think it is a process of try and error based on the knowledge shared by others. It is better to get new ideas through reading about the papers and the past top solutions.</p>
BirdCLEF 2023
37th place solution - TF CNN + BirdNet emb. cls. & XGB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to the winners and also to the host that creates these annual competitions. 👌</p> <p><strong>Summary of the solution</strong></p> <p>Inference with 12 models all together within the 2-hour CPU inference window.</p> <p>5 different TensorFlow CNNs with SOTA techniques, trained with previous competition data and finetuned with the current data, then converted to ONNX. Ensembled together with: 1. The latest BirdNet TFLite version, covering 124 of the 264 birds. 2. A custom classification TF head trained with BirdNet embeddings on competition data. 3. 5-fold XGB trained with BirdNet embeddings on competition data. I also used the non-bird classes from BirdNet to reduce the bird probabilities by 50% if the non-bird classes had a higher probability in a 5-second window; of no use here, but interesting to implement.</p> <p><strong>Training and data</strong></p> <p>As starter code I used <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> <a href="https://www.kaggle.com/code/awsaf49/birdclef23-pretraining-is-all-you-need-train" target="_blank">https://www.kaggle.com/code/awsaf49/birdclef23-pretraining-is-all-you-need-train</a>: TensorFlow-based TPU training with W&amp;B logging and the latest SOTA audio techniques, SpecAug like time/freq masking and audio augmentations like Gaussian Noise, Random CropPad, CutMix and MixUp.</p> <blockquote> <p>Methodology 🎯<br> In this notebook, we will explore how to identify bird calls using TensorFlow. Specifically, this notebook will cover:<br> How to use tf.data for audio processing tasks and reading .ogg files in TensorFlow<br> How to extract spectrogram features from raw audio on TPU/GPU, which reduces CPU bottleneck significantly, speeding up the process by ~ 4×on P100 GPU compared to the previous notebook.<br> Unlike the previous tutorial, this notebook will perform spectrogram augmentation such as TimeFreqMask and Normalization on GPU/TPU and perform CutMix and MixUp with audio data on CPU.<br> This notebook demonstrates how pre-training on the BirdCLEF - 2020, 2021, 2022 &amp; Xeno-Canto Extend dataset can improve transfer learning performance. CNN backbones, like EfficientNet, struggle with spectrogram data even with ImageNet pre-trained weights as they are not fimilar with audio data. Pre-training on an audio dataset, like BirdCLEF, can mitigate this issue and can yield a ~ 5% improvement in local validation and ~ 2% improvement in leaderboard. This notebook is compatible with both GPU, TPU, and the newly launched TPU-VM device is automatically selected, so you won't have to do anything to allocate the device.</p> </blockquote> <p>Credit to <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> </p> <p><strong>Inference and submissions</strong></p> <p>TF CNN models trained and used: <br> Efficientnetb1, Efficientnetb0, Efficientnetv1b1ns, Efficientnetv2m, Efficientnetv2s – some trained with all the data, some on a single fold; many also trained/saved with a weighted average over 2-5 checkpoint weights during the pretraining phase for better generalization. All models were converted to ONNX format in test mode. 
All models were picked based on speed and performance, e.g. whether they could be converted to ONNX and how fast their CPU inference was in ONNX format. <br> BirdNet:<br> I used the code and models from <a href="https://github.com/kahst/BirdNET-Analyzer" target="_blank">https://github.com/kahst/BirdNET-Analyzer</a> and took the latest V2.3 TFLite model for the best test-time speed. I could find 124 of the 264 competition birds in the model, plus some other interesting classes among the ~3K BirdNet classes.<br> I used BirdNet to create 4 additional models for the CNN ensemble:</p> <ol> <li>Classify and extract the 124 birds and also ~100 non-bird classes for further ensembling and testing.</li> <li>Used the same model’s embeddings to classify with a custom cls-head that I had pretrained on all the classes. This is described in the BirdNet code, but I used a different, deeper architecture.</li> <li>Used the same model’s embeddings to classify with 5-fold XGB models that I had pretrained on all the classes using the multi:softprob objective and the aucpr metric.</li> <li>I concatenated the ensembled bird classes with the non-bird classes, extracted the 5-second windows where the non-bird classes had the highest probabilities, and reduced the probability of the 264 bird classes there by 50%, since noise or other sounds reduce the likelihood of a bird call. This was of no use for this metric, but still interesting to test for other purposes and metrics 😊</li> </ol> <p>Then all CNNs and BirdNet variants were power averaged and weighted ensembled.<br> Below is the progression of the competition metric with the different models/solutions. These are not the picked submissions, but a different ensemble weighting (also the best scoring), tested post deadline.</p> <p>Private – Public – Models</p> <p>.71621 – .81447 – Only CNNs<br> .72391 – .81885 – With CNNs + BirdNet classifier extracting the 124 birds.<br> .72250 – .81873 – With CNNs + BirdNet embeddings classified by the custom cls-head pretrained on all the classes.<br> .72459 – .81996 – With CNNs + BirdNet embeddings classified by the 5-fold XGB pretrained on all the classes.<br> .73132 – .82337 – All of the above weighted ensembled.</p> <p><strong>That’s it!</strong></p> <hr> <p>It's 5 CNN ONNX models; the rest is BirdNet TFLite, but it creates 7 models out of 1 BirdNet TFLite output, since I feed the embeddings to the custom cls head and the 5 XGB heads, which are fast. For the CNNs I noticed that FSR made them slower, and different architectures differ in ONNX test-time speed: EffV2 works perfectly, ConvNeXts are slower, and NFNets I didn't manage to convert at all. But maybe it's different in PyTorch, as this was in TensorFlow.</p>
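<p>To make the final ensembling step above a little more concrete, here is a minimal NumPy sketch of one way to do a power-averaged, weighted ensemble of per-window probabilities. The weights, the power value, and the array shapes are illustrative assumptions, not the author's actual settings.</p>
<pre><code>import numpy as np

def power_weighted_ensemble(probs_list, weights, power=2.0):
    # Raise each model's probabilities to a power, blend with weights, then map back.
    sharpened = [np.clip(p, 1e-8, 1.0) ** power for p in probs_list]
    blended = sum(w * p for w, p in zip(weights, sharpened)) / sum(weights)
    return blended ** (1.0 / power)

rng = np.random.default_rng(0)
preds = [rng.random((12, 264)) for _ in range(3)]      # 3 dummy models, 12 windows, 264 birds
ensemble = power_weighted_ensemble(preds, weights=[0.5, 0.3, 0.2])
print(ensemble.shape)                                  # (12, 264)
</code></pre>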
BirdCLEF 2023
3rd place solution: SED with attention on Mel frequency bands
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to the organizers Sohier Dane, Stefan Kahl, Tom Denton, Holger Klinck and all involved institutions (Kaggle, Chemnitz University of Technology, Google Research, K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology, LifeCLEF, NATURAL STATE, OekoFor GbR and Xeno-canto).</p> <p>This year’s competition was a welcome change compared to previous challenges. The cMAP evaluation metric eliminated the need for threshold tuning, while the inference time limit encouraged a focus on efficient models with a good balance between accuracy and speed.</p> <p>In this post I want to briefly introduce some aspects of my solution. A more detailed description will be provided later as an update or in the upcoming working note.</p> <h3>Quick summary</h3> <ul> <li>Modified SED architecture with attention on frequency bands</li> <li>Addressing domain shift with reverb augmentation </li> <li>Using freezed TorchScipt models and precalculated inputs to speed up inference</li> <li>Addressing fluctuating inference time by setting a timer in inference notebook</li> </ul> <h3>Datasets</h3> <ul> <li>2021/2023 competition data</li> <li><a href="https://www.kaggle.com/c/birdsong-recognition/discussion/159970" target="_blank">2020 extended xeno-canto data</a></li> <li><a href="https://www.kaggle.com/datasets/mariotsaberlin/xeno-canto-extended-metadata-for-birdclef2023" target="_blank">2023 extended xeno-canto data (all files with 2023 species as primary label)</a></li> <li>BirdCLEF 2019 soundscapes (2021 species only &amp; nocall/noise)</li> <li><a href="https://dcase.community/challenge2018/task-bird-audio-detection" target="_blank">DCASE 2018 Bird Audio Detection Task (nocall/noise)</a></li> <li><a href="https://www.kaggle.com/datasets/theoviel/bird-backgrounds" target="_blank">Some nocall/noise files from datasets of previous competitions/solutions</a></li> </ul> <h3>Data preparation</h3> <ul> <li>Convert files to 32 kHz (if necessary)</li> <li>Convert extended (downloaded) xeno-canto files to FLAC</li> <li>Add duration information for each file to dataset metadata </li> <li>Add first 10 seconds interval of all xeno-canto files to training set</li> <li>Split training set into 8 folds (but mostly only 3 folds were used)</li> </ul> <h3>Model input</h3> <ul> <li>Log Mel spectrogram of 5 second audio chunks (n_fft= 2048, hop_length=512, n_mels=128, fmin=40, fmax=15000, power=2.0, top_db=100)</li> <li>Normalized to 0…255</li> <li>Converted to 3 channel RGB image </li> </ul> <h3>Model backbone/encoder architectures (from <a href="https://github.com/huggingface/pytorch-image-models" target="_blank">timm</a>)</h3> <ul> <li>tf_efficientnet_b0_ns</li> <li>tf_efficientnetv2_s_in21k</li> </ul> <p>I also tried resnet50, resnet152, tf_efficientnet_b2_ns, tf_efficientnet_b3_ns, tf_efficientnet_b4_ns, efficientformer_l3, tf_efficientnetv2_m_in21k, densenet121 and eca_nfnet_l0 but none of those were included in inference ensemble because in my case tradeoff between performance and inference time was not as good as for EffNetB0 or EffNetV2s.</p> <p>All models used pretrained ImageNet weights and served as feature 
extractor combined with a custom classification head. As classifier I used a modified SED head with attention on frequency bands instead of time frames. The intuition behind this is, that species in soundscapes often occupy different frequency bands. In original SED architecture, feature maps representing frequency bands are aggregated via mean pooling and attention is applied on features representing time frames. If attention is instead applied on frequency bands it can help to distinguish species vocalizing at the same time but with different pitch. The modification can be achieved simply by rotating the Mel spectrogram by 90 degrees before feeding it to the original SED network.</p> <h3>Data augmentation (esp. to deal with weak/noisy labels and domain shift between train/test set)</h3> <ul> <li>Select 5s audio chunk at random position within file:<ul> <li>Without any weighting</li> <li>Weighted by signal energy (RMS)</li> <li>Weighted by primary class probability (using info from pseudo labeling)</li></ul></li> <li>Add hard/soft pseudo labels of up to 8 bird species ranked by probability in selected chunk</li> <li>Random cyclic shift</li> <li>Filter with random transfer function</li> <li>Mixup in time domain via adding chunks of same species, random species and nocall/noise</li> <li>Random gain of signal amplitude of chunks before mix</li> <li>Random gain of mix</li> <li>Pitch shift and time stretch (local &amp; global in time and frequency domain)</li> <li>Gaussian/pink/brown noise</li> <li>Short noise bursts </li> <li>Reverb (see below)</li> <li>Different interpolation filters for spectrogram resizing </li> <li>Color jitter (brightness, contrast, saturation, hue)</li> </ul> <p>In soundscapes, birds are often recorded from far away, resulting in weaker sounds with more reverb and attenuated high frequencies (compared to most Xeno-canto files where sounds are usually much cleaner because the microphone is targeted directly at the bird). To account for this difference between training and test data (domain shift), I added reverb to the training files using impulse responses, recorded from the Valhalla Vintage Verb audio effect plugin. During training, I randomly selected impulse responses and convolved them with the audio signal with a 20% chance, using a dry/wet mix control ranging from 0.2 (almost dry signal) to 1.0 (only reverb).</p> <p>I didn’t use pretraining followed by finetuning, instead I trained on all 2021 &amp; 2023 species + nocall (659 classes). Background species were included with target value 0.3. For inference, predictions were filtered to the 2023 species (264 classes).</p> <h3>Speed up inference and deal with submission time limit</h3> <p>Due to variations in hardware and CPUs used to run inference notebooks, the number of models that could be ensembled varied. To prevent submission timeouts, I set a timer in the notebook to ensure completion within the 2-hour limit. If the timer reached approximately 118 minutes, inferencing was stopped and results were collected for models and file parts predicted up to that point. Results for unfinished models/file parts were masked before averaging predictions. Using this method, I couldn’t determine the exact number of models that could be ensembled. In early submissions, I could only ensemble 3 models without risking timeouts. Later, I prioritized inference speed over model diversity by using models with the same input (no variation in FFT size, number of Mel bands etc.). 
Now I could precalculate and save Mel spectrogram images to RAM for all test files in advance and reuse those for all models. I also converted models to TorchScript. With these optimizations, I could ensemble at least 7 models, depending on architecture (e.g. 4x EfficientNetB0 + 3x EfficientNetV2s) without setting a timer.</p> <p>My best single model used an EfficientNetV2s and scored 0.83386 on public leaderboard (0.74104 on private LB). The best single model with highest score on private leaderboard used a ResNet50 backbone (0.7482 private LB / 0.83288 public LB). My best ensemble on private LB (0.76365) was a mix of 8 models (5x EfficientNetB0 + 3x EfficientNetV2s) with simple mean averaging of single model predictions.</p> <h3>Some things I tried but gave up on because I couldn’t get them to work well enough</h3> <ul> <li><a href="https://arxiv.org/abs/2203.05482" target="_blank">Model soup</a></li> <li>MultiLabelSoftMarginLoss (instead of BCEWithLogitsLoss)</li> <li>Knowledge Distillation</li> <li>Finetuning using only 2023 species data</li> <li>Converting models to ONNX or openvino format (speed up was only achieved for small batch sizes)</li> <li>Any postprocessing (e.g. amplify probabilities of detected species in neighboring windows or entire file)</li> </ul> <h3>Citations</h3> <ul> <li><a href="https://arxiv.org/abs/1912.10211" target="_blank">Kong, Qiuqiang, Yin Cao, Turab Iqbal, Yuxuan Wang, Wenwu Wang, and Mark D. Plumbley. "PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition." arXiv preprint arXiv:1912.10211 (2019).</a></li> <li><a href="https://github.com/qiuqiangkong/audioset_tagging_cnn/" target="_blank">Code for PANNs paper</a></li> <li><a href="https://dcase.community/documents/workshop2019/proceedings/DCASE2019Workshop_Adavanne_45.pdf" target="_blank">S. Adavanne, H. Fayek &amp; V. Tourbabin, "Sound Event Classification and Detection with Weakly Labeled Data", Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), pages 15–19, New York University, NY, USA, Oct. 2019</a></li> <li><a href="https://www.kaggle.com/code/hidehisaarai1213/introduction-to-sound-event-detection/notebook" target="_blank">Introduction to Sound Event Detection by Hidehisa Arai</a></li> <li><a href="http://ceur-ws.org/Vol-2380/paper_86.pdf" target="_blank">Lasseck M (2019) Bird Species Identification in Soundscapes. In: CEUR Workshop Proceedings.</a></li> <li><a href="https://xeno-canto.org/" target="_blank">https://xeno-canto.org/</a></li> <li><a href="https://github.com/huggingface/pytorch-image-models" target="_blank">timm (PyTorch Image Models)</a></li> <li><a href="https://github.com/iver56/audiomentations" target="_blank">Audiomentations</a></li> </ul> <p><strong>Inference Notbook:</strong> <a href="https://www.kaggle.com/mariotsaberlin/bc23-3rd-place-solution-refactored" target="_blank">https://www.kaggle.com/mariotsaberlin/bc23-3rd-place-solution-refactored</a></p>
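<p>As a rough illustration of the frequency-band attention idea described above, the snippet below sketches an SED-style attention head that, once the spectrogram has been rotated, pools over frequency bands instead of time frames. The feature dimension, class count, and the random tensor standing in for a real CNN encoder are assumptions for illustration only, not the author's exact code.</p>
<pre><code>import torch
import torch.nn as nn

class FreqAttentionHead(nn.Module):
    """SED-style attention head; fed transposed features, it attends over frequency bands."""
    def __init__(self, in_features, num_classes):
        super().__init__()
        self.att = nn.Conv1d(in_features, num_classes, kernel_size=1)
        self.cla = nn.Conv1d(in_features, num_classes, kernel_size=1)

    def forward(self, x):                            # x: (batch, features, bands)
        att = torch.softmax(self.att(x), dim=-1)     # attention weights per frequency band
        cla = torch.sigmoid(self.cla(x))             # per-band class probabilities
        return (att * cla).sum(dim=-1)               # clip-wise output: (batch, num_classes)

mel = torch.rand(4, 3, 128, 313)           # (batch, RGB channels, mel bins, time frames)
mel_rotated = mel.transpose(2, 3)          # the 90-degree rotation: mel bins move to the last axis
encoder_out = torch.rand(4, 1280, mel_rotated.shape[-1])   # stand-in for CNN encoder features
clipwise = FreqAttentionHead(1280, 659)(encoder_out)       # 659 = 2021 + 2023 species + nocall
print(clipwise.shape)                                      # torch.Size([4, 659])
</code></pre>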
BirdCLEF 2023
4th Place Solution: Knowledge Distillation Is All You Need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you and good question!</p> <ul> <li>Pretraining time: 30-40h per model.</li> <li>Training time: 9-12h per model.</li> </ul> <p>I have published a GitHub repo for training.</p> <p>RTX 3090 has 24GB of VRAM</p> <p>Thank you and nice questions!</p> <ol> <li><p>Is there any reason why Focal Loss cannot outperform BCE? <br> I don't know why. I was just comparing the experimental results. Focal loss may be better for different data.</p></li> <li><p>Why do you prefer librosa over torchaudio?<br> Because I wanted my model inputs and KaggleModel inputs to be as consistent as possible. Firstly I was using soundfile.</p></li> <li><p>Is there any substantial postprocessing you used in inference or that you plan to use as future work?<br> I can't think of anything at the moment. A simple average of multiple models worked.</p></li> <li><p>What models are appropriate for ONNX?<br> It was possible to convert to ONNX by separating mel spectrogram and PCEN from my model. However, I got lost motivation to use it because the inference time was not significantly different from PyTorch JIT.</p></li> <li><p>How do you get started with PyTorch jit? Do you have any recommended documentation?<br> In many cases, it's pretty straightforward (it fails with unsupported layers). The official documentation is helpful, but you can convert it with a few lines of code.</p></li> </ol> <p><a href="https://pytorch.org/docs/stable/jit.html" target="_blank">https://pytorch.org/docs/stable/jit.html</a></p> <p>Thank you and congratulations on your solo gold 🎉</p> <p>Thank you and congratulations 🙌</p> <p>Thank you! I hope you are enjoying your work 👍</p> <p>Thank you 👍</p> <p>Thank you, I think we just have to come up with ideas and experiment. That's why we always have to keep learning.</p> <p>Thank you! </p> <p>My training steps are as you said. To be precise, ff1010 in 2. is used as background noise.</p> <p>PyTorch JIT reduced the inference time by about 20%. I also tried ONNX, but didn't use it because of the complexity of the inference pipeline. The PyTorch JIT was able to include MelSpectrogram and PCEN in the model and converted it, simplifying the inference code.</p> <p>First of all, thanks to Kaggle and Cornell Lab of Ornithology for hosting this interesting competition. 
And I would like to emphasize the thanks to the bird sound recordists who provided their data through xeno-canto.</p> <h3>Solution Summary</h3> <ul> <li>Knowledge Distillation is all you need.</li> <li>Adding no-call data, xeno-canto data, and background audios(Zenodo) is effective.</li> </ul> <h3>Datasets</h3> <p>Additional datasets can be found <a href="https://www.kaggle.com/datasets/atsunorifujita/birdclef-2023-additional" target="_blank">here</a>.</p> <ul> <li>Bird CLEF 2023</li> <li>Bird CLEF 2021, 2022 (for pretraining)</li> <li>ff1010bird_nocall (5,755 files for learning no call)</li> <li>xeno-canto files not included in training dataset CC-BY-NC-SA (896 files) and CC-BY-NC-ND (5,212 files).</li> <li>Zenodo dataset (background noise)</li> <li>esc50 (rain, frog)</li> <li>aicrowd2020_noise_30sec (background noise for pretraining)</li> </ul> <h3>Models</h3> <ul> <li>My models are based on <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">the BirdCLEF 2021 2nd place solution</a> using <a href="https://github.com/huggingface/pytorch-image-models" target="_blank">timm</a>.</li> <li>My solution consists of a total of 4 models, all with eca_nfnet_l0 as the backbone. Each set is slightly different.</li> </ul> <ol> <li>MelSpectrogram (sample_rate: 32000, mel_bins: 128, fmin: 20, fmax: 16000, window_size: 2048, hop_size: 512, top_db=80.0, NormalizeMelSpec) following <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243293" target="_blank">this solution</a>.<ul> <li>Public: 0.8312, Private: 0.74424</li></ul></li> <li>Balanced sampling with the above settings.<ul> <li>Public: 0.83106, Private: 0.74406</li></ul></li> <li><a href="https://github.com/daemon/pytorch-pcen" target="_blank">PCEN</a> (sample_rate: 32000, mel_bins: 128, fmin: 20, fmax: 16000, window_size: 2048, hop_size: 512).<ul> <li>Public: 0.83005, Private: 0.74134</li></ul></li> <li>MelSpectrogram (sample_rate: 32000, mel_bins: 64, fmin: 50, fmax: 14000, window_size: 1024, hop_size: 320).<ul> <li>Public: 0.83014, Private: 0.74201</li></ul></li> </ol> <h3>Training</h3> <p>Knowledge distillation of pre-computed predictions in <a href="https://www.kaggle.com/models/google/bird-vocalization-classifier/frameworks/TensorFlow2/variations/bird-vocalization-classifier/versions/1" target="_blank">Kaggle Models</a> (bird-vocalization-classifier) was the hallmark of my solution. This model cannot complete inference in less than 2h, but it is very powerful. cmAP_5 on my validation dataset was 0.9479. So I tried to extract useful information from this model.<br> <strong>The Kaggle models pre-computed predictions were created <a href="https://www.kaggle.com/code/atsunorifujita/extract-from-kaggle-models/notebook" target="_blank">here</a></strong>.<br> According to this <a href="https://arxiv.org/abs/2106.05237" target="_blank">paper</a>, they argued that efficient distillation required 1. consistent input, 2. aggressive mixup, and 3. a large number of epochs. So I did 2 and 3 because it was challenging to integrate the Kaggle model into my training pipeline. It certainly looked effective.<br> I chose only approaches that improve both CV and LB. In this competition, I just couldn't believe in one or the other.</p> <ul> <li>Using 5 StratifiedKFold. Only 1 fold did not cover all classes, so the remaining 4 were used for training.</li> <li>The evaluation metric is padded_cmap1. 
The best results were the same with padded_cmap5 in most cases.</li> <li>Use primary_label only</li> <li>no-call is represented by all 0</li> <li>loss: 0.1 * BCEWithLogitsLoss (primary_label) + 0.9 * KLDivLoss (from Kaggle model)<ul> <li>Softmax temperature=20 was best (tried 5, 10, 20, 30).</li></ul></li> <li>I used randomly sampled 20-sec clips for training. Audio that is less than 20 sec is repeated.</li> <li>epoch = 400 (Most models converge at 100-300)</li> <li>early stopping(pretraining=10, training=20)</li> <li>Optimizer: AdamW (lr: pretraining=5e-4, training=2.5e-4, wd: 1e-6)</li> <li>CosineLRScheduler(t_initial=10, warmup_t=1, cycle_limit=40, cycle_decay=1.0, lr_min=1e-7, t_in_epochs=True,)</li> <li>mixup p = 1.0 (It was better than p=0.5)<br></li> </ul> <p>I also encountered a situation where the training time was significantly longer when pretrained with past competition data. Thanks to everyone who suggested solutions.</p> <h4>Augmentation</h4> <ul> <li>OneOf ([Gain, GainTransition])</li> <li>OneOf ([AddGaussianNoise, AddGaussianSNR]</li> <li>AddShortNoises esc50 (rain, frog)</li> <li>AddBackgroundNoise from <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/394358#2179605" target="_blank">Zenodo</a>. The 60 minutes with the fewest bird calls were extracted from each dataset and divided into 30 sec (training only).</li> <li>AddBackgroundNoise from aicrowd2020_noise_30sec and ff1010bird_nocall (pretraining only).</li> <li>LowPassFilter</li> <li>PitchShift</li> </ul> <h4>Hardware</h4> <ul> <li>1 * RTX 3090</li> <li>Pretraining time: 30-40h per model. </li> <li>Training time: 9-12h per model.</li> </ul> <h3>Inference</h3> <p>Reducing the inference time is the part I was having trouble with. Thanks to <a href="https://www.kaggle.com/leonshangguan" target="_blank">@leonshangguan</a> for sharing his <a href="https://www.kaggle.com/code/leonshangguan/faster-eb0-sed-model-inference" target="_blank">effective approach</a>, I was able to use 4 models.</p> <ul> <li>Ensemble with simple averaging.</li> <li>4 models using PyTorch JIT. total 110min.</li> <li><strong><a href="https://www.kaggle.com/code/atsunorifujita/4th-place-solution-inference-kernel" target="_blank">Inference notebook</a></strong></li> </ul> <h3>What didn’t work</h3> <ul> <li>focal loss.</li> <li>Split a low-sample class.</li> <li>Backbone other than eca_nfnet_l0 and eca_nfnet_l1.</li> <li>Optimizers (adan, lion, ranger21, shampoo). I tried to create a custom normalize-free model but failed.</li> <li><a href="https://github.com/naver-ai/cmo" target="_blank">CMO</a> (mixup worked better when using distillation).</li> <li><a href="https://github.com/KinWaiCheuk/nnAudio/blob/master/Installation/nnAudio/features/cqt.py" target="_blank">CQT</a> (slow and degraded).</li> <li>Change to first stride(1, 1). CV was good but inference takes a long time and not a single model is completed in less than 2h.</li> <li>Pretraining Zenodo data (story before trying distillation).</li> <li>Distillation training from scratch (not use imagenet weights). It didn't converge in the amount of time I could tolerate.</li> <li>MelSpectrogram, PCEN, and CQT integrated into input channels in all combinations. 
There was no synergistic effect.</li> </ul> <h3>Ablation study</h3> <table> <thead> <tr> <th>Name</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>BaseModel</td> <td>0.80603</td> <td>0.70782</td> </tr> <tr> <td>BaseModel + Knowledge Distillation</td> <td>0.82073</td> <td>0.72752</td> </tr> <tr> <td>BaseModel + Knowledge Distillation + Adding xeno-canto</td> <td>0.82905</td> <td>0.74038</td> </tr> <tr> <td>BaseModel + Knowledge Distillation + Adding xeno-canto + Pretraining</td> <td>0.8312</td> <td>0.74424</td> </tr> <tr> <td>BaseModel + Knowledge Distillation + Adding xeno-canto + Pretraining + Ensemble (4 models)</td> <td>0.84019</td> <td>0.75688</td> </tr> </tbody> </table> <h3>Acknowledgments</h3> <p>My solution builds on contributions from participants in this and past competitions, birding enthusiasts, and the work of the machine learning community. thank you very much 🙏 <br> <strong>Training Code</strong>: <a href="https://github.com/AtsunoriFujita/BirdCLEF-2023-Identify-bird-calls-in-soundscapes" target="_blank">https://github.com/AtsunoriFujita/BirdCLEF-2023-Identify-bird-calls-in-soundscapes</a></p> <p>Absolutely! I'm preparing for it.</p>
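<p>Returning to the distillation objective described above (0.1 * BCEWithLogits on the primary labels plus 0.9 * KLDivLoss against the teacher's temperature-softened outputs, T = 20), here is a minimal sketch. The tensor shapes are illustrative, the teacher logits would come from the precomputed Kaggle-model predictions, and the T^2 rescaling of the soft term follows the usual Hinton-style convention rather than anything confirmed in the write-up.</p>
<pre><code>import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_targets, T=20.0, alpha=0.9):
    # Hard term: BCEWithLogits against the multi-hot primary labels (no-call = all zeros).
    hard = F.binary_cross_entropy_with_logits(student_logits, hard_targets)
    # Soft term: KL divergence between temperature-softened student and teacher distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)        # conventional T^2 rescaling; an assumption, not confirmed by the write-up
    return (1.0 - alpha) * hard + alpha * soft

student = torch.randn(8, 264)      # student logits for a batch of 8 clips over 264 species
teacher = torch.randn(8, 264)      # precomputed teacher logits (illustrative random values)
targets = torch.zeros(8, 264)
targets[torch.arange(8), torch.randint(0, 264, (8,))] = 1.0   # one primary label per clip
print(distillation_loss(student, teacher, targets).item())
</code></pre>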
BirdCLEF 2023
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi <a href="https://www.kaggle.com/janhuus" target="_blank">@janhuus</a> I've uploaded the training code to github, hope it's still relevant to you</p> <p>First of all, thanks to the Cornell Lab of Ornithology and the Kaggle Team for hosting this competition. It was a great opportunity to learn something new.</p> <p>In this post, I want to present a summary of my solution.</p> <h3>Datasets</h3> <ul> <li>2023/2022/2021 competition data</li> <li>Additional Xeno-Canto data containing 2023 comp. species in foreground and background</li> <li>ESC50 noise</li> <li>No-call noise from 2021 competition data</li> </ul> <h3>Models</h3> <p>I used SED architecture (same as in <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a> 4th place <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243293" target="_blank">solution</a>) with following backbones:</p> <ul> <li>tf_efficientnet_b1_ns</li> <li>tf_efficientnet_b2_ns</li> <li>tf_efficientnet_b3_ns</li> <li>tf_efficientnetv2_s_in21k</li> </ul> <h3>Training</h3> <p>I trained all models in two steps:</p> <ul> <li>Pretrain with 2022/2021 data. I used only white noise (p=0.5) for this step</li> <li>Finetune on 2023 data with the following augmentations:<ul> <li>For waveform - Mixup (p=1) and OneOf([White noise, pink noise, brown noise, noise injection, esc50 noise, no-call noise]) (p=0.5)</li> <li>For spectrogram - Two time masks (p=0.5 each) and one freq mask (p=0.5)</li></ul></li> </ul> <p>Training details:</p> <ul> <li>All models were trained on 5-sec clips</li> <li>4-fold stratified CV split</li> <li>I used both primary and secondary labels</li> <li>BCEWithLogitsLoss with weight for each sample based on the rating</li> <li>AdamW - 5e-4 lr, 1e-3 weight decay for most of the models</li> <li>CosineLRScheduler with default parameters</li> <li>40 epochs - the best score was almost always in the last epoch, so in addition to 4 folds, I also trained the 5th model using all available data. This full-fit version was consistently better than one-fold by 0.002-0.003 public and private LB.</li> <li>Some models were finetuned on soft/hard pseudolabels. </li> </ul> <p>My best model was trained with a combination of actual labels for competition data and hard pseudo labels for XC data. It has 0.72714/0.81836 private/public LB and 10 minutes submission time. </p> <h3>Inference</h3> <p>The probabilities from 15 models (folds) were averaged. 
(6 of these models share the same first three layers of the backbone, so counting them as 15 separate models is not entirely fair.)</p> <p>I've also used ONNX Runtime and ThreadPoolExecutor - thanks to <a href="https://www.kaggle.com/leonshangguan" target="_blank">@leonshangguan</a> for sharing this <a href="https://www.kaggle.com/code/leonshangguan/faster-eb0-sed-model-inference" target="_blank">notebook</a>.</p> <p>Inference kernel: <a href="https://www.kaggle.com/evgeniimaslov2/birdclef-5th-place-code" target="_blank">https://www.kaggle.com/evgeniimaslov2/birdclef-5th-place-code</a><br> Github: <a href="https://github.com/yevmaslov/birdclef-2023-5th-place-solution" target="_blank">https://github.com/yevmaslov/birdclef-2023-5th-place-solution</a></p> <p>Hi! Sure, I'll update this thread with a link once I've finished cleaning up the code.</p> <p>Hi! I trained for all 40 epochs.</p>
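<p>Returning to the training detail above ("BCEWithLogitsLoss with weight for each sample based on the rating"), here is a hedged sketch of one way such per-sample weighting could look; the mapping from recording rating to weight is a made-up assumption for illustration, not the author's implementation.</p>
<pre><code>import torch
import torch.nn.functional as F

def rating_weighted_bce(logits, targets, ratings):
    # Assumed mapping: ratings in [0, 5] become per-sample weights in [0.5, 1.0].
    weights = 0.5 + 0.5 * ratings.clamp(0, 5) / 5.0                    # (batch,)
    per_sample = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none"
    ).mean(dim=1)                                                      # (batch,)
    return (weights * per_sample).sum() / weights.sum()

logits = torch.randn(16, 264)                     # a batch of 16 five-second clips, 264 species
targets = torch.randint(0, 2, (16, 264)).float()  # primary + secondary labels as multi-hot vectors
ratings = torch.randint(0, 6, (16,)).float()      # recording quality ratings
print(rating_weighted_bce(logits, targets, ratings).item())
</code></pre>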
BirdCLEF 2023
6th place solution: BirdNET embedding + CNN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I still don't know how to solve these problems efficiently. In my case, I carefully examine the data, think of hypotheses that might solve the problem, and experiment while gradually improving the model hundreds of times. It's through this process of trial and error that a good solution is found, albeit very rarely.</p> <p>The sample rate has not been changed. The sample rate was left at its default, which resamples to 48000, because I thought that changing and loading the sample rate to 32000 would not be suitable for BirdNET's input data.</p> <p>Thank you to the host and Kaggle for organizing the competition.</p> <p>My solution is a combination of the embedding vectors of <a href="https://github.com/kahst/BirdNET-Analyzer/tree/d1f5a9c015d4419277cbb285e89d3f843a6bab49" target="_blank">BirdNET-Analyzer V2.2</a> and the CNN from the <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">BirdCLEF 2021 2nd place solution</a>.</p> <h2>Model (BirdNET embedding + CNN)</h2> <p>In order to utilize the features of BirdNET, the embedding of BirdNET is concatenated to the output of the CNN of the <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">BirdCLEF 2021 2nd place solution</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1694617%2Fe7dcc6f87186d43df680592acb3f91d7%2Fmy_solution.png?generation=1684975539169704&amp;alt=media" alt=""></p> <h3>BirdNET embedding</h3> <p>The embedding vectors of BirdNET were created with <a href="https://github.com/kahst/BirdNET-Analyzer/blob/d1f5a9c015d4419277cbb285e89d3f843a6bab49/embeddings.py" target="_blank">BirdNET V2.2 embeddings.py</a>. I modified <a href="https://github.com/kahst/BirdNET-Analyzer/blob/d1f5a9c015d4419277cbb285e89d3f843a6bab49/audio.py#L7" target="_blank">BirdNET V2.2 audio.py</a> so that if the length of the audio data is shorter than BirdNET's sample rate (48000), the data is padded to output at least 1 second of embedding vector. Using V2.2 rather than the latest version BirdNET V2.3, slightly improved cv.</p> <h3>CNN</h3> <p>The backbone of the CNN from the BirdCLEF 2021 2nd place solution used timm's <code>eca_nfnet_l1</code> and <code>seresnext26t_32x4d</code>.</p> <h2>Training</h2> <p>After pretraining with data from BirdCLEF 2021 + BirdCLEF 2022, I train the CNN and other linear layers with data from BirdCLEF 2023. BirdNET is not trained.</p> <p>The input for training is data for 30 seconds. 
Since BirdNET outputs embedding vectors for 3 seconds of data, I averaged each of the embedding vectors for 30 seconds.</p> <p>In most experiments, cv was highest in the final epoch, so I included all data in the training set.</p> <p>The main training parameters are as follows:</p> <ul> <li><p>loss: BCEWithLogitsLoss</p></li> <li><p>MelSpectrogram</p> <ul> <li>sample_rate: 32000, window_size: 1024, hop_size: 320, fmin: 0, fmax: 14000, mel_bins: 128, power: 2, top_db=None</li></ul></li> <li><p>labels: primary label=0.9995, secondary label=0.4 or 0.5 (<a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/327193" target="_blank">BirdCLEF 2022 3rd place solution</a>), label smoothing=0.01</p></li> <li><p>cv: primary_label StratifiedKFold(n_splits=5)</p></li> <li><p>optimizer: AdamW(weight_decay=1e-4)</p></li> <li><p>scheduler: warmup 0-3epoch(lr=3e-6-&gt;3e-4) + Cosine Annealing 3-70epoch(lr=3e-4-&gt;3e-6)</p></li> </ul> <h1>Augmentation</h1> <p>As in the previous two competitions, augmentation was important. I combined the following augmentations:</p> <ul> <li>audiomentations.Shift</li> <li>Mixup (<a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">BirdCLEF 2021 2nd place solution</a>)</li> <li>Gaussian Noise</li> <li>random lowpass filter</li> <li>random power (<a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243351" target="_blank">BirdCLEF 2021 5th place solution</a>)</li> <li>torchaudio.transforms.FrequencyMasking</li> <li>torchaudio.transforms.TimeMasking</li> </ul> <h1>Oversampling</h1> <p>The following Oversampling was performed, but I think the effect was minimal.</p> <ul> <li><p>Oversampling to have at least 20 of data for each class</p></li> <li><p>I doubled the number of training data that have two or more secondary labels because data with more than two secondary_labels have worse val_loss</p></li> <li><p>As in the <a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/327044" target="_blank">BirdCLEF 2022 5th place solution</a>, files with reverb/echo effects created for some minority class data and added to the training data</p></li> <li><p>Strong augmentation was applied to the oversampled data</p></li> </ul> <h1>Inference</h1> <p>The CNN input is given 5 seconds of data for inference, and the BirdNET input is given the first 3 seconds of data for inference.</p> <ul> <li><p>To submit within 2 hours, I converted each model to ONNX</p></li> <li><p>As in <a href="https://www.kaggle.com/code/leonshangguan/faster-eb0-sed-model-inference" target="_blank">this notebook</a>, I also used ThreadPoolExecutor to speed up inference</p></li> </ul> <h1>Ensembling</h1> <p>The outputs of 2 models with different CNN backbones (<code>eca_nfnet_l1</code>, <code>seresnext26t_32x4d</code>) simply averaged.</p> <h1>What did not work</h1> <ul> <li>add background noise (<a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">BirdCLEF 2021 2nd place solution</a>)</li> <li>postprocess (<a href="https://www.kaggle.com/competitions/birdclef-2022/discussion/326950" target="_blank">BirdCLEF 2022 2nd place solution</a>)</li> <li>Speeding up inference using OpenVINO<ul> <li>Even if the model was converted to OpenVINO, the inference time was not much different from ONNX. 
I think I just used it wrong.</li></ul></li> <li>Using MFCC as input (as in <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9495150/" target="_blank">this paper</a>)</li> <li>Using CQT as input</li> <li>Using <code>FAMILY</code> etc. from <code>eBird_Taxonomy_v2021.csv</code> for prediction (as in <a href="https://arxiv.org/pdf/2110.03209.pdf" target="_blank">this paper</a>)</li> </ul> <h1>Rough lb history</h1> <table> <thead> <tr> <th>name</th> <th>Private Score</th> <th>Public Score</th> </tr> </thead> <tbody> <tr> <td>BirdCLEF 2021 2nd place solution CNN (with BirdCLEF 2021 + BirdCLEF 2022 pretrain)</td> <td>0.71</td> <td>0.80</td> </tr> <tr> <td>BirdCLEF 2021 2nd place solution CNN + Augmentation + BirdNET embedding</td> <td>0.74</td> <td>0.82</td> </tr> <tr> <td>BirdCLEF 2021 2nd place solution CNN + Augmentation + BirdNET embedding + Oversampling + Ensembling</td> <td>0.75</td> <td>0.83</td> </tr> </tbody> </table>
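<p>To illustrate the core idea above, concatenating a frozen BirdNET embedding with the CNN features before the classifier, here is a minimal PyTorch sketch. The feature dimensions, the classification head layout, and the random tensors standing in for pooled CNN features and averaged BirdNET embeddings are assumptions for illustration, not the author's exact architecture.</p>
<pre><code>import torch
import torch.nn as nn

class BirdnetCnnFusion(nn.Module):
    """Concatenate pooled CNN features with a precomputed, frozen BirdNET embedding."""
    def __init__(self, cnn_dim=2304, birdnet_dim=320, num_classes=264):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(cnn_dim + birdnet_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(512, num_classes),
        )

    def forward(self, cnn_features, birdnet_embedding):
        fused = torch.cat([cnn_features, birdnet_embedding], dim=1)
        return self.head(fused)

cnn_features = torch.rand(4, 2304)        # stand-in for pooled CNN backbone features (dim assumed)
birdnet_embedding = torch.rand(4, 320)    # stand-in for averaged BirdNET embeddings (dim assumed)
print(BirdnetCnnFusion()(cnn_features, birdnet_embedding).shape)   # torch.Size([4, 264])
</code></pre>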
BirdCLEF 2023
763rd Place Solution for the BirdCLEF 2023 Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Context</strong></p> <ul> <li>Business context (use case) - <a href="https://www.kaggle.com/competitions/birdclef-2023/overview" target="_blank">https://www.kaggle.com/competitions/birdclef-2023/overview</a></li> <li>Data context - <a href="https://www.kaggle.com/competitions/birdclef-2023/data" target="_blank">https://www.kaggle.com/competitions/birdclef-2023/data</a><br> <strong>Overview of the Approach</strong><br> I have named this approach the Reading the Robot Mind system. Specifically, an interactive Jupyter notebook [1] is built for the purpose of training and deploying a deep learning neural network artificial intelligence (AI) [2] to automatically identify birds from recorded audio [3]. The notebook allows the user to modify parameters along the training and classification (inference) pipeline and observe the results. As with traditional observation methods, the notebook lets users view visual representations (spectrograms, etc.) of input vectors for similar and different birds [4]. In addition to traditional methods, this notebook also presents data in its original format (audio recordings of birds). This is common practice for a field researcher or subject matter expert (SME) testing a microphone and recording system [5]- they will want to listen to the recordings to see if they contain valid and sufficient information. The notebook [6] extends this intuitive and useful technique to individual neural network layers - working backwards towards a best estimate of the original input (referred to in this working note as "reading the robot mind"). 
The user can even provide just the "answer" (select a bird at the final output layer), and the reading the robot mind system will work backwards through the entire automated process and AI layers to let the SME hear a best approximation of what the AI has learned that bird sounds like.<br> <strong>Details of the Submission</strong><br> Note that due to the time and compute limitations imposed by the computing environment and contest rules provided, the notebook is divided into four public notebooks:</li> <li><a href="https://www.kaggle.com/code/pnussbaum/v15h-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v15h-birdclef2023-mindreader</a> - This notebook focuses on the Segmentation and Feature Extraction aspects of the AI solution, allowing users to make modifications and see and hear how much information is retained.</li> <li><a href="https://www.kaggle.com/code/pnussbaum/v15h-all-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v15h-all-birdclef2023-mindreader</a> - This notebook allows the user to use their final decision related to segmentation and feature extraction, and convert and save all the BirdClef2023 data into this format.</li> <li><a href="https://www.kaggle.com/code/pnussbaum/v16e-gpu-all-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v16e-gpu-all-birdclef2023-mindreader</a> - This notebook uses the final decisions noted above, and trains the entire AI for a longer period of time, achieving better accuracy, and saving the trained AI system.</li> <li><a href="https://www.kaggle.com/code/pnussbaum/v17b-all-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v17b-all-birdclef2023-mindreader</a> - This notebook brings all of this together for the sake of the contest submission and scoring.</li> </ul> <p>In the following sections, segmentation, feature extraction, quantization, AI model creation, training, and validation are discussed in detail. Also shown with formulas and examples are the aspects of the reading the robot mind system, including visualization of filters, recreation of input approximations based on outputs of intermediate and final layers of the AI system, and also the method whereby the output can be forced to an individual bird, and a best approximation of what that bird sounds like is created by the system.</p> <p><em>Segmentation, Feature Extraction, and Image Quantization Analysis</em><br> An automated segmentation algorithm is used, however, due to the rules of the BirdCLEF 2023 competition, the SME is not permitted to modify this algorithm [3]. After segmentation, the audio data is transformed into another domain (called feature extraction in this document), and finally saved as a two-dimensional grayscale image with 8-bit quantization. 
The quantization was chosen due to the power and memory constraints of the edge device that will be performing inference (bird classification) in the field.<br> The notebook allows the SME to try several Feature Extraction Algorithms and test the following:</p> <ul> <li>Do the features visually look similar for the same bird, and different for different birds?</li> <li>Is the similarity/difference enough to be able to visually classify which bird is which?</li> <li>If the feature extraction algorithm is performed in reverse to recreate the inputted audio (or an approximation thereof, due to the lossy nature of feature extraction) - is the recreated sound clear enough for the SME to identify the bird?</li> <li>If the answer is "no" to any of the above, allow fine tuning by the user</li> </ul> <p><em>Layer-wise Analysis of the AI (Convolutional, Max Pooling, and Dense Layers)</em><br> A simple sequential (not residual) convolutional AI is used, with dimensions and architecture shown in the aforementioned notebooks. This was found to yield a satisfactory accuracy measure for the purposes of demonstrating the reading the robot mind system, while still falling within the running time and memory constraints imposed by the development platform and contest rules.</p> <p><em>Visualizing Filter Patches</em><br> The notebook allows visualization of the convolutional filter patches. Although this information is more useful to the AI programmer than it is to the SME, it is presented here since it is a step in the system of reading the robot mind. The algorithms used are similar to the "expansion" method [12].</p> <p><em>Using Filter Patches to Reconstruct Approximation of Input (for Convolutional and Max Pooling layers)</em><br> Using convolutional neural networks to extract patterns from image data necessarily involves loss of information. Each convolutional layer has only a limited number of filters. Even though that limited number of filters is greater than would be needed to form an ortho-normal basis set (each filter having all 0 weights except for a weight of one at a different individual pixel), the AI is not being trained as an auto-encoder, and is instead being trained to minimize the loss when classifying birds. Also, information is being lost through the use of the "relu" activation function, which makes all negative output values a 0. Most of all, information is being lost at the max pooling layers.</p> <p><em>Working Backwards from a Bird Classification (What Does This Bird Sound Like?)</em><br> The reading the robot mind system also allows one more function that can prove useful to the SME who is helping the AI programmer improve the system. This function is the ability to specify a particular output (bird classification) and work backwards through the entire AI to recreate an approximation of the original audio input, even when no input is provided.</p> <p><em>Conclusions</em><br> The reading the robot mind system is implemented, allowing the SME to observe and qualitatively analyze the internal data flow of deep learning neural networks in a format familiar to them.<br> With this, the SME can understand where in the pipeline the most information is being discarded by the AI system, and possibly help the programmer make improvements in future systems.</p> <p><strong>Sources</strong><br> [1]<br> J. Perkel, "Why Jupyter is data scientists' computational notebook of choice," Nature, vol. 563.7732, no. (2018), pp. 145-147, 2018.<br> [2]<br> IBM, "AI vs. Machine Learning vs. Deep Learning v. 
Neural Networks: What's the Difference?," 2023. [Online]. Available: <a href="https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks" target="_blank">https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks</a>.<br> [3]<br> S. D. T. K. H. R. H. C. F. G. H. G. H. V. W. P. R. J. A. Kahl, "Overview of BirdCLEF 2023: Automated bird species identification in Eastern Africa.," Working Notes of CLEF 2023 – Conference and Labs of the Evaluation Forum, 2023.<br> [4]<br> Audibon Society, "Start using Spectrograms to Read Bird Songs and Calls," 2023. [Online]. Available: <a href="https://www.audubon.org/news/start-using-spectrograms-read-bird-songs-and-calls" target="_blank">https://www.audubon.org/news/start-using-spectrograms-read-bird-songs-and-calls</a>.<br> [5]<br> Acoustic Nature, "Best way to record birdsong: Gear guide and tips," 2020. [Online]. Available: <a href="https://www.audubon.org/news/start-using-spectrograms-read-bird-songs-and-calls" target="_blank">https://www.audubon.org/news/start-using-spectrograms-read-bird-songs-and-calls</a>. [Accessed 2023].<br> [6]<br> P. Nussbaum, "v15h BirdClef2023 Mindreader," 2023. [Online]. Available: <a href="https://www.kaggle.com/code/pnussbaum/v15h-birdclef2023-mindreader" target="_blank">https://www.kaggle.com/code/pnussbaum/v15h-birdclef2023-mindreader</a>.<br> [7]<br> C. B. L. P. S. K. H. G. B. D. D. M. J. E. C. L. T. L. R. C. M. Š. M. H. M. S. H. G. R. P. W.-P. V. H. K. T. D. I. E. P. B. H. M. A. Joly, "Overview of LifeCLEF 2023: evaluation of ai models for the identification and prediction of birds, plants, snakes and fungi," in International Conference of the Cross-Language Evaluation Forum for European Languages, 2023.<br> [8]<br> J. M. K. M. S. a. M. P. W. Wu, "Explainable AI for Early Detection of Health Changes Via Streaming Clustering," in 2022 IEEE International Conference on Fuzzy Systems, Padua, 2022.<br> [9]<br> S. P. a. S. A. S. Sutthithatip, "(Explainable) Artificial Intelligence in Aerospace Safety-Critical Systems," in IEEE Aerospace Conference, Big Sky. MT, 2022.<br> [10]<br> Librosa, "librosa.feature.melspectrogram," 2023. [Online]. Available: <a href="https://librosa.org/doc/main/generated/librosa.feature.melspectrogram.html" target="_blank">https://librosa.org/doc/main/generated/librosa.feature.melspectrogram.html</a>. [Accessed 2023].<br> [11]<br> Librosa, "librosa.feature.mfcc," 2023. [Online]. Available: <a href="https://librosa.org/doc/main/generated/librosa.feature.mfcc.html" target="_blank">https://librosa.org/doc/main/generated/librosa.feature.mfcc.html</a>. [Accessed 2023].<br> [12]<br> e. a. Voss, "Visualizing Weights," Distill, vol. 10.23915/distill.00024.007, no. 10.23915/distill.00024.007, p. 10.23915/distill.00024.007, 2021.<br> [13]<br> Google, 2023. [Online]. Available: <a href="https://distill.pub/2020/circuits/visualizing-weights/" target="_blank">https://distill.pub/2020/circuits/visualizing-weights/</a>.</p>
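<p>The "listen to what the AI has learned" idea above depends on running the feature extraction step in reverse. As a small, hedged illustration (not the notebook's actual code), librosa can approximately invert a mel spectrogram back to audio, which is what lets a subject matter expert hear how lossy a given feature representation is. The signal and parameters below are arbitrary examples.</p>
<pre><code>import numpy as np
import librosa

sr = 32000
t = np.linspace(0, 5, 5 * sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 2000 * t).astype(np.float32)   # toy 2 kHz tone standing in for a call

# Forward feature extraction: waveform to a (lossy) mel spectrogram.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)

# Approximate inverse: reconstruct a waveform the SME can listen to, to judge how much
# information the chosen feature extraction and quantization have discarded.
y_rec = librosa.feature.inverse.mel_to_audio(mel, sr=sr, n_fft=2048, hop_length=512)
print(mel.shape, y_rec.shape)
</code></pre>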
BirdCLEF 2023
8th Place Solution: Implementing Multimodal Data Augmentation Methods (last updated 05/28)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/gentlezdh" target="_blank">@gentlezdh</a> <br> Thank you for your appreciation. github<br> 6 folders are linked to commit_id of Result. For example, 4a57a98…script represents the training script for 4a57a984d1f4101b9f470095e6a6fae0a6cc5af2. Please ask again if you have any questions.</p> <p>Congratulations to all the winners!<br> Our gratitude extends to Kaggle and the Cornell Lab of Ornithology for their exceptional efforts in organizing this intriguing competition. In particular, I was able to contribute to the improvement of accuracy with highly original methods, so I was able to enjoy the competition very much.<br> Allow me to give a succinct overview of my solution. I’ll provide a more detailed update on the solution in the following days.</p> <h1><strong>Summary</strong></h1> <p>In my solution, only one type of backbone is used, and multimodal data augmentation techniques such as object detection, recommendation systems, and NLP are used to increase diversity and improve model accuracy.</p> <h2>Preprocess Pipeline Summary</h2> <p>We built a preprocessing pipeline that prevents overfitting by devising preprocessing so that the combinations that are actually possible while maintaining the diversity of the data. In particular, in preprocessing for Sound Object Detection, weighting the signal density of the spectrum made it easier to pick out the areas where the contribution of secondary labels is included. Thanks to that, the training accuracy has improved even when the value of the secondary labels is large.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6220676%2Ffc17c767400faa897953a3034352d16e%2F.drawio%20(1).png?generation=1685242284411201&amp;alt=media" alt=""></p> <h2>Training Pipeline</h2> <p>In order to grasp the features of bird calls with various periodicities, build a pipeline that first learns with a long spectrum input, then lowers the time axis direction for each epoch, and finally learns local features. Did. 
Preprocessing by SOD prevents noise-like offsets from being selected even for local spectra, so training is stable.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6220676%2F090bc781738d7cfe844801e0bcd23f6e%2F8thplacesolution-2.drawio.png?generation=1685260705814564&amp;alt=media" alt=""></p> <p>code detail: <a href="https://www.kaggle.com/code/kunihikofurugori/8th-solution-down-scaling-detail/notebook?scriptVersionId=131297676" target="_blank">https://www.kaggle.com/code/kunihikofurugori/8th-solution-down-scaling-detail/notebook?scriptVersionId=131297676</a></p> <h1><strong>MultiModal Data Augments Part(in Detailed)</strong></h1> <p>Code detail: <a href="https://www.kaggle.com/code/kunihikofurugori/8th-solution-notebook/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/8th-solution-notebook/notebook</a></p> <h3>・Time Segment DownScaling Augmentation (Global to Local)</h3> <p>Regarding the mini-batch method represented by the 2nd place solution of birdclef2021, we trained to learn local features while pre-learning global features by training to evolve the number of mini-batches from 15 to 1 at each epoch.</p> <p>By first learning the global features and then learning the local features, it becomes possible to predict the local features after understanding the global features.</p> <h3>・Weak to Strong Preprocessing by YOLOv8 (Sound Object Detection)</h3> <p>I use the data pointed out by the competition host to train the yolov8 object detection model, and select an offset that will increase the number of object detections as much as possible.<br> Also, by minimizing the bounding box and reliability-weighted IoU for the frequency masking, Frequency Masking effectively hides noise.</p> <h4>Sample Inference</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6220676%2F058db543cb9d88e54a699785bd32c3b2%2F__results___3_2.png?generation=1685058390267527&amp;alt=media" alt="Inferece Sample Sound Object Detection"></p> <p>Reference Notebook<br> [1]<a href="https://www.kaggle.com/code/kunihikofurugori/sound-object-detection-prepare-annot-data/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/sound-object-detection-prepare-annot-data/notebook</a><br> [2]<a href="https://www.kaggle.com/code/kunihikofurugori/birdclef2023-soundobjectdetectiontrain/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/birdclef2023-soundobjectdetectiontrain/notebook</a><br> [3]<a href="https://www.kaggle.com/code/kunihikofurugori/sound-object-detection-batch-predict/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/sound-object-detection-batch-predict/notebook</a><br> [4]<a href="https://www.kaggle.com/code/kunihikofurugori/birdclef2023-soundobjectdetectionanalysis/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/birdclef2023-soundobjectdetectionanalysis/notebook</a></p> <h2>・Various Mixup Strategy</h2> <p>By using various mix-up methods, we tried to make it impossible to choose unnatural pairs. 
Details of the method are described below.</p> <h3>・Geometrical Mixup</h3> <p>Assuming that geographically close samples should have similar background noise, we created a model that mixes up geographically close samples without learning the background noise.</p> <p>If the distance matrix is properly constructed, the amount of calculation will be O(n^2), so the code is written so that the parts that match up to the second decimal place of the longitude and latitude are treated as the same group.</p> <h3>・Resonance Mixup (Inspired by collaborative filtering)</h3> <p>Choosing a pair that resonates easily as a mixup destination is important for creating a more natural sound. Implemented a mixup that preferentially selects bird samples that are likely to resonate based on the collaborative filtering idea of the recommender system.</p> <h3>・Inner mixup for geometrical clustering space</h3> <p>Groups that are geographically close and have the same label do not change the label characteristics even if they share inputs and mixup. Therefore, we clustered such groups and made it possible to perform Inputmixup freely.</p> <h1><strong>Models</strong></h1> <p>My models are based on the BirdCLEF 2021 2nd place solution using backbone, eca_nfnet_l0.</p> <h1><strong>Dataset</strong></h1> <ol> <li>Bird CLEF 2023</li> <li>Bird CLEF 2022/2021/2020 (Pretraining)</li> <li>ff1010bird_nocall</li> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/394358" target="_blank">Zendo data (YOLOv8) </a></li> </ol> <h1><strong>CV setup</strong></h1> <p>I thought it would be difficult to create a multifold model and check the LB due to resource limitations, so I did a simple test-train split instead of out-of-fold.</p> <p>For the test data, we built validation by splitting 30 seconds into 5 second intervals and bootstrap sampling to the same size as the LB dataset size (8400 samples).</p> <p>Furthermore, we found that using only the data with the primary label showed behavior similar to LB, so we excluded the data with the secondary label from the validation and calculated.</p> <p>Inference Kernel: <a href="https://www.kaggle.com/code/kunihikofurugori/8th-place-solution-inference-kernel" target="_blank">https://www.kaggle.com/code/kunihikofurugori/8th-place-solution-inference-kernel</a><br> Github link: <a href="https://github.com/furu-kaggle/birdclef2023" target="_blank">https://github.com/furu-kaggle/birdclef2023</a></p> <h1>Result</h1> <table> <thead> <tr> <th>commit_id</th> <th>single inference kernel</th> <th>LB</th> <th>PB</th> <th>tag</th> </tr> </thead> <tbody> <tr> <td>4a57a984d1f4101b9f470095e6a6fae0a6cc5af2</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=130154067" target="_blank">link</a></td> <td>0.82628</td> <td><strong>0.73777</strong></td> <td>ecanfnet_nmel128_best1</td> </tr> <tr> <td>55557aea59976c5befc11cbcd077f22c1d8289c0</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=129843959" target="_blank">link</a></td> <td>0.82475</td> <td>0.73301</td> <td>ecanfnet_nmel128_best3</td> </tr> <tr> <td>ccc435294a9f5f9989f5865def93da0817c3b030</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=129843959" target="_blank">link</a></td> <td><strong>0.82786</strong></td> <td>0.73341</td> <td>ecanfnet_nmel128_best2</td> </tr> <tr> <td>c9e077c172368c31c2d1974e0288554ebd014b2f</td> <td><a 
href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=130288082" target="_blank">link</a></td> <td>0.82279</td> <td>0.72918</td> <td>ecanfnet_nmel64_best2</td> </tr> <tr> <td>a3791c561544adcc37507d5c27ec98e9d1c603a7</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=129584055" target="_blank">link</a></td> <td>0.82172</td> <td>0.73223</td> <td>ecanfnet_nmel64_best1</td> </tr> <tr> <td>ea70ecc95de1625212465b8e0148bc996e0237f9</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=129564172" target="_blank">link</a></td> <td>0.82149</td> <td>0.72850</td> <td>ecanfnet_nmel64_best3</td> </tr> <tr> <td>ecanfnet_nmel128_best1~3(300 stride pred) + ecanfnet_nmel64_best1~3 ensemble</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/8th-place-solution-inference-kernel?scriptVersionId=130834386" target="_blank">link</a></td> <td><strong>0.83616</strong></td> <td><strong>0.75285</strong></td> <td>best_submission</td> </tr> </tbody> </table>
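<p>As a hedged sketch of the geographical grouping behind the Geometrical Mixup described above (treating recordings whose latitude and longitude agree to two decimal places as one group and drawing mixup partners from within that group), here is one possible pandas implementation; the dataframe and column names are illustrative assumptions.</p>
<pre><code>import numpy as np
import pandas as pd

# Toy metadata standing in for the competition metadata (column names assumed).
df = pd.DataFrame({
    "filename": [f"XC{i}.ogg" for i in range(8)],
    "latitude": [0.123, 0.124, 0.129, 10.500, 10.501, 10.510, -3.200, -3.210],
    "longitude": [36.810, 36.812, 36.800, 39.200, 39.201, 39.210, 35.700, 35.710],
})

# Coordinates that agree to two decimal places are treated as one geographic group,
# which avoids building an O(n^2) distance matrix.
df["geo_group"] = df["latitude"].round(2).astype(str) + "_" + df["longitude"].round(2).astype(str)

def sample_mixup_partner(row, frame, rng):
    # Prefer a mixup partner from the same geographic group; fall back to any other clip.
    pool = frame[frame["geo_group"].eq(row["geo_group"])]
    pool = pool[pool["filename"].ne(row["filename"])]
    if len(pool) == 0:
        pool = frame[frame["filename"].ne(row["filename"])]
    return pool.sample(n=1, random_state=rng)["filename"].iloc[0]

rng = np.random.RandomState(0)
partners = [sample_mixup_partner(row, df, rng) for _, row in df.iterrows()]
print(list(zip(df["filename"], partners)))
</code></pre>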
BirdCLEF 2023
9th Place Solution: 7 CNN Models Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for a very late reply, I somehow missed this comment.</p> <p><a href="https://www.kaggle.com/code/harshitsheoran/openvino-opt/notebook" target="_blank">https://www.kaggle.com/code/harshitsheoran/openvino-opt/notebook</a></p> <p>I think the issue was that ScaledStdConv2d was using batch norm and the default was falling back to training=True, which was causing an error, so I rewrote the function and replaced those layers.</p> <p>Thanks to the organizers for hosting BirdCLEF 2023 again this year and giving us a learning experience.</p> <p>Heartfelt congratulations to the winners and to everyone who got to learn from the experience.</p> <p>I would also like to express my gratitude to my talented teammates <a href="https://www.kaggle.com/ivanaerlic" target="_blank">@ivanaerlic</a> and <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a>.</p> <h1>Training</h1> <p>We used efficientnet_b0, eca_nfnet_l0 and convnext_tiny architectures in our final submission.</p> <p>The training was done in multiple rounds using datasets from all previous years. All models were trained on 5s clips.</p> <p>For eca_nfnet_l0 and convnext_tiny:<br> Hop size 320, n_mels 64, image_size = 64x501<br> For efficientnet_b0:<br> Hop size 320, n_mels 64, image_size (bilinear image size increase) = 128x1002<br> (a minimal sketch of this spectrogram front-end is given at the end of this post)</p> <p>The training was done in the following rounds:</p> <p><strong>Round 1:</strong></p> <p>Hop size 512, n_mels 64, image_size = 64x313<br> (Bilinear image size increase) = 128x636</p> <p>Models: Eca nfnet l0 (128x636), convnext tiny (128x636).</p> <p>Training data: 2022 &amp; 2023</p> <p>Augmentations: HFlip, Random Cutout, Pink Noise, Gaussian Noise, Noise Injection, Random Volume</p> <p>Type of Training: Competition Labels</p> <p>CV: ~.79 (Low)</p> <p><strong>Round 2:</strong></p> <p>Hop size 320, n_mels 64, image_size = 64x501</p> <p>Models: Efficientnet B0 (128x1002 Bilinear increase), Eca nfnet l0 (64x501), convnext tiny (64x501).</p> <p>Augmentations: HFlip, Random Cutout, Pink Noise, Gaussian Noise, Noise Injection, Random Volume</p> <p>Training data: 2022 &amp; 2023</p> <p>Type of Training: Distillation</p> <p>CV: ~.79 (Slight increase)</p> <p><strong>Round 3:</strong></p> <p>Hop size 320, n_mels 64, image_size = 64x501</p> <p>Models: Efficientnet B0 (128x1002 Bilinear increase), Eca nfnet l0 (64x501), convnext tiny (64x501).</p> <p>Training data: 2021 &amp; 2022 &amp; 2023</p> <p>Extra methods: AWP, SWA, Attention Head</p> <p>Augmentations: HFlip, Random Cutout, Pixel Dropout</p> <p>Type of Training: Competition Labels</p> <p>CV: ~.8</p> <p><strong>Round 4:</strong></p> <p>Hop size 320, n_mels 64, image_size = 64x501</p> <p>Models: Efficientnet B0 (128x1002 Bilinear increase), Eca nfnet l0 (64x501), convnext tiny (64x501).</p> <p>Training data: 2023</p> <p>Augmentations: HFlip, Random Cutout, Pixel Dropout, Pink Noise, Gaussian Noise, Noise Injection, Random Volume</p> <p>Extra methods: SWA, Attention Head</p> <p>Type of Training: Pseudo Labels</p> <p>CV: ~.81 (mild leak)</p> <p>Our CV used the first 5 seconds of each clip; although it was not always consistent with the LB, it gave us a good enough idea of what we could expect.</p>
<p>Pseudo labeling improved our score a lot. We used pseudo-labels to create new secondary labels and gave them a hard probability of 0.5.</p> <p>The convnext model was first fine-tuned to increase the score on targets that have secondary labels; although this sacrificed some primary-label score, it added diversity to our ensemble.</p> <h1>Inference</h1> <p>We used OpenVINO to increase the inference speed of our models.</p> <p>We also saw that when loading one audio at a time and feeding it in batches, the first batch of inference is noticeably slower than the rest, so we changed the pipeline to load 8 audios at once, build a data loader over all 8 audios and then run the forward pass; doing this increased the inference speed by a further 20%.</p> <p>As a result we were able to submit 7 models: 2 efficientnet_b0 + 2 eca_nfnet_l0 + 3 convnext_tiny.</p> <p>We also noticed that applying sigmoid after weight averaging was more effective than applying sigmoid before averaging. In addition, we were using a softmax activation, which gave a better score than sigmoid until we reached the top of .82 on the public LB; after that, sigmoid started to work better.</p> <h1>BirdNet</h1> <p>Our best private submission turned out to be the one where we ensembled our models with BirdNet. Although it was among our top 10 public submissions, it gave us a lot of variance, and we did not select it because we were a little afraid of it being a dice roll on the private LB.</p> <h1>Post Processing</h1> <p>Looking at the last year’s competition, we tried increasing the confidence of a class by a factor of 1.5 if any prediction of that class in the audio file is more than .96 (an illustrative sketch of this rule is given at the end of this post).</p> <p>This did work on a lot of submissions and was in fact amongst our top subs, but in the end it did not play well on the private LB, and we did not select it because of the risk of overfitting the public LB.</p> <h1>What did not work</h1> <p>SED models<br> Q-Transform<br> More than 5s training<br> Rolling mean with the previous and next 5s segment</p> <p>More than 60 of our submissions hit timeout errors, and on the second-to-last day of the competition 4 out of 5 submissions timed out, whereas before that the chance of a timeout in the same notebook was only 20-30%.</p> <p>A lot of people on public .82 would get to .83 if they used BirdNet; I think it would be about a 0.005 increase on average on both public and private… at least that is what we observed.</p> <p>BirdNet models on their own do not do well, as they were not trained on all of this competition's classes; they only help when ensembled on the classes they cover.</p>
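<p>As a concrete illustration of the spectrogram settings quoted in the training rounds (hop size 320, n_mels 64, which gives a 64x501 image for a 5-second clip of 32 kHz audio, optionally upscaled bilinearly to 128x1002 for efficientnet_b0), here is a minimal sketch. The torchaudio front-end and the n_fft value are assumptions on my part; the team's exact pipeline is not published in this post.</p>
<pre><code>import torch
import torch.nn.functional as F
import torchaudio

SR = 32000  # BirdCLEF audio sample rate
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SR, n_fft=1024, hop_length=320, n_mels=64  # n_fft is an assumption
)
to_db = torchaudio.transforms.AmplitudeToDB()

def make_image(wave: torch.Tensor, upscale: bool = False) -> torch.Tensor:
    """wave: (160000,) 5 s clip -> (1, 64, 501) log-mel image, or (1, 128, 1002) if upscaled."""
    x = to_db(mel(wave)).unsqueeze(0)  # (1, 64, 501): 160000 samples / hop 320 + 1 frames
    if upscale:  # bilinear image size increase used for efficientnet_b0
        x = F.interpolate(x.unsqueeze(0), size=(128, 1002),
                          mode="bilinear", align_corners=False).squeeze(0)
    return x

# make_image(torch.randn(160000)).shape                 -> torch.Size([1, 64, 501])
# make_image(torch.randn(160000), upscale=True).shape   -> torch.Size([1, 128, 1002])
</code></pre>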
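<p>For completeness, here is an illustrative sketch of the post-processing rule described above (scale a class by 1.5 across a whole audio file whenever any of its 5-second windows predicts that class above 0.96). The array layout is an assumption, and note that the team ultimately did not select this submission because of the overfitting risk.</p>
<pre><code>import numpy as np

def boost_confident_classes(preds: np.ndarray, thr: float = 0.96, factor: float = 1.5) -> np.ndarray:
    """preds: (n_windows, n_classes) probabilities for the 5 s windows of one audio file.

    If any window predicts a class above `thr`, scale that class in every window
    of the same file by `factor`. Values above 1.0 are left as-is here, which is
    harmless for a rank-based metric, but they could be clipped if preferred.
    """
    out = preds.copy()
    confident = (out > thr).any(axis=0)  # (n_classes,) mask of confidently detected classes
    out[:, confident] *= factor
    return out
</code></pre>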