Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Understanding Clouds from Satellite Images | Krazy Klassifiers - 48th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>If you could predict an empty mask for every empty mask and predict a full mask (i.e. predict every pixel, with <code>rle = '1 183750'</code>) for every non-empty mask, then your CV would be 0.686!! Therefore a perfect classifier can win without any segmentation. The following code outputs 0.686:</p>
<pre><code>train = pd.read_csv('../input/understanding_cloud_organization/train.csv')
train['pred'] = np.where(~train.EncodedPixels.isna(),'1 183750','')
train['dice'] = train.apply(lambda x: kaggle_dice(x['EncodedPixels'],x['pred']),axis=1)
print( train.dice.mean() )
</code></pre>
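<p>The <code>kaggle_dice</code> helper above is not shown in the write-up. Below is a minimal plausible implementation of the competition metric (two empty masks score 1, one empty mask scores 0), assuming it is called directly on the RLE strings as in the snippet above; it is slow but simple:</p>
<pre><code>def rle_to_pixels(rle):
    """Decode a run-length-encoded string into a set of pixel indices."""
    if not isinstance(rle, str) or rle == '':
        return set()
    s = list(map(int, rle.split()))
    pixels = set()
    for start, length in zip(s[0::2], s[1::2]):
        pixels.update(range(start, start + length))
    return pixels

def kaggle_dice(true_rle, pred_rle):
    """Competition Dice computed directly on two RLE strings."""
    t, p = rle_to_pixels(true_rle), rle_to_pixels(pred_rle)
    if len(t) == 0 and len(p) == 0:
        return 1.0
    if len(t) == 0 or len(p) == 0:
        return 0.0
    return 2 * len(t &amp; p) / (len(t) + len(p))
</code></pre>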
<h1>Classification Models</h1>
<p>I focused most of my energy on building classification models and finally achieved 78% classification validation accuracy (on 33% holdout set, i.e. 3-Fold CV) by ensembling two crazy classifiers. The first has 4 backbones that extract features from 4 different resized input images (half size, quarter size, one sixth size, and one eighth size)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fdef7ae2825082c95e50f22674c562124%2Fcls1.jpg?generation=1574169946410492&alt=media" alt=""></p>
<pre><code>base_model0 = Xception(weights='imagenet',include_top=False,input_shape=(None,None,3))
base_model1 = Xception(weights='imagenet',include_top=False,input_shape=(None,None,3))
base_model2 = Xception(weights='imagenet',include_top=False,input_shape=(None,None,3))
base_model3 = Xception(weights='imagenet',include_top=False,input_shape=(None,None,3))
x0 = base_model0.output
x0 = layers.GlobalAveragePooling2D()(x0)
x1 = base_model1.output
x1 = layers.GlobalAveragePooling2D()(x1)
x2 = base_model2.output
x2 = layers.GlobalAveragePooling2D()(x2)
x3 = base_model3.output
x3 = layers.GlobalAveragePooling2D()(x3)
x = layers.concatenate([x0,x1,x2,x3])
x = layers.Dense(4,activation='sigmoid')(x)
model = Model(inputs=(base_model0.input, base_model1.input, base_model2.input,
base_model3.input), outputs=x)
</code></pre>
<p>My second model uses masks in addition to labels and achieves 77% accuracy by itself. The label loss is backpropagated through the mask prediction. Then instead of using the outputted labels, we predict 1 or 0 for each label based on whether a mask is present or not.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F2445cef6b03266046473736d3d4a8914%2Fcls2.jpg?generation=1574170027626081&alt=media" alt=""></p>
<pre><code>model0 = Unet('resnet34', input_shape=(None,None,3), classes=4,
activation='sigmoid', encoder_freeze=True)
model0.layers[-1].name = 'out1'
x = model0.output
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(4, activation='sigmoid', name='out2')(x)
model = Model(inputs = model0.input, outputs = (model0.output,x))
model.compile(optimizer=opt, loss={'out1':loss1,'out2':loss2},
              metrics = {'out1':metric1, 'out2':metric2})
</code></pre>
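<p>The write-up says the final labels come from whether a mask is predicted rather than from the <code>out2</code> head. A small sketch of that step, assuming <code>mask_preds</code> is the sigmoid output of <code>out1</code> with shape (N, H, W, 4); the threshold and minimum pixel count here are illustrative, not the author's values:</p>
<pre><code>import numpy as np

def labels_from_masks(mask_preds, threshold=0.5, min_pixels=1):
    """A class is marked present (1) if its thresholded mask has enough pixels."""
    binary = (mask_preds &gt; threshold).astype(np.uint8)
    pixel_counts = binary.sum(axis=(1, 2))          # (N, 4) pixels per class
    return (pixel_counts &gt;= min_pixels).astype(np.uint8)
</code></pre>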
<h1>Segmentation Model</h1>
<p>My segmentation model is a collage of ideas from public kernels. Without post process, it achieves Public LB 0.650. Test time augmentation (TTAx6) increases this to LB 0.655. Using 3-Folds increases this to LB 0.660. Ensembling 7 copies with different choices for 3-Fold achieves LB 0.665. And finally removing false positives with my classifier increases this to LB 0.670. My final solution has CV 0.663 and Private LB 0.663. Here are specific details:</p>
<ul>
<li>Unet Architecture</li>
<li>EfficientnetB2 backbone</li>
<li>Train on 352x544 random crops from 384x576 size images</li>
<li>Train augmentation of flips and rotate</li>
<li>Adam Accumulate optimizer</li>
<li>Jaccard loss</li>
<li>Kaggle Dice metric, Kaggle accuracy metric</li>
<li>Reduce LR on plateau and early stopping</li>
<li>Remove masks less than 20000 pixels</li>
<li>TTA of flips and shifts</li>
<li>3-Fold CV and prediction</li>
<li>Remove false positive masks with classifier</li>
</ul>
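<p>A minimal sketch of the mask post-processing described above (removing masks under 20000 pixels and zeroing masks the classifier flags as false positives). The pixel and classifier thresholds, other than the 20000-pixel size, are assumptions:</p>
<pre><code>import numpy as np

def postprocess(prob_mask, clf_prob, pixel_threshold=0.5,
                min_size=20000, clf_threshold=0.5):
    """prob_mask: (H, W) probabilities for one class of one image,
    clf_prob: classifier probability that this class is present."""
    mask = (prob_mask &gt; pixel_threshold).astype(np.uint8)
    if mask.sum() &lt; min_size or clf_prob &lt; clf_threshold:
        mask[:] = 0   # drop small masks and classifier-flagged false positives
    return mask
</code></pre>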
<h1>Kaggle Notebook</h1>
<p>I posted a Kaggle notebook showing my segmentation model <a href="https://www.kaggle.com/cdeotte/cloud-solution-lb-0-670">here</a>. It scores LB 0.665 by itself and LB 0.670 if you ensemble it with 7 copies of itself with different initialization seeds. It loads classification predictions from my offline classifier models for false positive removal.</p>
<p>Thank you everyone for a fun and exciting competition. I learned a lot from reading everyone's discussions and posted code. Thank you Kaggle and the Max-Planck-Institute for sharing cloud data and hosting. Congratulations to all the winners.</p> |
Understanding Clouds from Satellite Images | Private LB 0.66758 solution: From single Network multi folds | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry, I took a while to write this. I was struggling with my course exam 👀 </p>
<p>Congratulations to all the participants and winners! This competition was a little bit tricky and different, especially due to the very noisy label in the dataset. I am going to share my solution which outperformed all of my other networks and ensembles. </p>
<h2>Score and Network Summary</h2>
<p><strong>Private score:</strong> 0.66758
<strong>Public score:</strong> 0.66654 (<em>Yes! The network scored higher on private!</em>)
<strong>Network backbone:</strong> InceptionResNetV2
<strong>Segmentation Model:</strong> FPN</p>
<p><strong>Train height:</strong> 384
<strong>Train width:</strong> 512
<strong>Loss:</strong> Lovasz
<strong>Optimizer:</strong> Radam
<strong>Learning rate:</strong> 1e-4
<strong>Pretrained weight:</strong> Imagenet</p>
<h3>Augmentations:</h3>
<p>I used albumentations. </p>
<p><strong>TTA:</strong>
1. HorizontalFlip, VerticalFlip
2. ShiftScaleRotate
3. Blur
4. ToGray
5. RandomBrightnessContrast, RandomGamma, RGBShift, ElasticTransform (One of them)</p>
<p><strong>Post-processing (For Prediction) Augmentation:</strong>
- HorizontalFlip, VerticalFlip, ToGray, RandomGamma, Normalize. </p>
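<p>The write-up only lists the transforms by name. A sketch of how the training-time set could be composed with albumentations; the probabilities here are assumptions, not the author's values:</p>
<pre><code>import albumentations as A

train_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(p=0.5),
    A.Blur(p=0.25),
    A.ToGray(p=0.25),
    A.OneOf([A.RandomBrightnessContrast(),
             A.RandomGamma(),
             A.RGBShift(),
             A.ElasticTransform()], p=0.5),
])

# applied per sample: augmented = train_aug(image=image, mask=mask)
</code></pre>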
<h3>Post-Processing confidence density:</h3>
<p>For all classes:
<strong>Upper</strong>: 0.6
<strong>Lower</strong>: 0.4
<strong>Area</strong>: 100
<strong>Min area</strong>: 200</p>
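<p>The write-up does not spell out exactly how the upper/lower thresholds and the two area values interact. One common reading, used in several public kernels, is sketched below; treat the rule itself as an assumption:</p>
<pre><code>import numpy as np

def confidence_postprocess(prob, upper=0.6, lower=0.4, area=100, min_area=200):
    """Predict a class only if enough pixels clear the upper threshold,
    build the mask from the softer lower threshold, and drop tiny masks."""
    if (prob &gt; upper).sum() &lt; area:
        return np.zeros_like(prob, dtype=np.uint8)
    mask = (prob &gt; lower).astype(np.uint8)
    if mask.sum() &lt; min_area:
        mask[:] = 0
    return mask
</code></pre>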
<h3>Folds and weights</h3>
<p>I have used a total of 4 folds and 4 weights from each fold. So there was a total of 16 weights from the 4 folds, and the prediction was the simple average of the 16 resulting predictions. </p>
<p><strong>There was no other ensemble or use of any extra classifier.</strong></p>
<h3>GPU</h3>
<p>I didn't have any good GPU locally, so I used Google Cloud in the last few days. Google gives $300 of free trial credits, which was good for around 120 hours of VM instance time.</p>
<p>I built a VM there with a V100 GPU. </p>
<h3>Some thoughts</h3>
<ol>
<li>I believe this network performs well mainly due to the InceptionResNetV2 backbone, because outputs from a similar setup with other backbones weren't anywhere near this score.</li>
<li>I should have trusted the validation score. This model produced the highest Kaggle dice score at validation compared to other models. </li>
</ol> |
Understanding Clouds from Satellite Images | Quick Silver - The Late Joiners' Journey | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello and congrats to all participants!
Thanks, organizers for this incredible competition!</p>
<p>We entered this competition after a good rest from Severstal competition.
Actually, I made the repo for it 12 days before the final deadline.</p>
<p>We used our pipeline from Severstal competition and changed only separate parts of it.
At first, I tried the best scoring config from the Severstal competition on this data, hoping for an "easy gold", but got a rank of around ~900.
This is my second competition where a simple 'fit_predict' could not get into the bronze zone.</p>
<p><strong>At first, here are the things that we tried, and that did not work at all:</strong></p>
<ul>
<li>Higher than 352x576 resolution,</li>
<li>Adding heavy encoders (everything larger than Densenet169/SEResNeXt50 overfitted like crazy)</li>
<li>Masking that black stripe with mean value or overlaying with a black rectangle,</li>
<li>Training on crops,</li>
<li>Separate classifier,</li>
<li>Unet decoder,</li>
<li>PSP decoder,</li>
<li>Hard color augmentations,</li>
<li>Decoder from <a href="/hengck23">@hengck23</a> (but was great to learn from!),</li>
<li>BCE Loss, Focal Loss, Dice loss or Lovasz loss (alone),</li>
<li>Symmetric CrossEntropy (<a href="https://arxiv.org/pdf/1908.06112.pdf">https://arxiv.org/pdf/1908.06112.pdf</a>)</li>
</ul>
<p><strong>Things that worked:</strong>
<a href="/spsancti">@spsancti</a>:
- Combination of Trimmed BCE and Dice loss,
- Separate classification head,
- EfficientNet-B0 and B1 encoders,
- Pseudolabeling and Knowledge distillation</p>
<p><a href="/hakuryuu">@hakuryuu</a>:
- Small networks (Resnet34, EfficientNet-B0...B2),
- Grid shuffle augmentation,
- Sum of symmetric Lovasz, Trimmed Focal, Dice and BCE losses</p>
<p><strong>We both used:</strong>
- Cosine Annealing LR schedule,
- FPN decoder,
- Hard geometric augmentations,
- Over9000 optimizer (better than RAdam on all runs)
- Threshold selection on the validation</p>
<p><strong>Libraries used:</strong>
- Catalyst and Apex
- Albumentations
- <a href="/pavel92">@pavel92</a> 's awesome segmentation models library</p>
<p>We trained single-fold models, each with a different split, to get more of a boost after merging them.</p>
<p><strong>Pseudolabelling and Knowledge distillation</strong></p>
<p>One day before the deadline, I decided to try to use pseudolabelling.
I took our most successful submit for that time (0.6575 public) and used it as hard pseudo labels. I did not select any confident predictions assuming that the source markings are noisy enough to tolerate some more noise from pseudo labels.
I pretrained the model on pseudo labels and then continued with regular training.
Immediately there was an improvement on local validation while training on pseudo labels, which quickly disappeared when I switched back to regular training. So, I trained several models with this scheme only to confirm this result.</p>
<p>This made me think that noisy corners in original markings confuse models too much, and I decided to relabel training data with the same ensemble that produced this submission.
With relabeled training data mixed with pseudo labels, the single model could reach 0.68 on local validation. I submitted it and got the first of my single models to break the 0.66 barrier on public LB. </p>
<p>These results looked too optimistic, and we suspected I had created a selection bias, so we decided to select one ensemble with models trained with pseudo labels and KD and one without.</p>
<p><strong>Ensembling</strong>
In our final submission we had 2 ensembles:
The first one was trained without pseudo labels and consisted of 10 models.
The second one was the same as first, but with 5 more models trained on KD mixed with pseudo labels. </p>
<p>Every model in ensemble was wrapped with 8xTTA:
- HorizontalFlip
- VerticalFlip
- BrightnessUp
- BrightnessDown</p>
<p>Probably, the assumption of selection bias was correct, as ensemble trained with KD and pseudo labels scored 0.65982 on private LB, which is 0.0034 less than an ensemble without them. </p>
<p><strong>Hardware</strong>
We used 2 servers with 1xV100 and 1 workstation with 2x1080Ti.
We thank VITech Lab and 3DLook for providing computing resources for this competition.</p> |
University of Liverpool - Ion Switching | 10th private (10th public) solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Check this out: <a href="https://github.com/fadel/pytorch_ema">https://github.com/fadel/pytorch_ema</a></p>
<p>We used 2 magic things in this comp:
1. The test signal was slightly shifted: we made it closer to the train distribution using cross-correlation (see the sketch below);
2. Removal of the 50Hz sine noise.</p>
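<p>A sketch of the cross-correlation idea referenced above: find the constant offset that best aligns the test signal's value histogram with the train histogram. The search grid and bin edges are assumptions, not the team's exact code:</p>
<pre><code>import numpy as np

def best_shift(train_signal, test_signal, candidates=np.arange(-0.5, 0.5, 0.001)):
    """Return the constant offset to add to the test signal so that its
    value distribution best overlaps the train distribution."""
    edges = np.linspace(-6, 10, 1601)                 # assumed signal range
    h_train, _ = np.histogram(train_signal, bins=edges, density=True)
    best, best_score = 0.0, -np.inf
    for s in candidates:
        h_test, _ = np.histogram(test_signal + s, bins=edges, density=True)
        score = float(np.dot(h_train, h_test))        # histogram overlap
        if score &gt; best_score:
            best, best_score = s, score
    return best
</code></pre>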
<p>In our view, models were not so important in this competition but Wavenet variations performed slightly better than other models;
Our zoo is 6x Wavenets (cross-entropy & focal losses), 2x CNN2d-Wavenet, 1x CNN2d, 1x LGBM.</p>
<p>Our best models (both CV & LB) are Wavenet on 4 features (cleaned signal, signal difference, signal min & max in the 22-window) & CNN2d-Wavenet. </p>
<p>We didn't use the noisy part of the training data (part of the 8th batch), and we blended fold predictions (EMA-decay helped a lot!).</p>
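<p>The pytorch_ema package linked at the top keeps an exponential moving average of the model weights during training. A minimal manual sketch of the same idea (not the package's API), with an assumed decay value:</p>
<pre><code>import copy
import torch

class WeightEMA:
    """Keep an exponential-moving-average copy of a model's weights."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()     # EMA copy used for inference
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        # called after each optimizer step
        for ema_p, p in zip(self.shadow.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
</code></pre>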
<p>All our individual models (except LGBM) score gold on the private LB, but every blend performed better.</p>
<p>My teammates will describe some parts of the solution in the comment section.</p> |
University of Liverpool - Ion Switching | 11th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="/corochann">@corochann</a> . You are right, my focus has somewhat shifted recently.</p>
<blockquote>
<p>I can't wait for my solo gold to make the comeptition master soon too :-)</p>
</blockquote>
<p>It won't be long!</p>
<p>Excellent response and that totally makes sense and would explain what I was seeing. Thanks for the response and congrats on your finish.</p>
<p>Thanks <a href="/sheriytm">@sheriytm</a>! Congrats on your silver. I've always been impressed with your contributions and consistent solo success. I can't think of anymore more deserving to become competitions master and can't wait until you do!</p>
<p>Thanks <a href="/cpmpml">@cpmpml</a> - That means a lot coming from you!</p>
<p>Congrats to everyone who participated in this competition. I want to thank Liverpool and Kaggle for putting it on. No competition is perfect but I think that despite its faults this competition was successful (at least for me) in that it challenged me to think hard and apply machine learning solutions to solve a tough problem. A little over a year and a half ago I participated in my first serious Kaggle competition. I would've never believed that eventually I could achieve a solo gold. It was a lot of work paired with a healthy dose of luck.</p>
<h2>Background (My Competition Story)</h2>
<p>This competition piqued my interest the day it was launched. It seemed to check all the boxes of what I'm interested in, namely: non-traditional data that wasn't too large or too small, a science-based problem statement, non-synchronous so I didn't have to depend on kernels for training or submitting, and an opportunity to expand my skills building neural nets. After the Bengali competition ended I started working on the problem on nights and weekends like I normally do and keeping ideas in the back of my mind.</p>
<p>Then COVID-19 cases started to show up in the US and things changed very quickly with my day job, I was furloughed, and suddenly had more time in my schedule for family and kaggle :)</p>
<p>I noticed that many well known kagglers were opting out of this competition, which usually is not a good sign. Some theorized early on that we had reached optimum solution at 0.938 - I was skeptical. Yes, the data is semi-synthetic, but knowing the data was passed through a physical amplifier I figured that signal processing would at least give a boost beyond 0.938.</p>
<p>I started modeling as I usually do with LGBM, similar to what I shared in my public kernel (<a href="https://www.kaggle.com/robikscube/ion-switching-5kfold-lgbm-tracking">https://www.kaggle.com/robikscube/ion-switching-5kfold-lgbm-tracking</a>). Being inspired by The Zoo's past solutions I wanted to keep my model light on features. After reading <a href="/cdeotte">@cdeotte</a> 's excellent notebook explaining drift it was clear de-drifting was the priority. My first attempt at removing drift was just fitting a linear and parabolic equation on the rolling median values of each batch and subtracting it from the signal. This gave me a relatively clean signal; however there was still one issue: the peaks of each channel were not lined up - specifically the 10-channel signal was not properly lined up with the peaks for the associated channels in other groups. To account for this I manually added an offset that would line up these peaks. With this "peak alignment" combined with the clean data, my LGBM model achieved 0.941 public LB - good enough for 2nd place at the time. I was thrilled but knew it was still very early in the competition. Not long after that the first version of the "drift free" data was shared and many teams jumped to 0.941.</p>
<p><img src="https://i.imgur.com/5Vcl2Bc.png" alt="0.941"></p>
<p>I experimented with different features, different CV setups, model settings and was seeing small gains. During this time I also focused on better understanding the macroF1 metric and how it could be optimized. After reading some papers and past kaggle posts I learned that MacroF1 could be improved by giving preference to lower occurring classes. I developed some code for doing this and found post processing gave around a 0.0001 to 0.0005 boost on my CV and LB score but was also quite risky because there is always a chance of over post processing and doing more harm than good.</p>
<p>That's when I decided to take my best LGBM features and use them in a Neural Network. Thanks to the wonderful notebooks I decided to code with TF2/keras instead of pytorch:</p>
<ul>
<li>Unet: <a href="https://www.kaggle.com/kmat2019/u-net-1d-cnn-with-keras">https://www.kaggle.com/kmat2019/u-net-1d-cnn-with-keras</a> Thanks- <a href="/kmat2019">@kmat2019</a></li>
<li>Wavenet: <a href="https://www.kaggle.com/siavrez/wavenet-keras">https://www.kaggle.com/siavrez/wavenet-keras</a> Thanks- <a href="/siavrez">@siavrez</a> </li>
</ul>
<p>Wavenet showed the most promise, so I started to experiment with different features and slightly modifying the structure which was enough to push me to 0.942 public LB (3rd place at the time).</p>
<p><img src="https://i.imgur.com/bx8nMZP.png" alt="0.942"></p>
<p>I experimented with pseudo labeling, target encoding, and many other things but eventually was starting to drop down in the rankings and became very discouraged. I even reached out to a friend/kaggler to see if he was interested in teaming- Luckily (for me) I made my next breakthrough before he could respond.</p>
<p>One of the experiments I tried was adding an LSTM after the wavenet in combination with target encoding on the signal rounded to 2 decimal places. This gave a big improvement on CV but I was skeptical. When I submitted I was surprised that it held up and I was the first to break 0.945! I spent the next few days trying to understand why this worked and recreating it in a more CV stable way. What I found was that the model performed better when the target encoding wasn't scaled: so while the target encoding was in the range 0-10, the signal and other features were scaled 0-1. I also realized that rounding the signal to 2 digits was essentially the same thing as binning the signal - so I switched to using <code>KBinsDiscretizer</code> to allow me to experiment with different bin sizes.</p>
<p><img src="https://i.imgur.com/d0IeCid.png" alt="0.945"></p>
<p>After many experiments I was able to push my score up to 0.946 (1st place at the time). I also found this great kernel gave me the idea to add a 50Hz and a few additional frequency features which gave my CV/LB a boost: <a href="https://www.kaggle.com/johnoliverjones/frequency-domain-filtering">https://www.kaggle.com/johnoliverjones/frequency-domain-filtering</a> - thanks <a href="/johnoliverjones">@johnoliverjones</a></p>
<p><img src="https://i.imgur.com/NZStXz1.png" alt="0.946"></p>
<p>By this point many power teams were forming with multiple GMs and strong kagglers. I was preparing myself to be overtaken - but alas I kept experimenting! I tried to keep my GPUs running experiments at all times (<code>while ps -p 12345; do sleep 60; done; python myscript.py</code> is a nifty trick to kick off a script once the previous one completes). I also used Google Colab for additional experiments. Almost all my experiments failed or were inconclusive, but two ended up improving my score to 0.947 (2nd place public):</p>
<p>1) Improving upon my "peak shift" code to find the perfect shift in peaks of each batch.</p>
<p>2) Removing specific frequencies that contained noise (namely 50Hz) from the signal.</p>
<p><img src="https://i.imgur.com/InuRxfZ.png" alt="0.947"></p>
<p>And now onto the details:</p>
<h2>Preprocessing</h2>
<p><strong>Removing bad training data</strong>
I started with drift free data as most people did. I also could visually identify that there were some outliers due to extreme noise that didn't resemble anything in the test set, so I removed that section of the training data completely:</p>
<p><code>
FILTER_TRAIN = '(time &lt;= 47.6 or time &gt; 48) and (time &lt;= 364 or time &gt; 382.4)'
train = train.query(FILTER_TRAIN)
</code>
<strong>Removing noisy frequency</strong>
I knew the 50Hz noise was from the hum that is created due to AC current in the amplifier. Having previously studied and worked in power engineering I knew that 50Hz (60Hz in the US) is only the <strong>approximate</strong> frequency of AC power. In reality it can lead or lag 50Hz due to many different factors. I wrote some code to identify where exactly the peak occurred for each batch - in some batches it was 50.01Hz, in others 49.99Hz or 50Hz. The code then automatically tuned a notch filter, with a Q value tuned with scipy to minimize the variance of the ~50Hz frequency against the surrounding frequencies' amplitudes. This allowed it to "smoothly" filter out the 50Hz frequency with minimal disturbance to the rest of the signal.</p>
<p>This notebook was helpful when I was first researching the idea: <a href="https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise">https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise</a></p>
<p>And the code for this "auto 50Hz filter":</p>
<p>```
def fix_50hz(tt, batch, Q = 250.0,
f0 = 50, fs=10000, plot=True,
write=True, domax=True,
use_cleaned=False):
"""
Q: Quality factor
f0: Frequency to be removed from signal (Hz)
"""
if use_cleaned:
sig_sample = tt.loc[tt['sbatch'] == batch]['signal_cleaned_10s'].values
else:
sig_sample = tt.loc[tt['sbatch'] == batch]['signal'].values</p>
<pre><code>tmin = tt.loc[tt['sbatch'] == batch]['time'].min()
tmax = tt.loc[tt['sbatch'] == batch]['time'].max()
chunique = tt.loc[tt['sbatch'] == batch]['open_channels'].nunique()
fft = np.fft.fft(sig_sample)
psd = np.abs(fft) ** 2
fftfreq = np.fft.fftfreq(len(psd),1/fs)
# See if 50 hz is the max amplitude
i_45_55 = (fftfreq &gt;= (f0-5)) &amp; (fftfreq &lt;= (f0+5))
i_45_55_no_50 = ((fftfreq &gt;= (f0-5)) &amp; (fftfreq &lt;= (f0+5))) &amp; (fftfreq != f0)
i_50 = fftfreq == 50
if domax:
freq_with_max_ampl = fftfreq[i_45_55][np.argmax(psd[i_45_55])]
if freq_with_max_ampl &gt; (f0-1) and freq_with_max_ampl &lt; (f0+1) and freq_with_max_ampl != f0:
print(f'Making f0 {freq_with_max_ampl}')
f0 = freq_with_max_ampl
else:
freq_with_max_ampl = fftfreq[i_45_55][np.argmin(psd[i_45_55])]
if freq_with_max_ampl &gt; (f0-1) and freq_with_max_ampl &lt; (f0+1) and freq_with_max_ampl != f0:
print(f'Making f0 {freq_with_max_ampl}')
f0 = freq_with_max_ampl
# Design notch filter to make frequency f0 similar to average
def notch_filter(Q):
b, a = signal.iirnotch(f0, Q, fs)
# Filter and plot
y = signal.filtfilt(b, a, sig_sample)
fft_ = np.fft.fft(y)
psd_ = np.abs(fft_) ** 2
fftfreq_ = np.fft.fftfreq(len(psd_),1/fs)
ivar = ((fftfreq &gt;= f0-2) &amp; (fftfreq &lt; f0-1)) | ((fftfreq &gt;= f0+2) &amp; (fftfreq &lt; f0+1))
i_f0 = fftfreq_ == f0
abs_diff = np.abs(np.mean(psd[ivar]) - psd_[i_f0])
diff_others = np.mean(psd[ivar] - psd_[ivar])
return abs_diff + np.abs(diff_others / 2)
res = minimize(notch_filter, [Q], tol=1e-10, method='Powell') #, bounds=((100, 5000),))
Q = res['x']
print(f'Using Q {Q}')
b, a = signal.iirnotch(f0, Q, fs)
# Filter and plot
y = signal.filtfilt(b, a, sig_sample)
fft_ = np.fft.fft(y)
psd_ = np.abs(fft_) ** 2
fftfreq_ = np.fft.fftfreq(len(psd_),1/fs)
if plot:
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 2))
i = abs(fftfreq) &lt; 100
ax.grid()
ax.plot(fftfreq[i], 20*np.log10(psd[i]), linewidth=1)
ax.plot(fftfreq_[i], 20*np.log10(psd_[i]), linewidth=1)
ax.set_title(f'Time: {tmin} to {tmax} - Unique Channels: {chunique} - Q: {Q}')
ax.set_xlabel('Frequency (Hz)')
ax.set_ylabel('PSD (dB)')
ax.axvline(f0, color='red', alpha=0.2)
ax.axvline(-f0, color='red', alpha=0.2)
plt.show()
if write:
# Write Cleaned Signal to dataset
tt.loc[tt['sbatch'] == batch, 'signal_cleaned_10s'] = y
return tt
</code></pre>
<p>```</p>
<p>This worked rather nicely at removing the 49-51Hz noise. I experimented running this on each batch individually, by 10 second intervals, and 1 second intervals. I also experimented with removing other "noisy" frequencies. Plots were used to validate the change (the blue area shows where the frequency was removed)</p>
<p><img src="https://i.imgur.com/zdIlxfB.png" alt="Filter Example"></p>
<p><strong>Peak Alignment</strong>
The final step of preprocessing was lining up of peaks of the distribution for each batch. To achieve this I wrote a bit of code that minimized the difference in median signal between batches by <code>open_channel</code>. In doing some research I found that these types of "peak alignment" problems are quite common in various field and I skimmed a few papers about it. <a href="https://www.researchgate.net/publication/221816197_Model-based_peak_alignment_of_metabolomic_profiling_from_comprehensive_two-dimensional_gas_chromatography_mass_spectrometry">For instance this paper discusses peak alignment for chromatography</a></p>
<p>And the code:</p>
<p>```
def apply_shift(tt, shift_group, base_groups):</p>
<pre><code>te1 = tt.query('sbatch == @shift_group')['time'].min()
te2 = tt.query('sbatch == @shift_group')['time'].max()
print(f'Running for Group {shift_group} - {te1} to {te2}')
shift_channel_freq = tt.query('sbatch == @shift_group')['open_channels'].value_counts().sort_values(ascending=False).index
base_channels = tt.query('sbatch in @base_groups')['open_channels'].unique()
for m in shift_channel_freq:
if m in base_channels:
print(f'Lining up on channel {m}')
most_freq_channel = m
break
m1 = tt.query('sbatch in @base_groups and open_channels == @most_freq_channel')['signal_fixed'].median()
m2 = tt.query('sbatch == @shift_group and open_channels == @most_freq_channel')['signal'].median()
shift_start = m1 - m2
print('Base Shift: ', shift_start)
def overlap_weighted(shift, shift_group=shift_group, base_groups=base_groups, tt=tt):
shift_data = tt.query('sbatch == @shift_group').copy()
shift_data['signal_fixed'] = shift_data['signal'] + shift
base_data = tt.query('sbatch in @base_groups').copy()
shift_times = shift_data['time'].values
common_channels = np.intersect1d(base_data['open_channels'].unique(),
shift_data['open_channels'].unique())
base_data = base_data.query('open_channels in @common_channels')
shift_data = shift_data.query('open_channels in @common_channels')
shift_medians = shift_data.groupby('open_channels')['signal_fixed'].median()
base_medians = base_data.groupby('open_channels')['signal_fixed'].median()
shift_counts = shift_data.groupby('open_channels')['signal_fixed'].count()
base_counts = base_data.groupby('open_channels')['signal_fixed'].count()
weighted_diff = ((shift_medians - base_medians) * shift_counts).mean() / shift_counts.sum()
print(f'Shift: {shift[0]:0.6f} : Weighted Diff {weighted_diff:0.20f}', end="\r", flush=True)
return np.abs(weighted_diff)
print('')
res = minimize(overlap_weighted, shift_start, method='Powell', tol=1e-2)
best_shift = res['x']
print('')
print('best_shift', best_shift)
te1 = tt.query('sbatch == @shift_group')['time'].min()
te2 = tt.query('sbatch == @shift_group')['time'].max()
tt.loc[(tt['time'] &gt;= te1) &amp; (tt['time'] &lt;= te2), 'signal_fixed'] = \
tt.loc[(tt['time'] &gt;= te1) &amp; (tt['time'] &lt;= te2)]['signal'] + best_shift
fig, ax = plt.subplots(1, 1, sharex=True, figsize=(15, 3))
tt.query('sbatch == @shift_group')['signal_fixed'].plot(kind='kde', ax=ax, title='Shift Set')
</code></pre>
<p>for i in tt.query('sbatch == @shift_group').groupby('open_channels')['signal_fixed'].median().iteritems():
ax.axvline(i[1], color='orange', ls='--')
tt.query('sbatch in @base_groups')['signal_fixed'].plot(kind='kde', ax=ax, title='Base set')
for i in tt.query('sbatch in @base_groups').groupby('open_channels')['signal_fixed'].median().iteritems():
ax.axvline(i[1], color='red', ls='--')
plt.show()
return tt</p>
<p>```
Here are two examples of the output from this code:</p>
<p><img src="https://i.imgur.com/rhXoFNf.png" alt="Peak 1"></p>
<p><img src="https://i.imgur.com/6lEbbJk.png" alt="Peak 2"></p>
<p>The one issue with the peak alignment process was that for the training set I knew the true open channels. For the test set I didn't know them, so I used my previous model's predictions. I tried using out of fold values on the training set- but it didn't work as well. My thinking was that eventually my test alignment would reach an "optimum" shift.</p>
<p>Still, I knew this left me open to the possibility of overfitting to my own predictions. I tried many other approaches for lining up the peaks- but decided that this seemed to work the best despite the risks.</p>
<p>The final result was a very clean dataset with peaks of each channel aligned.</p>
<p><img src="https://i.imgur.com/keGEZ77.png" alt="Clean 1">
<img src="https://i.imgur.com/4NiMJGh.png" alt="Clean 2"></p>
<h2>Features</h2>
<p>As I mentioned above the most important feature I found was target encoding. I used the <code>KBinsDiscretizer</code> to first divide the signal into different bins and then ran target encoding on that. I didn't scale the target encoding but did scale all other features. Other features were lags +/- 5 samples, signal^2 and the 50Hz features.</p>
<p>The following is the function I used to create target encoding. It makes use of the category-encoders package <a href="https://pypi.org/project/category-encoders/">https://pypi.org/project/category-encoders/</a></p>
<p>```
def add_target_encoding(tr_df, val_df, test_df,
features, n_bins=500,
strategy='uniform', feature='signal',
smoothing=1):
if smoothing == 1:
sm = ''
else:
sm = '_' + str(smoothing)
kbd = KBinsDiscretizer(n_bins=n_bins, encode='ordinal', strategy=strategy)
tr_df[f'{feature}_{n_bins}bins{sm}'] = kbd.fit_transform(tr_df[feature].values.reshape(-1, 1))
val_df[f'{feature}_{n_bins}bins{sm}'] = kbd.transform(val_df[feature].values.reshape(-1, 1))
test_df[f'{feature}_{n_bins}bins{sm}'] = kbd.transform(test_df[feature].values.reshape(-1, 1))</p>
<pre><code>tr_df[f'{feature}_{n_bins}bins{sm}'] = tr_df[f'{feature}_{n_bins}bins{sm}'].astype('category')
val_df[f'{feature}_{n_bins}bins{sm}'] = val_df[f'{feature}_{n_bins}bins{sm}'].astype('category')
test_df[f'{feature}_{n_bins}bins{sm}'] = test_df[f'{feature}_{n_bins}bins{sm}'].astype('category')
te = ce.target_encoder.TargetEncoder(verbose=1, smoothing=smoothing)
tr_df[f'target_encode_{feature}_{n_bins}bins{sm}'] = te.fit_transform(tr_df[f'{feature}_{n_bins}bins{sm}'],
tr_df['open_channels'])
val_df[f'target_encode_{feature}_{n_bins}bins{sm}'] = te.transform(val_df[f'{feature}_{n_bins}bins{sm}'])
test_df[f'target_encode_{feature}_{n_bins}bins{sm}'] = te.transform(test_df[f'{feature}_{n_bins}bins{sm}'])
te_col_name = f'target_encode_{feature}_{n_bins}bins{sm}'
if te_col_name not in features:
features.append(te_col_name)
return tr_df, val_df, test_df, features
</code></pre>
<p>```</p>
<p>Every so often I would run permutation importance on my models to see if any features could be dropped.</p>
<p>An example of the exact features used in one of my final models is:</p>
<p><code>
['signal', 'f_DC', 'f_50Hz', 'f_n100Hz',
'f_2xn100', 'signal_mm_scaled', 'signal_round2',
'signal_shift_pos_1', 'signal_shift_neg_1',
'signal_shift_pos_2', 'signal_shift_neg_2',
'signal_shift_pos_3', 'signal_shift_neg_3',
'signal_shift_pos_4', 'signal_shift_neg_4',
'signal_shift_pos_5', 'signal_shift_neg_5',
'signal_2',
'target_encode_signal_round2_500bins_10',
'target_encode_signal_300bins',
'target_encode_signal_600bins',
'target_encode_signal_100bins',
'target_encode_signal_1000bins']
</code></p>
<h2>Model: Wavenet+LSTM</h2>
<p>My model ended up being based off of the one found in the wavenet-keras kernel <a href="https://www.kaggle.com/siavrez/wavenet-keras">https://www.kaggle.com/siavrez/wavenet-keras</a> (thanks <a href="/siavrez">@siavrez</a>). I wanted to upvote his notebook so badly during the competition but I thought it might give people a clue as to what I was using. I've since upvoted it and wish I could upvote it 10x it was so helpful.</p>
<p>Building off of the wavenet from that notebook I added:</p>
<ul>
<li>LSTM right after the wavenet into a softmax output</li>
<li>Added GaussianNoise seemed to help</li>
<li>Tuned learning rate scheduler</li>
</ul>
<p>Things that didn't work:</p>
<ul>
<li>GRU wasn't as good as LSTM</li>
<li>GRU + LSTM and concat results</li>
<li>Change activation functions</li>
<li>Different levels of GaussianNoise</li>
<li>32, 64, 92, 128 sized LSTM (92 seemed to work best)</li>
<li>Lots of different settings to the wavenet but nothing conclusively improved.</li>
<li>Lots of different learning rate setups</li>
<li>Modeling types separately</li>
<li>Removing training data in an attempt to more closely resemble the public or private test data.</li>
</ul>
<p>I tried many, many different ideas, an example of one my final models looked like this:</p>
<p>```
inp = Input(shape = (shape_))
x = wave_block(inp, 16, 3, 12)
x = GaussianNoise(0.01)(x)
x = wave_block(x, 32, 3, 8)
x = GaussianNoise(0.01)(x)
x = wave_block(x, 64, 3, 4)
x = GaussianNoise(0.01)(x)
x = wave_block(x, 128, 3, 1)
x = LSTM(96, return_sequences=True)(x)
out = Dense(11, activation = 'softmax', dtype='float32', name='predictions')(x)
model = models.Model(inputs = inp, outputs = out)</p>
<p>```</p>
<h2>Augmentation</h2>
<p>Early on I found that flipping the signal during training to double the data size helped. I also flipped for TTA (used flipping on the test data when predicting and averaged the results). I tried adding noise to the raw signal but that only made CV and LB worse.</p>
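<p>A sketch of the flip augmentation and flip TTA described above, assuming inputs shaped (n, seq_len, n_features) and sequence labels shaped (n, seq_len):</p>
<pre><code>import numpy as np

def flip_augment(X, y):
    """Double the training set by appending time-reversed copies."""
    return (np.concatenate([X, X[:, ::-1, :]], axis=0),
            np.concatenate([y, y[:, ::-1]], axis=0))

def flip_tta(model, X):
    """Average predictions over the original and time-reversed signal."""
    pred = model.predict(X)
    pred_flipped = model.predict(X[:, ::-1, :])[:, ::-1, :]
    return (pred + pred_flipped) / 2.0
</code></pre>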
<h2>The final week- training and blending</h2>
<p>There were a lot of things that were difficult about doing a competition solo - but this part was probably the hardest. I didn't have anyone to consult with about how to make my final ensemble and blend and had to rely on my gut. In the end it worked out - but I also had a bit of luck.</p>
<p>Since I was worried about already being slightly overfit in my peak alignment I ended up taking my best 40 or so submissions and blending the raw probabilities before doing post processing. For one of my final submissions I just took a straight average across the 40 models. For the second submission I weighted each prediction by its position on the public LB. I also very closely inspected the correlation of each sub, comparing public vs private LB, and also comparing by model type. My main concern was with the 10 channel section in the private test set. I noticed that was the section that had the highest variance across submissions, and it would have a huge impact on the final score because each class is weighted equally with the macroF1 metric.</p>
<p>Example plot correlation of submission files ordered by public LB score:
<img src="https://i.imgur.com/8fBLOgr.png" alt="Corr"></p>
<h2>Post Processing</h2>
<p>For post processing I merged two ideas from other kagglers. The first was from Chris' solution writeup about<a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021"> "hacking macro recall" in the bengali competition</a>. The second was the f1 optimization class being used in notebooks. What makes this code unique is that it optimizes using the raw probabilities for each class and not the class predictions themselves.</p>
<p>The resulting code was this:</p>
<p>```
class ArgmaxF1Optimizer(object):
"""
Class for optimizing predictions using argmax
"""
def __init__(self, initial_exp=-1.2, method='nelder-mead',
verbose=True):
self.initial_exp = initial_exp
self.opt_result = None
self.method = method
self.verbose = verbose</p>
<pre><code>def apply_pp(self, X, EXP):
preds_argmax = np.argmax(X,axis=1)
s = pd.Series(preds_argmax)
vc = s.value_counts().sort_index()
df = pd.DataFrame({'a':np.arange(11),'b':np.ones(11)})
df.b = df.a.map(vc)
df.fillna(df.b.min(),inplace=True)
mat1 = np.diag(df.b.astype('float32')**EXP)
return np.argmax(X.dot(mat1), axis=1)
def apply_pp_f1_score(self, EXP, X, y):
preds_pp = self.apply_pp(X, EXP)
score = -macro_f1_score(y, preds_pp)
if self.verbose:
print(f'Exp {EXP}, score {score}')
return score
def fit(self, X, y):
loss_partial = partial(self.apply_pp_f1_score, X=X, y=y)
if self.method == 'nelder-mead':
self.opt_result = minimize(loss_partial,
self.initial_exp,
method='nelder-mead')
self.opt_exp = self.opt_result['x']
elif self.method == 'brute':
rranges = ([slice(-1, 1, 0.05)])
self.opt_result = brute(loss_partial, rranges,
full_output=True,
finish=fmin)
self.opt_exp = self.opt_result[0][0]
if self.verbose:
print(f'Optimal exp value {self.opt_exp}')
score_argmax = macro_f1_score(y, np.argmax(X, axis=1))
score_pp = - self.apply_pp_f1_score(self.opt_exp, X, y)
print(f'Score raw: {score_argmax:0.5f}')
print(f'Score post processed: {score_pp:0.5f}')
self.score_argmax = score_argmax
self.score_pp = score_pp
def predict(self, X, y=None):
if self.opt_exp is None:
print('[ERROR] Must first run fit method')
return
return self.apply_pp(X, self.opt_exp)
def fit_predict(self, X, y):
self.fit(X, y)
return self.predict(X)
</code></pre>
<p>```</p>
<p>I wanted to be extremely careful when applying post processing. I knew it helped on the public leaderboard but could never be 100% confident with how much to apply to the private. To help me decide how much post processing to apply I wrote code that:</p>
<ul>
<li>Used the OOF predictions from all of the models in the blend.</li>
<li>Simulated the exact same distributions of that in the public and private test set (matching both model type and open channel)</li>
<li>Repeated this process 100x with different seeds. The reason why I did this is because the F1 score would fluctuate +/- 0.001 depending on which samples were selected.</li>
<li>Ran my F1 optimization code on each simulation.</li>
<li>Took the average <code>EXP</code> from the 100 synthetic cases and used that to post process my final submission.</li>
<li>Note this was done separately on public and private test set as they are scored separately. </li>
</ul>
<p>The output of this simulation looked something like this:</p>
<p><code>
0 Raw public: 0.94290 : -0.14906 : 0.94304 Raw private: 0.94278 : -0.14000 : 0.94289
1 Raw public: 0.94391 : -0.01950 : 0.94390 Raw private: 0.94377 : -0.02000 : 0.94374
2 Raw public: 0.94353 : -0.13569 : 0.94378 Raw private: 0.94325 : -0.12837 : 0.94349
3 Raw public: 0.94336 : -0.02237 : 0.94338 Raw private: 0.94341 : -0.02000 : 0.94344
4 Raw public: 0.94361 : -0.10450 : 0.94381 Raw private: 0.94375 : -0.09656 : 0.94391
5 Raw public: 0.94441 : -0.09394 : 0.94453 Raw private: 0.94439 : -0.04750 : 0.94449
6 Raw public: 0.94412 : -0.10000 : 0.94427 Raw private: 0.94413 : -0.02200 : 0.94429
7 Raw public: 0.94413 : -0.06891 : 0.94418 Raw private: 0.94435 : -0.04828 : 0.94440
8 Raw public: 0.94384 : -0.13325 : 0.94410 Raw private: 0.94374 : -0.12375 : 0.94397
9 Raw public: 0.94363 : -0.01900 : 0.94365 Raw private: 0.94342 : -0.04100 : 0.94344
10 Raw public: 0.94333 : -0.04812 : 0.94341 Raw private: 0.94324 : -0.04550 : 0.94329
11 Raw public: 0.94339 : -0.09656 : 0.94354 Raw private: 0.94356 : -0.08803 : 0.94373
12 Raw public: 0.94378 : -0.09000 : 0.94385 Raw private: 0.94380 : -0.04200 : 0.94382
13 Raw public: 0.94385 : -0.09112 : 0.94403 Raw private: 0.94365 : -0.04750 : 0.94380
14 Raw public: 0.94380 : -0.09562 : 0.94403 Raw private: 0.94394 : -0.08775 : 0.94417
....
</code></p>
<p>The distribution then looked like this, and I used the mean value when applying post processing to my submission:</p>
<p><img src="https://i.imgur.com/w59k3O5.png" alt="exp selection"></p>
<h2>Things I still can't explain.</h2>
<p>There are still some lingering questions I have.</p>
<ul>
<li><strong>What are these anchor points?</strong> [-2.5002, -1.2502, -0.0002, 1.2498, 2.4998, 3.7498]. I noticed in the signal there is a spike in unique value counts just to the right of the signal distribution for each open channel. You can see them clearly in this plot. I called them "anchor points" because they almost always were labeled as the open_channel even though they were to the right of the mean value.</li>
</ul>
<p><img src="https://i.imgur.com/zAyMkMy.png" alt="spike"></p>
<p>I probably spent 2 or 3 days working on this and thought it might be helpful for lining up peaks. I still don't have a good explanation for why they exist. I even wrote an optimization function that shifted the drift batches such that when rounding to 4 decimal places it would maximize these values. It resulted in a worse LB score.</p>
<ul>
<li><strong>Where do new batches start?</strong> </li>
</ul>
<p>Batch sizes are 10 seconds or 50 seconds.. I think. I considered seconds 0-50 to be one batch until closer inspection of the rolling median value. You can see a clear shift in the median value at 10 seconds. I found the same thing in the private test data. Because I was unsure I decided to split at these points and did my "peak alignment" on these separately.</p>
<p><img src="https://i.imgur.com/oquVORO.png" alt="shift"></p>
<h2>Conclusion</h2>
<p>I ended up shaking down 9 spots in the private leaderboard to 11th place. My best single model scored 0.94547 private and had the potential for 7th place - but there was no way I would've selected it. On the other hand my best public LB score model had a private LB score of 0.94211! I never considered selecting it either.</p>
<p><img src="https://i.imgur.com/yzEletG.png" alt="selectedsub"></p>
<p>I'm sure there are things I could've done to perform better in the shakeup but I was just happy that it was good enough to stay in the gold range.</p>
<p>Depending on time and interest - I may create a notebook that runs my code end-to-end.</p>
<p>That's it. I hope you found this writeup helpful!</p>
<p>Thanks <a href="/mobassir">@mobassir</a> !</p>
<p>Thanks Siavash. Your notebook was a huge help to me and taught me a lot.</p>
<p>I'm so glad you asked! I thought maybe you had already found them.</p>
<p>edit: see the link to a notebook with the code below.</p>
<p>Thanks Chris! Great points about the discrete feature possibly allowing wavenet to identify the signal from the noise. I never thought about it that way. As for the dual GRU and/or LSTM- I tried a lot of those configurations and it never matched my Wavenet + LSTM setup. For instance this was one of my models and performed similar or worse than just a plain LSTM:</p>
<p><code>
x = wave_block(inp, 16, 3, 12)
x = GaussianNoise(0.01)(x)
x = wave_block(x, 32, 3, 8)
x = GaussianNoise(0.01)(x)
x = wave_block(x, 64, 3, 4)
x = GaussianNoise(0.01)(x)
x = wave_block(x, 128, 3, 1)
x_lstm = LSTM(92, return_sequences=True)(x)
x_gru = GRU(92, return_sequences=True)(x)
conc = Concatenate()([x_lstm, x_gru])
out = Dense(11, activation = 'softmax', dtype='float32', name='predictions')(conc)
model = models.Model(inputs = inp, outputs = out)
</code></p>
<p>So funny that we both landed on the 92 size! I wasn't sure if 128 performed equally or not, but it was close.</p>
<p>Did you happen to read my section on "anchor points" - did your team notice them? Do you have any idea why they exist? They still don't make sense to me.</p>
<p>Thanks <a href="/group16">@group16</a> - I remember reading Kumar's post about that and agree something fishy is going on every 10 seconds. The only reason why I still question it is because I found when doing "peak alignment" for batches of 10 seconds I got consistently worse CV/LB scores. The training data preferred to be shifted in the 50 second batches. Maybe I'm just overthinking it.</p>
<blockquote>
<p>Have you tried using the samples only that were close to their rounded value? These appeared to be the most easy samples to classify</p>
</blockquote>
<p>I need to think about that one- but it never crossed my mind. Are you saying that you calculated CV based only on predicted signal values that were close to their rounded values? I'm not sure I understand.</p>
<p>So I just quickly created this notebook using some of the code I created during the challenge. Let me know if you have a chance to read it and can make any sense of the anchor points. Clearly it's still bugging me!</p>
<p><a href="https://www.kaggle.com/robikscube/ion-challenge-anchor-point-mystery">https://www.kaggle.com/robikscube/ion-challenge-anchor-point-mystery</a></p>
<p>I didn't do a great job of explaining them in my post, but if you look at non-drift corrected batches, and look at the value counts of each signal value - these values jump out.</p>
<p>They don't appear in the drift areas however you can pull them out by adding the correct shift and then rounding to 4 decimal places (like the non-drift data)</p>
<p>I wrote some optimization code to maximize the occurrence of these specific signal values in the drift areas and I was able to do it.... I really should copy that code into a kernel.... it might be easier to explain.</p> |
University of Liverpool - Ion Switching | 12th solution: single model wavenet + lstm | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly thanks a lot to my teammates and the great community, to all people who keep sharing; we have all learned a lot during this competition. (I hadn't even heard of wavenet before this, nor had I imagined solving a time-series problem using a CNN)</p>
<p>Yesterday at 2am the private LB was revealed and after that I was too excited. I slept for only 2 hours this morning even though I have to work today. That's the reward for our one month's hard work: finally I got my first gold medal.</p>
<h2>Some thoughts</h2>
<ul>
<li>We were basically focusing on doing extra work on the famous 0.944 wavenet RFC kernel.</li>
<li>We found GroupKFold CV with group size 4000 was consistent with the public LB, but not perfectly consistent. Then we found that the class 10 count in the test set was very informative: we only submitted when the predicted class 10 count was over 7000, ideally about 7020 (see the sketch after this list). By doing this our CV strategy became quite consistent, which allowed us to run all kinds of experiments locally with confidence. Different batch sizes or group sizes didn't help.</li>
<li>Our final two submissions are this one (highest CV score) and a vote of three submissions (highest LB score), this submission is finally also the highest on private LB, while the vote dropped violently. This in return supports the fact that our CV strategy worked well.</li>
<li>Flipping helped, but adding gaussian noise didn't, we didn't try augmentation at test time.</li>
<li>We didn't spend time on HMM modeling. </li>
<li>Our best LGB model was 0.941 on public LB. As someone mentioned in a post that the RFC leaks because of its different CV split strategy, we tried to replace it with LGB/RFC/LSTM using the same GroupKFold, but no luck.</li>
</ul>
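<p>A sketch of the CV split and the class-10 submission check mentioned above. Here <code>train</code> and <code>sub</code> are assumed to be the 5M-row training data (sorted by time) and a candidate submission:</p>
<pre><code>import numpy as np
from sklearn.model_selection import GroupKFold

groups = np.arange(len(train)) // 4000          # contiguous groups of 4000 samples
gkf = GroupKFold(n_splits=5)
for fold, (tr_idx, va_idx) in enumerate(gkf.split(train, train['open_channels'], groups)):
    pass                                        # train one fold here

# sanity check before submitting: count predicted class 10 in the test set
n_class10 = (sub['open_channels'] == 10).sum()
print(n_class10)                                # only submit when this is over ~7000
</code></pre>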
<h2>Preprocessing</h2>
<ul>
<li>We used drift cleaned and kalman filtered signal</li>
<li>We dropped 200,000 rows (from 3,640,000 to 3,840,000) so as to remove the spike noise. That made no difference to the public LB score.</li>
</ul>
<h2>Features</h2>
<p>We tried a lot of features but few of them worked, so we decided to stick with model structure tuning, hoping the model would find useful features by itself.
Still, besides the RFC probabilities, some features worked, even if they didn't bring a big boost.
So we used these features in this model (a sketch of the lag features follows below):
- RFC Probabilities
- signal ** 2
- lags in the +/-3 range plus -11 and -15: [-15, -11, -3, -2, -1, 1, 2, 3]
All features are created after standard scaling the signal.</p>
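<p>A minimal sketch of the lag features listed above, assuming a <code>batch</code> column marking the 500k-sample batches so shifts don't leak across batch boundaries:</p>
<pre><code>import pandas as pd

def add_lag_features(df, lags=(-15, -11, -3, -2, -1, 1, 2, 3)):
    """Add shifted copies of the standard-scaled signal plus signal squared."""
    for lag in lags:
        df[f'signal_shift_{lag}'] = df.groupby('batch')['signal'].shift(lag).fillna(0)
    df['signal_sq'] = df['signal'] ** 2
    return df
</code></pre>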
<h2>Model structure (Pytorch)</h2>
<p>`class Wave_Block(nn.Module):</p>
<pre><code>def __init__(self, in_channels, out_channels, dilation_rates, kernel_size):
super(Wave_Block, self).__init__()
self.num_rates = dilation_rates
self.convs = nn.ModuleList()
self.filter_convs = nn.ModuleList()
self.gate_convs = nn.ModuleList()
self.convs.append(nn.Conv1d(in_channels, out_channels, kernel_size=1))
dilation_rates = [2 ** i for i in range(dilation_rates)]
for dilation_rate in dilation_rates:
self.filter_convs.append(
nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size, padding=int((dilation_rate*(kernel_size-1))/2), dilation=dilation_rate, padding_mode='replicate'))
self.gate_convs.append(
nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size, padding=int((dilation_rate*(kernel_size-1))/2), dilation=dilation_rate, padding_mode='replicate'))
self.convs.append(nn.Conv1d(out_channels, out_channels, kernel_size=1))
def forward(self, x):
x = self.convs[0](x)
res = x
for i in range(self.num_rates):
x = torch.tanh(self.filter_convs[i](x)) * torch.sigmoid(self.gate_convs[i](x))
x = self.convs[i + 1](x)
res = res + x
return res
</code></pre>
<p>class SEModule(nn.Module):</p>
<pre><code>def __init__(self, in_channels, reduction=2):
super(SEModule, self).__init__()
self.conv = nn.Conv1d(in_channels, in_channels, kernel_size=1, padding=0)
def forward(self, x):
s = F.adaptive_avg_pool1d(x, 1)
s = self.conv(s)
x *= torch.sigmoid(s)
return x
</code></pre>
<p>class Classifier(nn.Module):</p>
<pre><code>def __init__(self, inch=8, kernel_size=3):
super().__init__()
dropout_rate = 0.1
self.conv1d_1 = nn.Conv1d(inch, 32, kernel_size=1, stride=1, dilation=1, padding=0, padding_mode='replicate')
self.batch_norm_conv_1 = nn.BatchNorm1d(32)
self.dropout_conv_1 = nn.Dropout(dropout_rate)
self.conv1d_2 = nn.Conv1d(inch+16+32+64+128, 32, kernel_size=1, stride=1, dilation=1, padding=0, padding_mode='replicate')
self.batch_norm_conv_2 = nn.BatchNorm1d(32)
self.dropout_conv_2 = nn.Dropout(dropout_rate)
self.wave_block1 = Wave_Block(32, 16, 12, kernel_size)
self.wave_block2 = Wave_Block(inch+16, 32, 8, kernel_size)
self.wave_block3 = Wave_Block(inch+16+32, 64, 4, kernel_size)
self.wave_block4 = Wave_Block(inch+16+32+64, 128, 1, kernel_size)
self.se_module1 = SEModule(16)
self.se_module2 = SEModule(32)
self.se_module3 = SEModule(64)
self.se_module4 = SEModule(128)
self.batch_norm_1 = nn.BatchNorm1d(16)
self.batch_norm_2 = nn.BatchNorm1d(32)
self.batch_norm_3 = nn.BatchNorm1d(64)
self.batch_norm_4 = nn.BatchNorm1d(128)
self.dropout_1 = nn.Dropout(dropout_rate)
self.dropout_2 = nn.Dropout(dropout_rate)
self.dropout_3 = nn.Dropout(dropout_rate)
self.dropout_4 = nn.Dropout(dropout_rate)
self.lstm = nn.LSTM(inch+16+32+64, 32, 1, batch_first=True, bidirectional=True)
self.fc0 = nn.Linear(64, 32)
self.fc1 = nn.Linear(32, 11)
self.fc2 = nn.Linear(32, 11)
def forward(self, x):
x = x.permute(0, 2, 1)
x0 = self.conv1d_1(x)
x0 = F.relu(x0)
x0 = self.batch_norm_conv_1(x0)
x0 = self.dropout_conv_1(x0)
x1 = self.wave_block1(x0)
x1 = self.batch_norm_1(x1)
x1 = self.dropout_1(x1)
x1 = self.se_module1(x1)
x2_base = torch.cat([x1, x], dim=1)
x2 = self.wave_block2(x2_base)
x2 = self.batch_norm_2(x2)
x2 = self.dropout_2(x2)
x2 = self.se_module2(x2)
x3_base = torch.cat([x2_base, x2], dim=1)
x3 = self.wave_block3(x3_base)
x3 = self.batch_norm_3(x3)
x3 = self.dropout_3(x3)
x3 = self.se_module3(x3)
x4_base = torch.cat([x3_base, x3], dim=1)
x4 = self.wave_block4(x4_base)
x4 = self.batch_norm_4(x4)
x4 = self.dropout_4(x4)
x4 = self.se_module4(x4)
x5_base = torch.cat([x4_base, x4], dim=1)
x5 = self.conv1d_2(x5_base)
x5 = F.relu(x5)
x5 = self.batch_norm_conv_2(x5)
x5 = self.dropout_conv_2(x5)
lstm_out, _ = self.lstm(x4_base.permute(0, 2, 1))
out1 = self.fc0(lstm_out)
out1 = self.fc1(out1)
out2 = x5.permute(0, 2, 1)
out2 = self.fc2(out2)
return out1 * out2 `
</code></pre>
<ul>
<li>The wave block is the same as in the public kernel. The concatenation after the SE module in each block gave us a boost, as did the LSTM layer.</li>
<li>We also tried multi-task learning, using open_channels both as classes and as continuous values, but no luck with this model. Interestingly we have another LSTM model without wavenet, and using this on that model gave us a huge boost of 0.002 on public LB.</li>
<li>tried 1d pooling encoder decoder, didn't help</li>
</ul>
<h2>About model training</h2>
<p>We were using CrossEntropyLoss as the loss function; we did try other functions like focal loss and OHEM loss, but no luck.
The learning rate was 0.001 (we tried weight_decay but it didn't help) and the batch size was 16.
I believe this part helped a lot, as CE loss is not quite consistent with the macro F1 score:
- save model weight each time macro F1 score improves
- after 10 epochs that macro F1 doesn't improve, and CE loss doesn't reduce, learning rate /= 2, and load lastly saved model (best macro F1 so far)
- after 20 epochs that macro F1 doesn't improve, and CE loss doesn't reduce, early stopping
- either macro f1 improves or CE loss reduces, epoch counter = 0
We believe this gives the model more chances to optimize the F1 score, and we retrained a fold if its score was relatively lower than before, until it gave us a satisfying score (usually 1 to 2 retrains); a sketch of this schedule follows after the plot below.
Then the training procedure looks like this: (red means the F1 score improves, green means the CE loss improves)
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F456596%2F330ffd69a2072c802a3f85ee9cf66735%2FWX20200526-205151.png?generation=1590519148879009&alt=media" alt=""></p>
<p>Finally this model got 0.94365 on local CV, 0.94545 on public LB and 0.94513 on private LB.</p>
<p><strong>Edit 1:</strong>
I've seen that some people are interested in the LSTM model, so I can show you this simple multi-task learning model, even though there's no magic in it.</p>
<p>`class Classifier(nn.Module):</p>
<pre><code>def __init__(self, inch=8):
super().__init__()
self.conv_7 = nn.Conv1d(inch, 12, kernel_size=7, stride=1, padding=3, padding_mode='replicate')
self.conv_5 = nn.Conv1d(inch, 12, kernel_size=5, stride=1, padding=2, padding_mode='replicate')
self.conv_3 = nn.Conv1d(inch, 12, kernel_size=3, stride=1, padding=1, padding_mode='replicate')
self.lstm = nn.LSTM(48, 45, 2, batch_first=True, bidirectional=True)
self.linear_1 = nn.Linear(90, 45)
self.linear_2 = nn.Linear(45, 22)
self.out_cls = nn.Linear(22, 11)
self.out_reg = nn.Linear(22, 1)
for l in [self.linear_1, self.linear_2, self.out_cls, self.out_reg]:
nn.init.kaiming_normal_(l.weight.data)
def forward(self, x):
x = x.permute(0, 2, 1)
conv_out_7 = self.conv_7(x)
conv_out_5 = self.conv_5(x)
conv_out_3 = self.conv_3(x)
concated_out = torch.cat([x, conv_out_3, conv_out_5, conv_out_7], 1)
lstm_out, _ = self.lstm(concated_out.permute(0, 2, 1))
out_tmp_1 = F.relu(self.linear_1(lstm_out))
out_tmp_2 = F.relu(self.linear_2(out_tmp_1))
y_pred_cls = self.out_cls(out_tmp_2)
y_pred_reg = self.out_reg(out_tmp_2)
    return y_pred_cls, y_pred_reg
</code></pre>
<p>I use CrossEntropyLoss for the classification output and MSELoss for the regression output.
This model can achieve 0.94487 on the public LB, but only 0.94388 on the private LB.</p>
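<p>For illustration, here is a minimal sketch of how the two heads can be combined into one loss; the weighting factor <code>alpha</code> is a hypothetical choice, not a value from our solution:</p>
<pre><code>import torch.nn as nn

ce_loss = nn.CrossEntropyLoss()
mse_loss = nn.MSELoss()

def multitask_loss(y_pred_cls, y_pred_reg, y_true, alpha=0.5):
    # y_pred_cls: (batch, seq, 11) logits, y_pred_reg: (batch, seq, 1),
    # y_true: (batch, seq) integer open_channels
    cls = ce_loss(y_pred_cls.reshape(-1, 11), y_true.reshape(-1))
    reg = mse_loss(y_pred_reg.reshape(-1), y_true.reshape(-1).float())
    return cls + alpha * reg
</code></pre>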
<p>Hi Helen, I see on this <a href="https://pytorch.org/docs/stable/nn.html#lstm">link</a>:
> batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False</p>
<p>Maybe we are talking about different versions of PyTorch? Mine is 1.5.0.</p>
<p>Hi <a href="/waylongo">@waylongo</a> , I've added the simple LSTM model in this post, in case you're interested </p>
<p>And for the SE module I want to credit <a href="/dandingclam">@dandingclam</a>.
But I think that brought just a little improvement.</p>
<p>Hi, using multi-task learning has boosted our simple LSTM to 0.944 on the public LB, and according to the position among all 0.944 submissions I could say that was about 0.9445, while without MTL this model had got 0.942 on the public LB.</p>
<p>0.944 was already a score comparable to the original wavenet, but at that moment (2 weeks before the deadline) our modified wavenet had 0.945. Also, training an LSTM is 3-4 times slower than the wavenet; it basically takes 7-8 hours to train a 5-fold LSTM model, which discouraged us a lot, let alone tuning this model structure. So finally we abandoned this model.</p>
<p>P.S. We did a lot of tuning on the wavenet model without the LSTM, and we tried to improve one specific fold to avoid training all 5 folds (we found that evaluating on just one fold is already consistent); by doing this, one iteration takes just 20 minutes. Adding that LSTM layer was just one idea to potentially improve the performance of our wavenet at the end of the competition, when we thought we had little time to try other things.</p>
University of Liverpool - Ion Switching | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Manuel!, I'll go for M5 or ALASKA. See you there!</p>
<p>Thanks Guchio! Gold is getting harder and harder 😜 </p>
<p>Thanks Rob!, it has been a very challenging competition </p>
<p>Thanks a lot!</p>
<p>Thanks!</p>
<p>Congratulations to the winners, during last week the competition had been very hard!</p>
<p>The most difficult part was to find a good way to clean the signal micro-drifting (which is different across batches, and between train and test). To do that, I fit a Gaussian mixture to batches of 100k points, and linearly ‘lift’ the signal so that the Gaussians' means match a set of precalculated fixed signal levels, common to train and test data. The Gaussian mixture fit also gives an unsupervised estimate of the open-channel probability for both train and test.</p>
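<p>A minimal sketch of this per-batch 'lifting', assuming <code>fixed_levels</code> is a precomputed, sorted array of target signal levels (one per open-channel state); the exact levels are not given here:</p>
<pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

def lift_batch(signal, fixed_levels, n_components):
    """Fit a GMM to one 100k-point batch and linearly map its means onto fixed levels."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(signal.reshape(-1, 1))
    means = np.sort(gmm.means_.ravel())
    # linear map: fixed_level ~ a * fitted_mean + b, then applied to the whole batch
    a, b = np.polyfit(means, fixed_levels[:n_components], deg=1)
    return a * signal + b
</code></pre>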
<p>Using that probability as a guess for the open channels, it is easy to spot the 50Hz (and harmonics) AC component using the FFT and remove it surgically (see the details of the calculations in this notebook: <a href="https://www.kaggle.com/vicensgaitan/1-remove-drift-ac">https://www.kaggle.com/vicensgaitan/1-remove-drift-ac</a>).</p>
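<p>As a rough illustration of the "surgical" removal (not the notebook's code), one can notch out a narrow band around 50Hz and its harmonic in the residual noise; here <code>expected</code> is a hypothetical estimate of the noise-free signal built from the open-channel probabilities:</p>
<pre><code>import numpy as np

def remove_tone(signal, expected, fs=10_000.0, freqs=(50.0, 100.0), width=0.5):
    """Zero out narrow frequency bands in the residual noise and rebuild the signal."""
    residual = signal - expected                # noise estimate after removing the levels
    spec = np.fft.rfft(residual)
    f = np.fft.rfftfreq(len(residual), d=1.0 / fs)
    for f0 in freqs:
        spec[np.abs(f - f0) < width] = 0.0      # notch out the AC tone / harmonic
    return expected + np.fft.irfft(spec, n=len(residual))
</code></pre>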
<p>At first I tried LightGBM for modeling, so some feature engineering was needed.
I built some time-symmetrical rolling features (averaging rolling features of the signal and of the time-reversed signal). Tree-based models using these features and the probabilities from the Gaussian mixture scored around 0.942 on the public LB.</p>
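<p>A small sketch of what one such time-symmetric rolling feature can look like (the window size is illustrative):</p>
<pre><code>import pandas as pd

def symmetric_rolling_mean(signal: pd.Series, window: int = 50) -> pd.Series:
    """Average of a causal rolling mean and the same rolling mean on the reversed signal."""
    forward = signal.rolling(window, min_periods=1).mean()
    backward = signal[::-1].rolling(window, min_periods=1).mean()[::-1]
    return 0.5 * (forward + backward)
</code></pre>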
<p>The game changer is the WaveNet architecture. Using the same set of variables with a plain WaveNet (no LSTM, no batch normalization, no dropout, no fancy heads) and cross-entropy loss, we can easily achieve 0.946 public (0.945 private). A single model can do almost gold:
<a href="https://www.kaggle.com/vicensgaitan/2-wavenet-swa">https://www.kaggle.com/vicensgaitan/2-wavenet-swa</a></p>
<p>My 13th position is a bagging of 5 models with different seeds and learning rates.</p>
<p>I missed the 5+5 structure of the 10-channel data… too bad.</p>
<p>There is still one mystery: the leaderboard results (public and private) are 0.004 points better than the CV value, after correcting for batch composition. It seems the test data has some ‘leak’ from the train data; maybe this is related to the high scores obtained by the competition winners.</p>
University of Liverpool - Ion Switching | 15th solution (wavenet only) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/jt120lz">@jt120lz</a> </p>
<p>The network can be divided into two parts: a preprocessing module and a wavenet module.
The wavenet module is a standard one, so I do not explain it too much.
The preprocessing network is used to "shift the signal value" for each element (it might also remove some noise).
In the beginning, I used 3 parallel FC paths/layers (actually I tried 1-5 FC paths with different depths), then changed one of the paths to capture spatial features with different kernel sizes.</p>
<p>I think it is not necessary to use such a complicated architecture :)</p>
<p>I was just too lazy to simplify it.</p>
<p><a href="/datamafia7">@datamafia7</a> </p>
<p>It is different if the signal has been changed in the augmentation phase.</p>
<p>model:
<code>def TCNNet(shape_):</code></p>
<pre><code>num_filters_ = 16
kernel_size_ = 3
stacked_layers_ = [12, 8, 4, 1]
l_input = Input(shape=(shape_))
def auglayer(x): #ok for concat feature
sig = x[:,:,0]
yy = K.square(sig)
outy =tf.keras.backend.expand_dims(yy, -1)
zz = K.exp(sig)
outz =tf.keras.backend.expand_dims(zz, -1)
ww = K.abs(sig)
outw =tf.keras.backend.expand_dims(ww, -1)
return K.concatenate([x, outy,outz,outw])
x = Lambda(auglayer)(l_input)
x1 = Dense(512, activation='linear', name='in_dense')(x)
x2 = Dense(512, activation='linear', name='in_dense2')(x)
x4 = Conv1D(32, 3,activation='linear', padding='same')(x)
x4 = Conv1D(64, 3,activation='linear', padding='same')(x4)
x42 = Conv1D(64, 3,activation='linear', padding='same')(x)
x5 = Conv1D(32, 7,activation='linear', padding='same')(x)
x5 = Conv1D(64, 7,activation='linear', padding='same')(x5)
x52 = Conv1D(64, 7,activation='linear', padding='same')(x)
x6 = Conv1D(16, 3,activation='tanh', padding='same')(x)
x6 = Conv1D(32, 3,activation='tanh', padding='same')(x6)
x6 = Conv1D(64, 3,activation='tanh', padding='same')(x6)
x6 = Conv1D(128, 3,activation='tanh', padding='same')(x6)
x7 = Conv1D(16, 7,activation='tanh', padding='same')(x)
x7 = Conv1D(32, 7,activation='tanh', padding='same')(x7)
x7 = Conv1D(64, 7,activation='tanh', padding='same')(x7)
x7 = Conv1D(128, 7,activation='tanh', padding='same')(x7)
x_merge = Concatenate()([x4,x42, x5,x52,x6,x7])
x = Add()([x1,x2,x_merge])
xin = x
x00 = Conv1D(num_filters_*2, 1, padding='same')(xin)#(l_input)
x,skip1 = WaveNetResidualConv1D(num_filters_*2, kernel_size_, stacked_layers_[0])(x00)
x = Conv1D(num_filters_*2, 1, padding='same')(x)
x,skip2 = WaveNetResidualConv1D(num_filters_*2, kernel_size_, stacked_layers_[1])(x)
x = Conv1D(num_filters_*2, 1, padding='same')(x)
x,skip3 = WaveNetResidualConv1D(num_filters_*2, kernel_size_, stacked_layers_[2])(x)
x = Conv1D(num_filters_*2, 1, padding='same')(x)
x,skip4 = WaveNetResidualConv1D(num_filters_*2, kernel_size_, stacked_layers_[3])(x)
skip_cons = skip4+skip1+skip2+skip3
x = Add()(skip_cons)
l_output = Dense(11, activation='sigmoid')(x)
model = models.Model(inputs=[l_input], outputs=[l_output])
opt = Adam(lr=LR)
model.compile(loss=macro_double_soft_f1, optimizer=opt, metrics=['accuracy'])
return model
</code></pre>
<p>I finished my best solution (I thought) with an unexpectedly good score on the last day, but had no time to fine-tune it…</p>
<p>The following model got the best score for me, but it is not "beautiful" :(</p>
<p>OS: windows 10</p>
<p>framework : tensorflow 1.4/keras 2.2</p>
<p>dataset : 'clean-kalman/train_clean_kalman.csv', with batch 8 removed</p>
<p>model : modified wavenet based on
<a href="https://www.kaggle.com/siavrez/wavenet-keras">https://www.kaggle.com/siavrez/wavenet-keras</a></p>
<p>optimizer : adam (adam > ranger > sgd)</p>
<p>loss : bce + weighted macro_double_soft_f1 (bce > ce = focal loss >macro f1)</p>
<p>batch : 16</p>
<p>feature : <strong>rfc / lag_with_pct 123 / roll_stats 5 / gradient 1-4</strong> added in the beginning;
<strong>abs(sig), exp(sig), sig^2</strong> added in a model layer (after augmentation)</p>
<p>augmentation : flip / gaussian noise ±0.0075 / random shift 5</p>
<p>post processing : flip TTA+ Group 5 fold averaging</p>
<p>*key in early stage without rfc feature : use batch 1-4</p>
<p>*key2 : train with low lr and without earlystop
reduce_lr = ReduceLROnPlateau(min_delta = 0.000001,cooldown = 3,factor=0.3, patience=15, min_lr=1e-6, verbose=2)</p>
<p>early_stopping = EarlyStopping(patience=1000, verbose=2)</p>
<p>*key3 : loss function</p>
<p>loss function</p>
<pre><code>def macro_double_soft_f1(y, y_hat):
    """Compute the macro soft F1-score as a cost (average 1 - soft-F1 across all labels).
    Use probability values instead of binary predictions.
    This version uses the computation of soft-F1 for both positive and negative class for each label.
    Args:
        y (int32 Tensor): targets array of shape (BATCH_SIZE, N_LABELS)
        y_hat (float32 Tensor): probability matrix from forward propagation of shape (BATCH_SIZE, N_LABELS)
    Returns:
        cost (scalar Tensor): value of the cost function for the batch
    """
    ce = keras.backend.binary_crossentropy(y, y_hat)  # original ok
    # ok for label smoothing, need to remove sigmoid layer in model
    # ce = tf.losses.sigmoid_cross_entropy(y, y_hat, label_smoothing=0.01)
    # y_hat = keras.backend.sigmoid(y_hat)
    y = tf.cast(y, tf.float32)
    y_hat = tf.cast(y_hat, tf.float32)
    tp = tf.reduce_sum(y_hat * y, axis=0)
    fp = tf.reduce_sum(y_hat * (1 - y), axis=0)
    fn = tf.reduce_sum((1 - y_hat) * y, axis=0)
    tn = tf.reduce_sum((1 - y_hat) * (1 - y), axis=0)
    soft_f1_class1 = 2*tp / (2*tp + fn + fp + 1e-16)
    soft_f1_class0 = 2*tn / (2*tn + fn + fp + 1e-16)
    cost_class1 = 1 - soft_f1_class1  # reduce 1 - soft-f1_class1 in order to increase soft-f1 on class 1
    cost_class0 = 1 - soft_f1_class0  # reduce 1 - soft-f1_class0 in order to increase soft-f1 on class 0
    # cost = 0.5 * (cost_class1 + cost_class0)  # take into account both class 1 and class 0
    cost = 0.5 * (cost_class1 + 0.5*cost_class0)  # take into account both class 1 and class 0  # memory error?
    macro_cost = tf.reduce_mean(cost)  # average on all labels
    macro_cost = macro_cost + ce
    return macro_cost
</code></pre>
<p>The original code is </p>
<p><code>cost = 0.5 * (cost_class1 + cost_class0)</code></p>
<p>I just tried to modify the weight to pray for luck, but it made little difference.
You can use the original code directly. </p>
<p><a href="/dandingclam">@dandingclam</a> </p>
<p>please check the following code</p>
<pre><code>def WaveNetResidualConv1D(num_filters, kernel_size, stacked_layer):
    def build_residual_block(l_input):
        resid_input = l_input
        skip_connections = []
        for dilation_rate in [2**i for i in range(stacked_layer)]:
            l_sigmoid_conv1d = Conv1D(
                num_filters, kernel_size, dilation_rate=dilation_rate,
                padding='same', activation='sigmoid')(l_input)
            # padding='same', activation='linear')(l_input)
            l_tanh_conv1d = Conv1D(
                num_filters, kernel_size, dilation_rate=dilation_rate,
                # padding='same', activation='tanh')(l_input)
                padding='same', activation='mish')(l_input)
            l_input = Multiply()([l_sigmoid_conv1d, l_tanh_conv1d])
            l_skip = Conv1D(num_filters, 1, padding='same')(l_input)
            skip_connections.append(l_skip)
            l_input = Conv1D(num_filters, 1, padding='same')(l_input)
            resid_input = Add()([resid_input, l_input])
        return resid_input, skip_connections
    return build_residual_block
</code></pre>
University of Liverpool - Ion Switching | 17th Private & 17th Public Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the kagglers who participated in this competition and especially to those researchers who were desperately trying to overcome 0.946 barrier on public LB 😊 </p>
<p>I am really enthusiastic and satisfied since it is our first kaggle competition.</p>
<p>We were using a dataset with <a href="https://www.kaggle.com/cdeotte/data-without-drift">removed drift</a> and <a href="https://www.kaggle.com/teejmahal20/a-signal-processing-approach-kalman-filtering">Kalman filtering</a>.</p>
<p>Nearly 4% of the train data was cut as too noisy.</p>
<p>Our solution is based on <a href="https://www.kaggle.com/siavrez/wavenet-keras">WaveNet</a> and <a href="https://www.kaggle.com/nxrprime/wavenet-with-shifted-rfc-proba-and-cbr">Wavenet with SHIFTED-RFC Proba and CBR</a>, backed by a callback that provided early stopping if macro F1 did not improve for 40 epochs, with rollback to the best weights.</p>
<p>We have modified this estimator with an ensemble technique inspired by the <a href="https://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo method</a>. The base prediction is obtained by rounding the median of the individual predictions.</p>
<p>An important step was to split the data into 5 groups w.r.t. the average number of open channels. This idea led us to the observation that our WaveNet ensemble provides a lower bound for the group with the highest average number of open channels.</p>
<p>The final estimator looks like <strong>max(predictions)</strong> for the group with the high average number of open channels and <strong>round(median(predictions))</strong> for everything else.</p>
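<p>A minimal sketch of this group-dependent aggregation, assuming <code>preds</code> holds the per-model integer predictions with shape (n_models, n_samples) and <code>is_high_group</code> is a boolean mask marking the group with the highest average number of open channels:</p>
<pre><code>import numpy as np

def aggregate(preds, is_high_group):
    """max(predictions) for the high group, round(median(predictions)) elsewhere."""
    high = preds.max(axis=0)
    rest = np.round(np.median(preds, axis=0))
    return np.where(is_high_group, high, rest).astype(int)
</code></pre>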
<p>We are going to publish a <a href="https://www.kaggle.com/biruk1230/17th-place-simple-wavenet-solution-ion-switching">full kernel</a> shortly.</p> |
University of Liverpool - Ion Switching | 18th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am sorry for the confusing expression.
I made 3 data with noise added and flipped 3 of them.
```
X = []
x = data
X.append(x)
for i in range(3):
x += noise
X.append(x)</p>
<p>X.append(flip(X))</p>
<p>```</p>
<p>I'm a lucky person, and I won 18th place and a silver medal.
My solution is just a slight modification to the public wavenet notebook.</p>
<p>feature engineering
I used Data Without Drift from <a href="/cdeotte">@cdeotte</a> for the data.
42 features were used, including 3 lag features as in <a href="/siavrez">@siavrez</a>'s wavenet notebook, <code>signal**2</code> and <code>signal**4</code> features based on <a href="/ragnar123">@ragnar123</a>'s notebook, gradient, low-pass and high-pass filters based on <a href="/martxelo">@martxelo</a>'s notebook, and the addition of <a href="/sggpls">@sggpls</a>' ION-SHIFTED-RFC-PROBA.</p>
<p>network architecture
The 16 base convolutional filters were changed to 24 (1.5x), and 3 LSTMs were added.</p>
<p>augmentation strategy and others
I normalized the features, added np.random.normal(loc=0, scale=0.01) to all the features, and flipped the data three times to make the training data 8x.
GROUP BATCH SIZE (signals per sample) was determined to be the best at 6250.</p>
<p>training
The training was conducted twice: the first time optimizing the loss as normal; the second time, I changed the L2 weight-regularization factor from 0 to 0.0001, set the LR to 1e-5, and optimized for accuracy.</p>
<p>prediction
Predictions were also augmented. I added np.random.normal(loc=0, scale=0.001) to all the features and flipped the data 4 times, i.e., the test data was predicted 10 times.
In addition, I varied the GROUP BATCH SIZE between 6250 and 500000 (13 settings), averaged the predictions, and then used argmax.
The final oofCV was 0.942421.</p>
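<p>A rough sketch of this kind of test-time augmentation and averaging; <code>predict_with_group_size</code> is a hypothetical helper that regroups the sequences to a given length before calling the model and returns probabilities in the original layout, and the noise scale and sizes only loosely follow the text:</p>
<pre><code>import numpy as np

def tta_predict(model, X, group_sizes, noise_scale=0.001):
    """Average class probabilities over flips, tiny noise and several sequence lengths.
    X has shape (n_sequences, seq_len, n_features)."""
    probs = []
    for size in group_sizes:
        for flip in (False, True):
            x = X[:, ::-1, :] if flip else X                   # time-reversed copy as TTA
            x = x + np.random.normal(0, noise_scale, x.shape)  # small jitter on the features
            p = predict_with_group_size(model, x, size)        # hypothetical helper
            probs.append(p[:, ::-1, :] if flip else p)         # un-flip the prediction
    return np.mean(probs, axis=0).argmax(axis=-1)
</code></pre>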
<p>Again, I'm just a lucky person. Excellent discussions and notebook sharing helped me win a medal. Without these, my score would have been 0.
Thank you to those who shared insights.</p>
<p>Reference. They are my heroes.
datasets
ION-SHIFTED-RFC-PROBA(<a href="https://www.kaggle.com/sggpls/ion-shifted-rfc-proba">https://www.kaggle.com/sggpls/ion-shifted-rfc-proba</a>)
Data Without Drift(<a href="https://www.kaggle.com/cdeotte/data-without-drift">https://www.kaggle.com/cdeotte/data-without-drift</a>)
notebooks
WaveNet-Keras(<a href="https://www.kaggle.com/siavrez/wavenet-keras">https://www.kaggle.com/siavrez/wavenet-keras</a>)
Wavenet with 1 more feature(<a href="https://www.kaggle.com/ragnar123/wavenet-with-1-more-feature">https://www.kaggle.com/ragnar123/wavenet-with-1-more-feature</a>)
FE and ensemble MLP and LGBM(<a href="https://www.kaggle.com/martxelo/fe-and-ensemble-mlp-and-lgbm">https://www.kaggle.com/martxelo/fe-and-ensemble-mlp-and-lgbm</a>)
Wavenet with SHIFTED-RFC Proba and CBR(<a href="https://www.kaggle.com/nxrprime/wavenet-with-shifted-rfc-proba-and-cbr?scriptVersionId=33967419">https://www.kaggle.com/nxrprime/wavenet-with-shifted-rfc-proba-and-cbr?scriptVersionId=33967419</a>)
There are many others...
Thanks for reading this far.
I'm sorry that my English is bad.</p>
<p>appendix:</p>
<pre><code>df = lag_with_pct_change(df, [1, 2, 3])

df['signal_2'] = df['signal'] ** 2
df['signal_4'] = df['signal'] ** 4
# calc gradient
g = df['signal']
n_grads = 2
for i in range(n_grads):
g = np.gradient(g)
df['grad_' + str(i+1)] = g
# lpf
x = df['signal']
n_filts = 5
wns = np.logspace(-2, -0.3, n_filts)
for wn in wns:
b, a = signal.butter(1, Wn=wn, btype='low')
zi = signal.lfilter_zi(b, a)
df['lowpass_lf_' + str('%.4f' %wn)] = signal.lfilter(b, a, x, zi=zi*x[0])[0]
df['lowpass_ff_' + str('%.4f' %wn)] = signal.filtfilt(b, a, x)
# hpf
x = df['signal']
n_filts = 5
wns = np.logspace(-2, -0.1, n_filts)
for wn in wns:
b, a = signal.butter(1, Wn=wn, btype='high')
zi = signal.lfilter_zi(b, a)
df['highpass_lf_' + str('%.4f' %wn)] = signal.lfilter(b, a, x, zi=zi*x[0])[0]
df['highpass_ff_' + str('%.4f' %wn)] = signal.filtfilt(b, a, x)
</code></pre>
University of Liverpool - Ion Switching | 19th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Nope, i just used chris clean dataset + kalman filter. </p>
<p>Haha, okay man, thanks.</p>
<p>A single 1D-CNN wavenet with batch normalization and a little bit of dropout, on the clean Kalman dataset from my public wavenet notebook.</p>
<p>Important:
Added some data augmentation to handle the shift in the test set by adding a constant to the signal.
Another data augmentation technique that also improved the CV was to add a little random normal noise, like <code>train_aug['signal'] + [np.random.normal(0, 0.01) for x in range(train_aug.shape[0])]</code>. What I actually used was <code>train_aug['signal'] + np.random.normal(0, 0.01)</code>, hoping that this would handle the shift, and it worked (this was pure luck :) )</p>
<p>Used KMeans to create 22 clusters, then one-hot encoded the cluster assignment, making 22 extra features.</p>
<p>Also computed some aggregated stats for each cluster, like the kurtosis, skewness and std.</p>
<p>Added the difference between the signal and the nearest cluster centroid, and also the euclidean distance between them.</p>
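<p>A minimal sketch of these KMeans-based features (column names, and the join on a default index, are illustrative):</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def add_kmeans_features(df: pd.DataFrame, n_clusters: int = 22) -> pd.DataFrame:
    km = KMeans(n_clusters=n_clusters, random_state=0)
    cluster = km.fit_predict(df[['signal']])
    centroid = km.cluster_centers_.ravel()[cluster]
    # one-hot encoding of the cluster assignment (22 extra features)
    df = df.join(pd.get_dummies(cluster, prefix='cluster'))
    # difference and (1D) euclidean distance to the assigned, i.e. nearest, centroid
    df['diff_centroid'] = df['signal'] - centroid
    df['dist_centroid'] = np.abs(df['signal'] - centroid)
    return df
</code></pre>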
<p>Used ±3 shifted signal, -3 rolling mean and std, -1 to 1 rolling mean and std, signal ** 2 and signal ** 4.</p>
<p>That cover the feature engineering part. </p>
<p>To handle overfitting I used a learning rate scheduler like the public wavenet, so it can stop early (with a constant LR of 0.001 the net overfits badly, giving 0.942 on the public and private leaderboards despite an awesome OOF F1 of 0.94452).</p>
<p>It was a fun competition. Message me if you want to team up in another competition that we are both interested in and where we have more or less the same skills and knowledge.</p>
<p>Sorry for my English, it's not my first language.</p>
University of Liverpool - Ion Switching | 1st (or 16th) place short write up | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Surprisingly, sort of but not really. Rakesh did write a wavenet to create predictions (using lgbm as an initial guess) then those were used to subtract off the predicted "real" signal (using average signal values corresponding to open_channels=X) from the physical signal. (I think I got this wrong above, we used wavenet+lgbm to get at the leftover noise, but I don't know if it's a big impact)</p>
<p>This gives predicted noise to be cleaned further, mostly 50Hz. After stripping 50Hz and re-running lgbm, lgbm predictions (fed into wavenet as a first guess) were apparently too good for our wavenet to do anything more with and CV started decreasing for most sets. The max10 set wasn't "maxed out" in CV so to speak, so we definitely would have used a final pass of wavenet on it, but other folks might have actually taken a hit on the lower sets using wavenet along with initial predictions that were too strong. </p>
<p>Fair point, thanks!</p>
<p>Hi Everyone,
Thanks particularly to Chris Deotte, Vitalii Mokin, Trigram. I'm sorry to forget people, we'll spend the next days upvoting things we've neglected. Thanks to the organizers and the kaggle team, this whole competition meant so much to us, and just getting to gold medal level was a great feeling. </p>
<p>Concerning the leak, it's been well described already but there were other clues, and we certainly didn't think we had something new.
- The Matlab code has a vague comment about not working above 5
- It isn't seeded with the clock. It's a "feature" of Matlab (not a bug) that the RNG by default produces the same chain on startup. This was probably the original source of the problem.
- There are visible beats in the 50Hz of the max10 set.
- You can actually see the correlation in the signals visually (we happened to be adding 5+5 to get some extra noise for simulation and checked .corr across everything. It looks like lots of people had the same idea here)</p>
<p>The leak was entirely limited (as far as we know) to one max10 private set. Of course everything seems obvious once you know it, but we expected a small advantage if any and a big leaderboard shake that everyone was talking about. I didn't think it would be at all appropriate to "disclose" it in the last week.</p>
<p>Take this as the first place rundown, or the 16th place rundown as you prefer. We will eventually provide more details, but I didn't plan on having to write anything up so soon. </p>
<p>One large improvement came from 50Hz cleaning. We ended up averaging the signal minus the LGBM predictions over a few cycles after labeling the samples mod 200. This nicely accounts for small drifts around 50Hz in both frequency and amplitude, and is incredibly simple (a real Fourier transform wasn't the right way to go for us).</p>
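<p>A minimal sketch of this mod-200 phase-averaging (at 10 kHz sampling one 50Hz cycle is 200 samples); <code>pred_levels</code> stands for a hypothetical array of predicted open-channel signal levels, e.g. built from the LGBM out-of-fold predictions:</p>
<pre><code>import numpy as np

def remove_50hz(signal, pred_levels, period=200, window=2000):
    """Estimate the hum by phase-averaging the residual over a few cycles, then subtract it."""
    cleaned = signal.astype(float).copy()
    residual = signal - pred_levels                      # leftover noise after removing levels
    for start in range(0, len(signal), window):          # a few cycles at a time
        chunk = residual[start:start + window]
        phase = np.arange(len(chunk)) % period           # label the samples mod 200
        hum = np.array([chunk[phase == p].mean() for p in range(period)])
        cleaned[start:start + window] -= hum[phase]      # subtract the reconstructed hum
    return cleaned
</code></pre>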
<p>The shifts were also obviously important, as Rob pointed out recently. His fourth point also can't be emphasized enough. </p>
<p>We spent a <em>lot</em> of time on a bayesian inference model, although we didn't use it in the very end since the max10 set suddenly became max5, and bayes was virtually identical to LGBM after good 50Hz cleaning (but free from any machine learning!). This required sufficient monte carlo to find state transition probabilities that we also used for data augmentation.</p>
<p>Finally, the max25 set was initialized differently (the second "low" HMM). This turns out to be important for a Bayesian model, as we began with the initial state and updated backward with each signal point. (although it's not a significant impact for f1)</p>
<p>Hi Waylon, sorry I'm not exactly sure what that output is. I hacked up something like:</p>
<pre><code>train_data = pd.read_csv('../input/data-without-drift/train_clean.csv')
test_data = pd.read_csv('../input/data-without-drift/test_clean.csv')
for i in range(0, 20):
    for j in range(0, 50):
        test1 = test_data[100000*i:(i+1)*100000]
        test1.reset_index(inplace=True, drop=True)
        train1 = train_data[100000*j:(j+1)*100000]
        train1.reset_index(inplace=True, drop=True)
        corr = train1.signal.corr(test1.signal)
        if corr > abs(0.1):
            print('corr between test', i, 'and train', j, train1.open_channels.corr(test1.signal))
</code></pre>
<p>Thanks and sorry I've been delayed. Yes, we made a more streamlined version, as the original pipeline was pretty hard to follow. The order is Liverpool_LGBM_oofs, Liverpool_NoiseRemoval, and finally Liverpool_Wavenet. They take input from each other, and only the final notebook requires a GPU. <br>
<a href="https://www.kaggle.com/rakeshptiwari/liverpool-lgbm-oofs" target="_blank">https://www.kaggle.com/rakeshptiwari/liverpool-lgbm-oofs</a><br>
<a href="https://www.kaggle.com/rakeshptiwari/liverpool-noiseremoval" target="_blank">https://www.kaggle.com/rakeshptiwari/liverpool-noiseremoval</a><br>
<a href="https://www.kaggle.com/rakeshptiwari/liverpool-wavenet" target="_blank">https://www.kaggle.com/rakeshptiwari/liverpool-wavenet</a></p> |
University of Liverpool - Ion Switching | 20th place solution [Single Model] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>so NN has 2 branch share same input?</p>
</blockquote>
<p>Pseudocode:</p>
<blockquote>
<p>inp = Input(..)
gru = Bidirectional(GRU(..))(inp)
gru = Bidirectional(GRU(..))(gru)</p>
<p>wave = cbr(inp) #as in kernel
wave = ...</p>
<p>x = Concatenate([gru, wave])
x = Dense(..)</p>
</blockquote>
<p>Features: as in the kernel, plus rolling50. I also tried other features, but they did not improve my CV score. </p>
<p>I used <a href="https://www.kaggle.com/nxrprime/wavenet-with-shifted-rfc-proba-and-cbr">wavenet</a> as the baseline.
I found that increasing the receptive field before the wave block improves the CV and LB score (I think this helps because the sequence length was too big).
So I used MaxPooling1D(2) with stride=1 and concatenated its output with the signal from before the max pooling, just before the wave block.</p>
<p>The NN has 2 branches: a wavenet and 2 bi-GRUs, with concatenation followed by a fully-connected layer.</p>
<p>I found that Tversky loss together with CCE improves NN accuracy and saves training time.
The final loss was: CCE + focal + Tversky (without balancing).</p>
<p>I used GROUP BATCH SIZE = [4000, 50000, 25000, 12500, 10000, 8000, 20000] for prediction, with averaging.</p>
University of Liverpool - Ion Switching | 22th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>My first silver medal!</h2>
<h2>And I became a Kaggle Expert!</h2>
<p>Thank you very much!!!!!</p>
<p>I'm a little late in publishing my solution.
I'm sorry.</p>
<p>I've tried a few ideas, but I'd like to highlight the one that worked the best for my score in particular. <br>
The other ideas I have are ones that others will surely be trying.</p>
<p>I was very affected by this notebook (<a href="https://www.kaggle.com/jt120lz/open-channel-clear-plot">https://www.kaggle.com/jt120lz/open-channel-clear-plot</a>).
Thank you, liuze!!!!</p>
<h3>train data</h3>
<p>The following graph shows the signal replaced by the average value per open_channels value. For example, the red line in Batch5 shows the average signal value for open_channels = 5 in Batch5.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2F9150a9269a3cd46e5afcf5c4d3e33391%2F(1).png?generation=1590758184980770&alt=media" alt=""></p>
<p>As you can see, Batch4 and Batch9 clearly behave differently from the other batches. <br>
I've corrected this discrepancy.
This is the only thing I did to the train data.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2Fe3a463a9006a1131b65f4a541c29e977%2F(2).png?generation=1590758231124352&alt=media" alt=""></p>
<h3>test data</h3>
<p>The group is defined as follows.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2F88a196c4a2d0fcde8d75749aa8f2dec9%2FScreenshot%20at%2011-09-03.png?generation=1590804592985105&alt=media" alt=""></p>
<p>The problem is the test data: it doesn't have open_channels information, so the above method can't be used.</p>
<p>However, what needs to be done is immediately apparent when you look at the kde plot.
Compare the test data (group 4) with the modified train data (group 4).
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2F8897b4164ad3cdfe4cb97fa0f70bd67e%2F2.png?generation=1590804832825827&alt=media" alt=""></p>
<p>It's clear that we have work to do.
Just fix the above misalignment. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2F468f71b3681175bfd8932379ebe7d980%2F.png?generation=1590804860802524&alt=media" alt=""></p>
<p>That's nice.</p>
<h2>code</h2>
<p>All you have to do is process the following!
Thanks!!</p>
<pre><code># --- train ---
off_set_4 = 0.952472 - (-1.766044)
off_set_9 = 0.952472 - (-1.770441)

# batch4
idxs = df_tr['batch'] == 4
df_tr['signal'][idxs] = df_tr['signal'][idxs].values + off_set_4

# batch9
idxs = df_tr['batch'] == 9
df_tr['signal'][idxs] = df_tr['signal'][idxs].values + off_set_9

# --- test ---
off_set_test = 2.750
df_te['signal'] = df_te['signal'].values
idxs = df_te['group'] == 4
df_te['signal'][idxs] = df_te['signal'][idxs].values + off_set_test
</code></pre>
<p>thanks Prakhar Rai !!</p>
<p>thanks!!</p>
<p>I defined the offset so that the average of the signal for open_channels=3 in batches 4 and 9 is equal to the average in the other batches.</p>
<pre><code>signal_mean(batch4) = -1.766044
signal_mean(batch9) = -1.770441
signal_mean(other)  = 0.952472
</code></pre>
<p>These are the values I obtained.
In the code above, I leave the intermediate formula as a note.</p>
University of Liverpool - Ion Switching | 23th Place Interesting Approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone,
From the beginning of the competition and after reading the paper from the host, I've been thinking about 2 ways to improve the models.
1. Transfer learning from 1f models to 3\5\10 models ==> Failed with HMM\RF\Wavenet...
2. Data augmentation from 1f signals\open_channels to 3\5\10 signals\open_channels: From the intuition that model with 3 max. open channels should be like 3 1f models running with summarized results ==> <strong>Success</strong></p>
<p><strong>For the second approach</strong> (a small sketch is given after this list):
1. Split the train\validation set in each fold.
2. Randomly select signal\open_channels sequences (len=4000) from the 1f models.
3. Randomly pick, for example, 5 sequences from step 2 and sum them up to create a sample with 5 max. open channels.
4. For each summed signal, adjust the mean\variance according to the corresponding mean\variance of the original data.</p>
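<p>A minimal sketch of this augmentation; <code>signals_1f</code> and <code>channels_1f</code> are assumed to be lists of length-4000 signal / open_channels sequences taken from the 1f data, and <code>target_mean</code> / <code>target_std</code> describe the real group being imitated:</p>
<pre><code>import numpy as np

def make_synthetic_sample(signals_1f, channels_1f, n_sum=5,
                          target_mean=0.0, target_std=1.0, rng=np.random):
    """Sum n_sum random 1f sequences to fake a sample with n_sum max. open channels."""
    idx = rng.choice(len(signals_1f), size=n_sum, replace=False)
    signal = np.sum([signals_1f[i] for i in idx], axis=0)
    open_channels = np.sum([channels_1f[i] for i in idx], axis=0)
    # rescale the summed signal to match the mean/variance of the real group
    signal = (signal - signal.mean()) / (signal.std() + 1e-9)
    signal = signal * target_std + target_mean
    return signal, open_channels
</code></pre>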
<p>My wavenet scored around <strong>cv .9388, lb .941</strong>
After <strong>augmenting data for 5\10 open channels</strong>, the wavenet scores <strong>cv .9423, lb: .945</strong>
I've shared my code here: <a href="https://www.kaggle.com/khyeh0719/wavenet-with-augmentation-2">https://www.kaggle.com/khyeh0719/wavenet-with-augmentation-2</a>
which scores around .944 on public lb for a single model with single fold (no RFC features)</p>
<p>Hopefully, this method could be used extensively to create signals\open_channels without actually generating the signal (but assumingly the mean\variance of each open channel state is known in advance :p)</p>
<p>Anyways, I hope you like it and find it interesting! Thanks to Kaggle and the host for such an interesting competition, I really enjoyed it, and finally big congrats to all the winners!</p>
University of Liverpool - Ion Switching | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>UPDATE: All solution code is now available on my GitHub: <a href="https://github.com/stdereka/liverpool-ion-switching">https://github.com/stdereka/liverpool-ion-switching</a></p>
<p><strong>Introduction</strong></p>
<p>Before I start describing my approach, I would like to give many thanks to the following kagglers:
1. <a href="/eunholee">@eunholee</a> for discovering the most effective way of removing drift, explained in this <a href="https://www.kaggle.com/eunholee/remove-drift-using-a-sine-function">kernel</a>.
2. <a href="/friedchips">@friedchips</a> for diving really deep into markovian nature of the data. His posts and kernels spurred me to learn more about HMM (it didn't help me in the last weeks of the competition, but I experienced true pleasure learning the subject).
3. <a href="/kakoimasataka">@kakoimasataka</a> for attracting my attention to the noise component of the data. His <a href="https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise">kernel</a> allowed me to understand how I should treat the noise in my preprocessing.</p>
<p>There are several things I accepted at an early stage:
1. Preprocessing and data augmentation contribute more to the final score than model tuning. One of the early breakthroughs of this competition was drift removal.
2. In the 0.935+ zone, even a very small (5 or 6 digits after the decimal point) improvement of the score (CV or LB) is important and should be taken into account.</p>
<p><strong>Preprocessing and data leak</strong></p>
<p>I described the process of data cleaning and creating new data <a href="https://www.kaggle.com/stdereka/2nd-place-solution-preprocessing-tricks">here</a>. Unfortunately, the private part of the test dataset is corrupted by a leak. It has been already revealed by the other team in this <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/153824">post</a>, so I am not going to describe it here. My personal history with this leakage is following. With my way of creating new data, I faced this leak in my local CV and I had to fight with it (I explain how in the modeling section) in order to get reliable CV score. Less than two days before the end of the competition I checked the test data for this kind of leakage. The result shocked me: I could never imagine that I can find it in the private part!</p>
<p><strong>Modeling</strong></p>
<p>As a baseline for my final model I used Wavenet-based architecture, which was initially posted in this <a href="https://www.kaggle.com/siavrez/wavenet-keras">kernel</a>. Following ideas worked for me:
1. Early stopping, learning rate scheduler. My model's convergence behavior was slightly different for different folds, so these simple tools helped me to control overfitting.
2. Data augmentation. I used flipping the signal and adding a random shift to the signal (a small sketch is given after this list). Experiments showed that model predictions are extremely sensitive to a shift, and this augmentation helped the model to become more generalisable.
3. RFC probabilities as features for NN. Initially proposed in this <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/144645">post</a>, the effectiveness of this stacking has been confirmed in my experiments.
4. CV strategy. As I trained on different augmented datasets (with different size and target distribution), I had to validate on a subdataset, which would be common for all synthetic datasets. To do so I have chosen the original non-synthetic data. OOF score, obtained on this subset, was used to compare different models, trained on possibly different data. The original train data is not corrupted by the leak mentioned above, so it was safe, and as I got to know after the competition deadline, it was correlated with private LB.
5. One model for all groups of data. In case of NN-based models, it was not necessary to train different models for different groups. Moreover, I observed some sort of synergy between the groups: separate models had lower CV than one general model.</p>
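<p>A small sketch of the flip-and-shift augmentation mentioned in point 2; the shift range here is illustrative, not the one actually used:</p>
<pre><code>import numpy as np

def augment(signal, open_channels, rng=np.random):
    """Randomly time-reverse the sequence and add a random constant shift to the signal."""
    if rng.rand() < 0.5:
        signal = signal[::-1].copy()
        open_channels = open_channels[::-1].copy()
    shift = rng.uniform(-0.1, 0.1)      # illustrative range
    return signal + shift, open_channels
</code></pre>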
<p><strong>Final submissions</strong></p>
<ol>
<li><p>My first submission is a blend of two models, trained with slightly different scaler and augmentations. No leak is exploited here, this submission was planned before I found the leakage. CV 0.94359, public 0.94664, private LB 0.94529.</p></li>
<li><p>The second one differs from the first only in 7th batch of the test data. To predict this batch I trained a separate model on reduced signal and then added subtracted channels to its prediction.</p></li>
</ol>
<p><strong>Conclusions</strong></p>
<p>I have mixed feelings about this competition.</p>
<p>On the one hand, I haven't built a model, which can handle real-world ion switching data. The others published by this moment solutions are also barely applicable to a data different from competition dataset. For example, the drift problem hasn't been solved and all top-scoring models are trained on the clean data. The leak I have found in the last days of competition leaves no confidence in the data. I realized that this kind of leakage can be implicitly hidden in the rest of the data, so complex models can exploit it automatically.</p>
<p>On the other hand, I worked hard for more then a month and a half and enjoyed improvement of my models, it was really funny and cool. Concerning the leak, I think that the task of searching vulnerabilities in the data is valuable in itself. In the long run it can help to build reliable models. I cannot be sure, but probably the knowledge of possible leaks in ion switching data will help electrophysiologists to improve their existing models.</p>
<p>In any case, I am grateful to the organizers and participants of the competition for an unforgettable experience.</p>
<p><strong>P.S. I am not a native English speaker, so if you find any mistake, please, let me know. If you have any questions and need a clarification on some details, don't hesitate to ask. I'd also appreciate any feedback (positive or negative) you leave in the comments.</strong></p>
<ol>
<li>I use group KFold on all available data, but compute the metric only on OOF predictions which correspond to the original data:
<code>f1_score(ground[:5_000_000], predictions_oof[:5_000_000], average='macro')</code>
The data after 5000000 is synthetic.</li>
<li>signal + lag and lead features</li>
</ol>
<p>Thank you! Finding the leak in the last days of the competition was really frustrating for me. I am still frustrated: the thing I fought against in CV arises in the test set and gives the final boost; that is incredible...</p>
<p>Thank you!</p>
<p>> Such simple features + wavenet give you 2nd place</p>
<p>If you add 1D convolution block before wavenet block, you will not even need lag and lead features. Basically, NNs don't require a lot of features: they are designed to find these features. For this reason I don't remove periodic noise as wavenet is capable of treating it by itself.</p>
<p><a href="/group16">@group16</a> Here I cannot disagree with you. Nevertheless, the results of this competition could be a good reason to verify the original <a href="https://www.nature.com/articles/s42003-019-0729-3">paper</a>, according to it some synthetic data was also used. In addition, it is possible that teams (I don't know exactly about your team) who used <code>group_4 + group_4 = group_3</code> magic which I explained in my kernel have implicitly exploited the same leak.</p>
<p><a href="/khahuras">@khahuras</a> I appreciate any <em>constructive</em> feedback. I think that <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/153689#862359">making jokes from a pure language</a> is <em>unconstructive</em> and even toxic. <em>I cannot be sure, but probably</em> you should take into account, that English is not my native language and I don't always choose the right register for my phrases.</p> |
University of Liverpool - Ion Switching | 310th place solution (large shake up solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>we learn more from failure than success</strong> 😊 </p>
<p>I would like to thank my talented teammates <a href="/kakoimasataka">@kakoimasataka</a> and <a href="/tonychenxyz">@tonychenxyz</a>.
You guys are awesome, I learned a lot from you, thanks for all the wisdom :)</p>
<p>Some of my favorite learnings from this competition (top 10):</p>
<ol>
<li><a href="https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise">Remove/Pick up electric? noise</a></li>
<li><a href="https://www.youtube.com/watch?v=EqUfuT3CC8s">Markov Models</a></li>
<li><a href="https://www.youtube.com/watch?v=5araDjcBHMQ">Hidden Markov Models</a></li>
<li><a href="https://www.youtube.com/watch?v=spUNpyF58BY">But what is the Fourier Transform? A visual introduction.</a></li>
<li><a href="https://en.wikipedia.org/wiki/Band-pass_filter">Band-pass filter</a></li>
<li><a href="https://www.kaggle.com/mobassir/understanding-ion-switching-with-modeling">Understanding Ion-Switching with wavenet</a> and <a href="https://www.kaggle.com/sggpls/wavenet-with-shifted-rfc-proba">Wavenet with SHIFTED-RFC Proba</a></li>
<li><a href="https://en.wikipedia.org/wiki/The_Hum">The Hum</a></li>
<li><a href="https://en.wikipedia.org/wiki/Band-stop_filter">Band-stop filter</a></li>
<li><a href="https://www.youtube.com/watch?v=CaCcOwJPytQ">What is a Kalman Filter?</a></li>
<li><a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/133874">What is Drift?</a> and
<a href="https://www.kaggle.com/tarunpaparaju/ion-switching-competition-signal-eda">Ion Switching Competition : Signal EDA</a></li>
</ol>
<h1>Our Model</h1>
<p>We create a fake signal using Gaussian noise and 50 Hz noise. We use the Fourier transform to create the hum noise, and the mean and std to create the Gaussian noise.
Then we pass our fake signal to the pipeline and a FEATURE is created; the FEATURE mainly uses the GMM and HMM results. We use the mean and std to help convergence or to avoid convergence to local solutions.
The average of each channel's signal is aligned to 0 based on the HMM predictions. We hoped that by doing so, we could use all the data to detect anomalies even in a small class, such as channel 10. We stopped using all the data along the way, though.</p>
<h1>Pipeline</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2034058%2Ff5d3e7be570e31427da634b4662344ab%2FUntitled%20Diagram%20(1).png?generation=1590458481854840&alt=media" alt=""></p>
<p>If you don't understand what we mean by "aligning the mean with a vertical shift", then check this kernel: <a href="https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise">Remove/Pick up electric? noise</a></p>
<p><a href="/muhakabartay">@muhakabartay</a> it helped a bit</p>
<p><a href="/robikscube">@robikscube</a> thank you a lot.
I got a free medal in the Lyft competition after working for only 2 days,
and in this competition, after many hours of work, we ended up overfitting.
So I think this medal-less solution is my real success and that medal was my failure, but people will mark that one as a success.</p>
<p>Thank you a lot for keeping the discussion forum busy; we regularly discussed many of the posts from you and other kagglers like you, and it helped us learn something at least. Thank you :)</p>
University of Liverpool - Ion Switching | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I was not afraid of overfitting due to the high score of 0.946. I had inspected the results and they seemed not to include major errors.
When using stacking it is better to use different models on different levels. I was using Keras models as input to another Keras model. This seemed to me to be a risk, even if they have different architecture.
I then created first models without the Keras stack, but with BiGRU for feature extraction. The new models had lower CV results than models similar to my best results, but higher LB results. They seemed to generalize better. Therefore I focused on eliminating the stacking of Keras models. </p>
<p>Thanks for your comments. I saw the use of GroupKFold, but I found the implementations clumsy and used a different way to achieve similar results. With
<code>X = df_train.signal.reshape(-1, segment_length)</code> and
<code>y = df_train.open_channels.reshape(-1, segment_length)</code>
I get sequences of adjacent records of length "segment_length".
(Here I also ended up with segment_length 4000, after also testing 1000, 500 and later even 10000.) With
<code>group = df_train.batch.values.reshape(-1, segment_length)</code> and
<code>group = group[:,0].reshape(-1,)</code>
I have assigned a batch number to each sequence (using the fact that 4000 is a divisor of the length of each batch). I can now use StratifiedKFold on the group and split using the first index of X by
<code>X_train, y_train = X[train_idx], y[train_idx]</code>.
Now X_train has shape (-1, segment_length) and includes sequences of 4000 adjacent records.
This is (almost) trivial coding that can also be used for storing oof and test predictions.</p>
<p>First, I would like to thank Kaggle and the host @richardbj for organizing this competition. I find the competitions with scientific and mathematical background the most interesting. Of course a leak is always bitter.
The solution I created is the average of 4 very similar Keras models. They are based on encoding the signals based on the results of Gaussian Mixture Models, initial processing (head) using Dense and BiGRU layers for feature creation, a Wavenet Model for main processing and final steps creating the classification using Dense layers. Additionally two tree models based on Random Forest Classifier (RFC) and Histogram-based Gradient Boosting Classification (HGB) are used as stacked input.
Following is a description of the individual parts</p>
<h2>Data Preparation and Preprocessing</h2>
<p>The data is separated into batches of 500000 records each. This is done with the dataset df (for train and test) through df.batch = df.index // 500000. In the same manner a split into segments of 100000 records and windows of 10000 records are created. </p>
<p>Initially the segments can be classified into segment types 1,...5 based on the values of mean standard deviation over the segments. </p>
<p>As identifed by @cdeotte the signals contain a sinusoidal drift. This drift is removed automatically for train and test data using a python script. This is done by fitting and subtracting a sinusoidal curve from the signal over a complete batch and then a segment if not possible for the batch. </p>
<p>The important part here as well as for the definition of segment types is to do this automatically without any visual inspection and being able to execute the scripts on new test data. Otherwise the solution would be invalid.</p>
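<p>A minimal sketch of the fitting idea (not the automated script itself), assuming one batch of the raw signal sampled at 10 kHz:</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit

def remove_sine_drift(signal, fs=10_000.0):
    """Fit a slow sinusoid a*sin(2*pi*f*t + phi) + c to the batch and subtract it."""
    t = np.arange(len(signal)) / fs
    def drift(t, a, f, phi, c):
        return a * np.sin(2 * np.pi * f * t + phi) + c
    p0 = [1.0, 0.01, 0.0, signal.mean()]    # rough initial guess for a slow drift
    params, _ = curve_fit(drift, t, signal, p0=p0, maxfev=10_000)
    return signal - drift(t, *params) + params[-1]   # keep the constant offset
</code></pre>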
<p>The resulting signals are visualized below as discussed in the forum already. For the train data with batch number
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fbc203e2b2a812b4bf9e4eb78b5c72764%2Fimage1.png?generation=1590499148853904&alt=media" alt="">
and the test data with segment id.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fcc4ca985c2b1673c799692ef8e8a5230%2Fimage2.png?generation=1590499399969186&alt=media" alt=""></p>
<p>The training data contains outliers in batch 8. I did not see any way of correcting this and therefore cut this out of the training data.
When taking the signal values for each open channel and building the mean for each window one gets an interesting plot
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fedc2d509486f36f69aa2a546c59d7a6d%2Fimage3.png?generation=1590499712443407&alt=media" alt="">
This shows that the signal strengths for an open channels value are approximately equal for segment types 1, 2, 3 and 4, but shifted for segment type 5.
I evaluated the possibility to use the segment type for models and even split models. I came to the conclusion that this would be an error and could overfit badly. Instead I grouped segment type 1, 2, 3 and 4 into a segment group 0 and segment type 5 into segment group 1 and used the segment group as a feature in modeling.</p>
<h2>Gaussian Mixture Models</h2>
<p>The target variable Y = number of open channels is based on an HMM. The signal X is a function of Y, but with added Gaussian noise. Evaluating the conditional distribution P(X|Y=k, Segment) gives Normal distributions with means mu_k = mu_0 + k*delta, but with standard deviations that differ slightly between segment types and are larger for segment group 1. The image below shows an approximation of the signal through a GMM for one sample segment:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fbf357c97e47e81ec9ea9b18a8ed6bc56%2Fimage4.png?generation=1590500503459258&alt=media" alt="">
The red curve shows the weighted sum of normal distributions with center corresponding to the discrete mean values (vertical lines) associated to the open channels value and the weight corresponding to the fraction of target values equal to that open channels value.</p>
<h2>Signal Encoding</h2>
<p>The modeling of the signals as a GMM enables encoding the signal X as a vector (f_k), k = 0,…,10, using the normal density function N(mu_k, sigma)(X), with the mean mu_k and sigma depending on the segment type.
This encoding of the signals is one key input into the models.</p>
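<p>A minimal sketch of this encoding, assuming <code>mu0</code>, <code>delta</code> and <code>sigma</code> have been estimated from the Gaussian mixture fit for the relevant segment type:</p>
<pre><code>import numpy as np
from scipy.stats import norm

def encode_signal(signal, mu0, delta, sigma, n_channels=11):
    """Encode each signal value as the vector of densities N(mu_k, sigma)(x), k = 0..10."""
    mus = mu0 + delta * np.arange(n_channels)
    # result has shape (len(signal), 11): one density per possible open-channels value
    return norm.pdf(np.asarray(signal)[:, None], loc=mus[None, :], scale=sigma)
</code></pre>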
<h2>Models</h2>
<p>The main models are based on Keras Wavenet models with specific head and tail processing.
Initially I focused on sequence processing using BiGRU, but such models are much slower and therefore did not allow as much experimentation. I did not manage to get a better score than 0.942 on the public LB.
When changing to Wavenet I initially used RFC and HGB models as additional input, but also the already created Keras models with BiGRU as stack input. This reached 0.946 on the public LB, but I had the impression it would overfit. I therefore dropped the BiGRU models as stack input and included BiGRU as a processing step together with the Wavenet model.
The local CV results actually dropped, but the LB results increased and inspecting the predictions gave me seemingly better results. This gave me the final models.
For the Keras models I did not use early stopping. I saved 3 models: with the best F1 score, with the lowest categorical cross-entropy, and the latest (for stacking purposes). When initially testing on the LB, the one stored based on “best f1 score” always had a better LB score than the “lowest loss” one. I therefore stuck to that.
The Keras models were trained using Adam optimizer and SWA as well as a One Cycle Scheduler with max learning rate of 1.0e-3 and 300 – 350 epochs. </p>
<h2>Cross Validation</h2>
<p>For validation I used 5 fold CV. Prior to splitting the train data into train and validation data I split it into sequences of 4000 records and used StratifiedKFold on the batch as group. This makes certain that the batches are distributed equally among the folds. But what about the target variable? For this I looked at the distribution of the target variable for different random state values. The distribution depends strongly on that value and I looked for a value giving similar distribution of each class for both train and validation folds. So which one to choose? The answer is not 42, but one good answer was 43 (also 53 or 132).
For the tree models used as stacked input it is important not to use the validation folds for early stopping, but only for storing the out-of-fold predictions. Both RFC and HGB support this directly. I also tried Catboost with cross-validation for determining the number of iterations, but did not use this in the final models. I was not able to get anything sensible out of LightGBM, and (my version of) xgboost had errors that did not allow me to use it.</p>
<h2>Analysing Predictions on Test Data</h2>
<p>The CV results turned out to correlate very well with the LB score, but there were potential issues, in particular with the different test data for segment type 1. In the train data these segments only had open channels values of 0 or 1, but in the test data they turned out to have higher values of 2, 3, or even 4. </p>
<p>For validating this I created scatter plots of the signals with predicted open channels values for each window (every 10000 records) showing horizontal lines for the relevant mean values from the GMM models. The plot below shows two windows
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fa269cd30482eb4500f7d64c43dcc1a95%2Fimage5.png?generation=1590501512153335&alt=media" alt=""></p>
<p>Looking at these different windows one can check whether there are “obvious” errors and try to enhance the models. When using the Wavenet model alone I saw such errors, which were then corrected by adding a BiGRU layer.</p>
<h2>Some Final Words</h2>
<p>I see the models as a good basis for further work. I had limited hardware resources (a single GTX 1080Ti) and could not analyze as many variants as I wished. It can definitely be improved.
Finally: It is unfortunate that a leak occurred and partly spoiled the results.</p> |
University of Liverpool - Ion Switching | 59th place solution [Single Model] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. In the beginning, I was just building a bi-directional NN. After doing some experiments I tried skip connections, and it gave a boost in some classes (I think for open_channels = 8 and 9). </p>
<p>Dear All,
Here is my model (GRU + WaveNet) that I used in this competition. This model is able to achieve 0.943 on the public LB using only cleaned signals.
Additional features: K-Means clustering of the signal, transformed into a one-hot encoding.
Loss: Focal Loss.
Key points:
1. Dropout reduces the overfitting.
2. Concatenating the GRU and WaveNet outputs boosts the score. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1970439%2Fce19354cb230dd9c9c44101ad088941c%2Fkaggle_model.png?generation=1590482812868883&alt=media" alt=""></p>
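<p>A small sketch of the K-Means + one-hot feature described above (the number of clusters and the 1-D input arrays are my assumptions, not necessarily the values actually used):</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans

N_CLUSTERS = 11  # illustrative: one cluster per possible open-channel count
km = KMeans(n_clusters=N_CLUSTERS, random_state=0)
train_cluster = km.fit_predict(train_signal.reshape(-1, 1))
test_cluster = km.predict(test_signal.reshape(-1, 1))

# one-hot encode the cluster ids as extra input channels alongside the raw signal
train_onehot = np.eye(N_CLUSTERS)[train_cluster]
test_onehot = np.eye(N_CLUSTERS)[test_cluster]
</code></pre>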
<p>Thanks for your comment. I was also thinking the same before applying k-means clustering.
On the raw signal, I haven't tried this architecture. But I tried a bidirectional GRU without k-means clustering, and I obtained around 0.93996 (on private LB) and 0.94142 (on public LB). </p>
<p>The issue on the raw signal might be the encoding with k-means clustering. </p>
University of Liverpool - Ion Switching | 5th Place Solution with Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Everyone,
Thanks to Kaggle team, organizers, Kaggle community for their contributions and my brilliant teammates <a href="https://www.kaggle.com/titericz">Giba</a>, <a href="https://www.kaggle.com/aerdem4">Ahmet</a>, <a href="https://www.kaggle.com/skgone123">Skgone </a> and <a href="https://www.kaggle.com/lihuajing">Alex </a>for their efforts. It was real fun to work with them.</p>
<p>I shared a post about our journey in this competition with some technical details previously;
<a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/153688">https://www.kaggle.com/c/liverpool-ion-switching/discussion/153688</a></p>
<p>This time, I will briefly summarize our final pipeline and share the code for each step.</p>
<h3><strong>1- Clean Dataset Creation</strong></h3>
<p>This is the first and most important step in our progress. Until <a href="https://www.kaggle.com/titericz">Giba</a> removed the drift and other kinds of noise himself, we couldn't get a reliable CV with Chris' data. Other teams may have set up a reliable CV on Chris' dataset, but it didn't work for us after some point (thanks to <a href="https://www.kaggle.com/cdeotte">Chris</a> for his generosity in this competition). It's a bit hard to prevent overfitting on a synthetic and noisy dataset, especially when you are using multiple OOFs for stacking. <a href="https://www.kaggle.com/titericz">Giba</a> shared his excellent work with full explanations; please see his notebook for more details:
<a href="https://www.kaggle.com/titericz/remove-trends-giba-explained">https://www.kaggle.com/titericz/remove-trends-giba-explained</a></p>
<h3><strong>2- Data Augmentation</strong></h3>
<p>We found it useful to augment a new 10-channel batch by summing the 5th and 8th batches, because 10-channel batches are simply the sum of two 5-channel batches. To be more specific, it's the sum of two different (5 HMM Bs + pink noise + white noise).</p>
<pre><code>train['group'] = np.arange(train.shape[0])//500_000
aug_df = train[train["group"] == 5].copy()
aug_df["group"] = 10

for col in ["signal", "open_channels"]:
    aug_df[col] += train[train["group"] == 8][col].values

train = train.append(aug_df, sort=False).reset_index(drop=True)
</code></pre>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F612360%2F7381c147dcb571e8290ba8b63d456a8e%2Fpublic_aug.png?generation=1590590911570595&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F612360%2F92a489aa6d2c2e6fdb7bdf7b57256896%2Fprivate_aug.png?generation=1590591833151679&alt=media" alt=""></p>
<h3><strong>3- Validation Scheme and OOF Creation</strong></h3>
<p>We used the same validation for all models, including the ones for OOF creation: Group 5-Fold over 4k sequences. This validation works fine with Giba's clean dataset. Before we came to trust this validation scheme, we were using 2-fold CV to test different things, based on these groups divided equally in terms of batch types:</p>
<pre><code>groups = [[0, 2, 3, 4, 5], [1, 6, 7, 8, 9]]
</code></pre>
<p>We tried 3 LGBs, 1 RFC, 1 MLP, 1 BiGRU, 1 KNN and 1 SVM for stacking. We used the same validation scheme and applied the same data augmentation above to create the OOFs. Only 3 models' OOFs were added to the wavenet (click on the model names to reach their notebooks):</p>
<ul>
<li><p><a href="https://www.kaggle.com/meminozturk/into-the-wild-rfc-classification">Random Forest Classifier,</a> similar to <a href="https://www.kaggle.com/sggpls">Sergey's</a> public kernel (thanks to Sergey for his contributions). We applied it on our own dataset and calculated lead/lag features grouped by 10-second batches to avoid leakage from other batches, for both train and test.</p></li>
<li><p><a href="https://www.kaggle.com/meminozturk/into-the-wild-lgb-regression">Lightgbm Regression,</a> simply the Lightgbm Regression version of the RFC model with the same lead/lag features. <a href="https://www.kaggle.com/lihuajing">Alex</a> had created this model before we teamed up. I tried hyperparameter optimization and increased its CV by 0.01; however, wavenet worked better with the initial parameters.</p></li>
<li><p><a href="https://www.kaggle.com/meminozturk/into-the-wild-mlp-regression">MLP Regression,</a> the MLP part of <a href="https://www.kaggle.com/martxelo">Marcelo's</a> kernel with some adjustments.</p></li>
</ul>
<h3><strong>4- Wavenet Model</strong></h3>
<strong>(Private Score: 0.94559 - Public Score: 0.94673)</strong>
<p>Our main model is a wavenet like most teams', a better version of <a href="https://www.kaggle.com/nxrprime">Trigram's</a> great kernel with some more tricks:</p>
<p>- Own dataset and OOFs<br>
- No other signal-related features<br>
- Simpler but effective Wavenet (tuned by <a href="https://www.kaggle.com/aerdem4">Ahmet</a>)</p>
<pre><code>inp = Input(shape = (shape_))
x = wavenet_block(inp, 16, 3, 8)
x = wavenet_block(x, 32, 3, 12)
x = wavenet_block(x, 64, 3, 4)
x = wavenet_block(x, 128, 3, 1)
out = Dense(11, activation = 'softmax', name = 'out')(x)
model = models.Model(inputs = inp, outputs = out)
opt = Adam(lr = LR)
model.compile(loss = losses.CategoricalCrossentropy(), optimizer = opt, metrics = ['accuracy'])
</code></pre>
<p>- Simpler LR scheduler<br>
- Used the best weights based on F1 macro for prediction<br>
- Test-time flip augmentation added<br>
- Train-time flip augmentation used as in the original kernel<br>
- Sliding window prediction (implemented by <a href="https://www.kaggle.com/aerdem4">Ahmet</a>)</p>
<pre><code>def sliding_predict(model, x):
    # accumulate predictions from overlapping windows over each 100k-sample sequence
    pred = np.zeros((x.shape[0], 100_000, 11))
    for begin in range(0, 100_000, SLIDE):
        end = begin + GROUP_BATCH_SIZE
        pred[:, begin:end, :] += model.predict(x[:, begin:end, :])
    return pred
</code></pre>
<p>It scored 0.94559, which is better than our current private score, without any blending; <a href="https://www.kaggle.com/meminozturk/into-the-wild-wavenet/">the notebook is available here</a>.</p>
<h3><strong>5-XGB Classifier for Submission</strong></h3>
<strong>(Private Score: 0.94555 - Public Score: 0.94686)</strong>
<p>Ahmet implemented an XGB Classifier to tune the predictions from the Wavenet. It simply uses the signal, the wavenet predictions and the noise as features:</p>
<pre><code>train['noise'] = train['signal'].values - train['wavenet_prediction'].values
test['noise'] = test['signal'].values - test['wavenet_prediction'].values
</code></pre>
<p>This step gave us a nice boost on the public LB, but the results are almost the same on the private leaderboard without XGB. On the other hand, our best private submission was the result of this XGB model, with a score of 0.94568 on the private leaderboard, <a href="https://www.kaggle.com/meminozturk/into-the-wild-xgb-submission">which is available here</a></p>
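<p>For illustration, the post-processing step could look roughly like this (hyperparameters and feature names here are placeholders; the exact setup is in the linked notebook):</p>
<pre><code>import xgboost as xgb

FEATURES = ['signal', 'wavenet_prediction', 'noise']

# fit a classifier that "corrects" the wavenet output using the residual noise
clf = xgb.XGBClassifier(n_estimators=500, max_depth=6, learning_rate=0.05)
clf.fit(train[FEATURES], train['open_channels'])

test['open_channels'] = clf.predict(test[FEATURES])
test[['time', 'open_channels']].to_csv('submission.csv', index=False, float_format='%.4f')
</code></pre>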
<p>I hope you find it useful what we've done, which is open to play with.</p>
<p>Hope to see you in the upcoming competitions,
Emin</p>
<p>Thanks Chris, congrats to you and the team! For a long time we thought that you had solved the competition by reverse-engineering. We realized that you were working on ML models like us towards the end of the competition, since we were able to pass you on the LB 😃</p>
<p>All of this information is from the original paper, see images on Page 5:
<a href="https://www.nature.com/articles/s42003-019-0729-3.pdf">https://www.nature.com/articles/s42003-019-0729-3.pdf</a>
You can read Giba's new post about this;
<a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/154197">https://www.kaggle.com/c/liverpool-ion-switching/discussion/154197</a>
Plus, There was a long discussion with nice comments during the competition;
<a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/140634">https://www.kaggle.com/c/liverpool-ion-switching/discussion/140634</a></p>
<p><strong>All links in one place</strong>
<strong>Clean Dataset;</strong>
<a href="https://www.kaggle.com/titericz/remove-trends-giba-explained">https://www.kaggle.com/titericz/remove-trends-giba-explained</a></p>
<p><strong>Notebooks to Create OOFs;</strong>
<a href="https://www.kaggle.com/meminozturk/into-the-wild-rfc-classification">https://www.kaggle.com/meminozturk/into-the-wild-rfc-classification</a>
<a href="https://www.kaggle.com/meminozturk/into-the-wild-mlp-regression/">https://www.kaggle.com/meminozturk/into-the-wild-mlp-regression/</a>
<a href="https://www.kaggle.com/meminozturk/into-the-wild-lgb-regression">https://www.kaggle.com/meminozturk/into-the-wild-lgb-regression</a></p>
<p><strong>Wavenet Model;</strong>
<a href="https://www.kaggle.com/meminozturk/into-the-wild-wavenet/">https://www.kaggle.com/meminozturk/into-the-wild-wavenet/</a></p>
<p><strong>XGB Submission Model;</strong>
<a href="https://www.kaggle.com/meminozturk/into-the-wild-xgb-submission">https://www.kaggle.com/meminozturk/into-the-wild-xgb-submission</a></p> |
University of Liverpool - Ion Switching | 6th place solution, Factorial HMM with a shared factor | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank your for your comment. You are nice guy! Your team was a good competitor in this competition. I will do my best to have a good match with you again.</p>
<p>Our model is one of factorial hidden Markov models (FHMM). See this paper for details of FHMM.
<a href="http://www.ee.columbia.edu/~sfchang/course/svia-F03/papers/factorial-HMM-97.pdf">http://www.ee.columbia.edu/~sfchang/course/svia-F03/papers/factorial-HMM-97.pdf</a></p>
<p>The FHMM for this competition is illustrated in the figure below. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4168019%2F29feccf2b47f6aff20f69b5ef180a212%2FFHMM.png?generation=1590495412135845&alt=media" alt=""></p>
<p>\( M\) and \( N\) represent the number of channels and time steps respectively. \( Z_{mn} \in \{0,1,2,3\} \) represents the hidden state of channel \( m\) at time \( n\). The hidden state of each channel follows the 1st order Markov process independently with the shared transition probabilities; \( P(Z_{m,n+1}=k | Z_{mn}=j) =A_{jk} \). \( C_n\) is the number of open channels at time \( n \) and it is calculated as \( C_n = \sum_{m=1}^M F(Z_{mn}) \), where \( F(0)=F(1)=0\) (close states) and \(F(2)=F(3)=1\) (open states). \( S_n\) represents signal at time \( n\) and it is given by Gaussian emission; \( P(S_n| C_n) = \frac{1}{\sqrt{2\pi}\sigma} \exp \left[ -\frac{1}{2\sigma^2} (S_n-aC_n-b)^2\right] \). </p>
<p>In our model above the adjustable parameters are \(A,a,b~{\rm and}~\sigma \).
In the training phase \( A \) is optimized, and the other parameters are optimized in the prediction phase. </p>
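<p>For illustration, a small sketch of the Gaussian emission term \( P(S_n|C_n) \) defined above (the values of \( b \) and \( \sigma \) below are placeholders; \( a \simeq 1.234 \) is the value mentioned in the prediction phase):</p>
<pre><code>import numpy as np

def emission_probs(signal, a, b, sigma, max_channels=10):
    """P(S_n | C_n = c) for c = 0..max_channels, returned as an (N, max_channels + 1) array."""
    c = np.arange(max_channels + 1)
    diff = signal[:, None] - (a * c[None, :] + b)
    return np.exp(-0.5 * (diff / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

# b and sigma below are illustrative, not the optimized values
P_sig = emission_probs(signal, a=1.234, b=0.0, sigma=0.3)
</code></pre>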
<p>(1) Training phase; obtain A <br>
Using the time-series of open_channels in the train data, the transition probability A is optimized to maximize the likelihood function \( P(C)=\sum_{Z} P(C,Z) \), where \( C=(C_1,C_2,...,C_N) \) and \( Z =(Z_{11},Z_{21},...,Z_{M1},Z_{12},Z_{22},...,Z_{M2},...,Z_{1N},Z_{2N},...,Z_{MN}) \). The usual EM algorithm is used for this maximization. The computational cost is \( O({}_{M+3} C_{3} N) \) for one step of the EM algorithm. This cost is largely reduced from the cost \( O(4^{M+1} N) \) of the original FHMM in the above paper. <br>
Below notebook shows the implementation for this phase. <br>
<a href="https://www.kaggle.com/shimizumasaki/6th-place-solution-training-phase">https://www.kaggle.com/shimizumasaki/6th-place-solution-training-phase</a></p>
<p>(2) Prediction phase; obtain \( a,b,\) and \( \sigma \) <br>
In the test data, predictions of the number of "open_channels" at each time point, \( C_{n} \), are calculated as below. Using the time-series of "signal" \( S \) in the test data and the transition probability obtained in phase (1), the remaining parameters, \( a,b,\sigma\), are optimized to maximize the likelihood function \( P(S)=\sum_{C,Z} P(S,C,Z) \). This optimization is basically the same as in (1). The posterior probability of \(C_n\) under a given signal, \( P(C_n|S) \), can be obtained at the end of the EM algorithm for this optimization. Then, predictions of "open_channels" are decided by MAP estimation, \(Pr_n={\rm argmax}_{C_n} P(C_n|S) \). <br>
NOTE: We fixed \( a \simeq 1.234 \) since the EM algorithm fails to obtain it in the sections with the lowest open activities. </p>
<p><a href="https://www.kaggle.com/shimizumasaki/6th-place-solution-fhmm-with-a-shared-factor">https://www.kaggle.com/shimizumasaki/6th-place-solution-fhmm-with-a-shared-factor</a></p>
<p>(3) noise removal <br>
Using the predicted time-series of "open_channels", \(Pr \), the time-series \(S - aPr -b \) is decomposed into Fourier series by short-time Fourier transform. Then, several peaks of the power spectrum are removed, and the inverse short-time Fourier transform is used to obtain a clean signal. Phases (2) and (3) are repeated several times to clean the signal gradually. See the above notebook for details of our noise cleaning. However, we don't think our cleaning process is a good way to do it. </p>
<p>Thanks. I'm sorry too! Now I'm happy! </p> |
University of Liverpool - Ion Switching | 7th Place Solution - Part 1: Removing Noise from Signal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winners!</p>
<p>First of all I want to thank my wonderful teammates <a href="/philippsinger">@philippsinger</a>, <a href="/dott1718">@dott1718</a> and <a href="/cdeotte">@cdeotte</a> for the awesome collaborative teamwork. Everyone gave so much effort to get the best result and I am already looking forward to teaming-up with you again.</p>
<p>In this write-up I will get into details of our routine to remove noise from the provided signal. Afterwards, <a href="/cdeotte">@cdeotte</a> will go into more detail about the models that we used in our final submission. See his discussion <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/154253">here</a>.</p>
<h1>Data Cleaning</h1>
<p>One of the most difficult but at the same time also one of the most rewarding things to do in this challenge was the cleaning of the dataset (train and test). Prior to teaming-up, I had exclusively worked on that part and only used public kernels such as the WaveNet kernel. Thank you <a href="/siavrez">@siavrez</a> for this kernel! I also found HMM (just straightforward hmmlearn) to be a good choice for quickly testing the cleaning but we didn’t use HMM in our final solution.</p>
<p>The original data could be divided into batches of 10 s or 50 s, and the “signal” already showed a Pearson correlation of 0.8017 with “open_channels”.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F651a59c5de2005b4b2c0bf1d878a98ff%2F__results___7_0.png?generation=1590526158397566&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F276ea09f74de3bbf52f13662f6fc177d%2F__results___8_0.png?generation=1590526175204985&alt=media" alt=""></p>
<p>For ease of understanding, I will refer to parts of the train and test set with their corresponding maximum open channels and opening probability or their type by "color". The “pink” parts in the test set are assumed to all have a maximum of 5 channels with low opening probability.</p>
<h2>Sine Drift</h2>
<p>Early on in the competition, an artificial sine drift added to parts of the train and test set has been spotted (<a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/133874">https://www.kaggle.com/c/liverpool-ion-switching/discussion/133874</a>).
The drift was identified to be a 50 s long half sine wave (or just the first 10 s of that) with an amplitude of 5 and may be removed with the following code snippet:</p>
<pre><code>def sine_func(x, a, b):
    return -a * np.sin(b * x)

sine_drift = sine_func(np.linspace(0, 499999, 500000), -5, np.pi / 500000)
sine_drift_short = sine_drift[:100000]

# Clean the train set
train_csv.loc[train_csv["100k_batch"] == 5, 'signal'] -= sine_drift_short  # batch 2 first segment
train_csv.loc[train_csv["500k_batch"] == 6, 'signal'] -= sine_drift
train_csv.loc[train_csv["500k_batch"] == 7, 'signal'] -= sine_drift
train_csv.loc[train_csv["500k_batch"] == 8, 'signal'] -= sine_drift
train_csv.loc[train_csv["500k_batch"] == 9, 'signal'] -= sine_drift

# Clean the test set
test_csv.loc[test_csv["100k_batch"] == 0, 'signal'] -= sine_drift_short
test_csv.loc[test_csv["100k_batch"] == 1, 'signal'] -= sine_drift_short
test_csv.loc[test_csv["100k_batch"] == 4, 'signal'] -= sine_drift_short
test_csv.loc[test_csv["100k_batch"] == 6, 'signal'] -= sine_drift_short
test_csv.loc[test_csv["100k_batch"] == 7, 'signal'] -= sine_drift_short
test_csv.loc[test_csv["100k_batch"] == 8, 'signal'] -= sine_drift_short
test_csv.loc[test_csv["500k_batch"] == 2, 'signal'] -= sine_drift
</code></pre>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F93bb094a57ab4145e141c9285beb27a0%2FBild1.png?generation=1590526366735290&alt=media" alt=""></p>
<h2>10 Channel Offset (“Ghost-Drift”)</h2>
<p>The next thing that became obvious in the data was a large offset between the 10-channel batches and the rest of the data. We now know where this difference originates from (the leak), but at the time of evaluating, we were not sure about the exact offset and we used various techniques for calculating it. Depending on the technique, this offset was calculated to be between -2.69 and -2.76. By removing that offset, neural nets may have an easier time at prediction by cross-learning from other batches. </p>
<p>A straightforward method for removing the offset is using a linear regression for the 10 channel batches (open_channels vs signal) and for the rest of the batches. </p>
<pre><code>def linear_fx(x, m, y0):
    return (m * x) + y0

xdata = train_csv_batch10["open_channels"]
ydata = train_csv_batch10["signal"]
popt1, pcov = curve_fit(linear_fx, xdata, ydata, method='lm')

xdata = train_csv_batch5["open_channels"]
ydata = train_csv_batch5["signal"]
popt2, pcov = curve_fit(linear_fx, xdata, ydata, method='lm')

static_diff = popt1[1] - popt2[1]
print("Offset 10c to rest:", static_diff)
</code></pre>
<p>While this method neglects possible differences in slope and minor offsets between batches, it gave a robust first guess of the actual offset.</p>
<p>Later on, <a href="/dott1718">@dott1718</a> developed a Gaussian Mixed Model for fitting the Gaussians of each batch using equal spacings and a binomial distribution. In our cross-validation we could not clearly identify which was doing better. </p>
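<p>A sketch of the idea behind such a constrained mixture — equally spaced component means with binomial mixture weights — fitted by maximum likelihood (the initial values, optimizer choice and <code>batch_signal</code> array are my assumptions; the actual fitting code may differ):</p>
<pre><code>import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize

def neg_log_likelihood(params, signal, n_channels):
    offset, spacing, sigma, p_open = params
    sigma = abs(sigma)                                   # keep the scale positive during optimisation
    p_open = np.clip(p_open, 1e-6, 1 - 1e-6)
    k = np.arange(n_channels + 1)
    weights = binom.pmf(k, n_channels, p_open)           # binomial mixture weights
    means = offset + spacing * k                         # equally spaced component means
    dens = (weights[None, :] * norm.pdf(signal[:, None], means[None, :], sigma)).sum(axis=1)
    return -np.sum(np.log(dens + 1e-12))

# fit offset / spacing / noise std / opening probability for one 5-channel batch
# (initial guesses below are illustrative)
res = minimize(neg_log_likelihood, x0=[0.0, 1.25, 0.3, 0.3],
               args=(batch_signal, 5), method='Nelder-Mead')
offset, spacing, sigma, p_open = res.x
</code></pre>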
<p>When adding a second dimension to this GMM, one can clearly identify a correlation between previous states and following states. It appears that the signal has a decay that is within the magnitude of the recording frequency of 10 kHz. That means that if a previous signal was larger than the current signal, the current signal will be slightly higher than expected. We assume this to be an artifact from the signal recording or from the electrical ion channel model (electrical circuit) and tried to make use of that feature (others called it memory) in any way to improve our models, but to our amazement, WaveNets (and probably many other ML models) seem to learn that behavior already on their own, and any additional cleaning did not yield improvements. We did gain a slight improvement in our test HMM using that knowledge. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F006251f677ff0d36da2c3381dd73ee51%2F2d_gmm_5chans.PNG?generation=1590526427020665&alt=media" alt=""></p>
<h2>Outlier removal</h2>
<p>In the first 500k batch and in the 8th 500k batch of the train set, severe outliers have been spotted. </p>
<p>The outlier in the first batch we simply replaced with Gaussian noise of an appropriate mean and variance. </p>
<pre><code>FIRST_OUTLIER = (47.858, 47.8629)
noise = np.random.normal(loc=-2.732151, scale=0.19, size=50)
train_csv.loc[(train_csv.time &gt;= FIRST_OUTLIER[0]) &amp; (train_csv.time &lt;= FIRST_OUTLIER[1]), "signal"] = noise
</code></pre>
<p>The second sequence with outliers, we just removed for training.</p>
<pre><code>SECOND_OUTLIER = (364.290, 382.2899)
train_csv = train_csv.loc[(train_csv.time &lt; SECOND_OUTLIER[0]) | (train_csv.time &gt; SECOND_OUTLIER[1]), :]
</code></pre>
<h2>Zero Level Offset</h2>
<p>Just like the 10 channel offset, we have a zero level offset when no channels are open. Simple linear regression can be used to shift the datasets from ~2.73 and align the “zero channel” with zero mean. We can also use the results of linear regression to extract the scaling between “signal” and “open_channels” which allows us to extract the noise from the train signal and study it. </p>
<h2>50 Hz Line Noise</h2>
<p>When plotting the FFT of the signal and of the noise, we can clearly see that most of the noise seems to be white Gaussian noise. But we can also identify some sharp peaks at 50 Hz, ~1150 Hz, ~1250 Hz, ~1450 Hz and ~3130 Hz. In Liverpool they use 50 Hz in their power lines, and applying a narrow Hann window to attenuate that frequency greatly improved our modeling capabilities. For best results, we tuned the attenuation with regard to the signal-to-noise ratio at 50 Hz (higher attenuation for slow open channels compared to the fast open channels). With a grid search, we optimized the parameters "hann window width", "attenuation high" and "attenuation low".
We also tried removing the other peaks, which are somewhat random in the data, but without ever getting a significant improvement in our CV score. </p>
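<p>A simplified sketch of what such a narrow, Hann-shaped attenuation around 50 Hz can look like in the frequency domain (the window width and attenuation factor below are illustrative, not the grid-searched values):</p>
<pre><code>import numpy as np

def attenuate_50hz(signal, fs=10_000, width_hz=1.0, attenuation=0.1):
    """Damp FFT bins in a narrow Hann-weighted band around 50 Hz."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = np.abs(freqs - 50.0) < width_hz
    n_band = in_band.sum()
    if n_band:
        # Hann-shaped dip: strongest attenuation at the centre of the band, none at the edges
        dip = 1.0 - (1.0 - attenuation) * np.hanning(n_band)
        spec[in_band] *= dip
    return np.fft.irfft(spec, n=len(signal))
</code></pre>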
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F228cf45c1c04ee6a4b6e2241396c178a%2Fnoise_level_train.png?generation=1590526567559238&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F59a7515a8528ebebe46442b334a0edd1%2F__results___12_1.png?generation=1590526594525784&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2Fd5453821f3ef30f4b64d495f8b98a99a%2F__results___13_1.png?generation=1590526609450668&alt=media" alt=""></p>
<h2>Minor Offsets (mostly in test set)</h2>
<p>When leveling all batches with the same static offset mentioned above (zero level offset), we still experience significant differences between the batches. The very first 100k of the training set seem to be shifted about 0.03 in positive direction.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2Fc1675b0640b5a0c78c696d2bffadf2fa%2F__results___33_0.png?generation=1590526711915787&alt=media" alt=""></p>
<p>In the test set, those shifts are much more frequent, and careful removal of those minor offsets appeared to have a significant influence on the predictions of our models. To remove the offset in the training set we can easily use the ground truth values of open_channels combined with the scaling factor from above. As we don't have that information in the test set, we evaluated three different techniques to remove that offset, and they all led to similar results.
To begin with, we used old predictions for the test set that scored high on the leaderboard to calculate the noise in the test set and ultimately the minor offsets. As this is a delicate thing, especially in the 10-channel batches with the high variance, we also checked how those values change when recursively using the predictions for the calculation of the noise. As expected, the difference was somewhat significant in the 10-channel batches and almost non-existent in the remaining batches. To achieve a more robust solution, we later based the entire shifting routine on the GMM mentioned above, modelling the gaussians with equal spacings and binomial distributions. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F6ba61b4a705885dab777035abd9360c8%2Fscreen_shot_2020-05-22_at_10.23.21_pm.png?generation=1590526726818390&alt=media" alt=""></p>
<h2>Extra Data</h2>
<p>We used 700k rows of annotated data (3 channels and 5 channels) from Richard's GitHub (<a href="https://github.com/RichardBJ/Deep-Channel/tree/master/dataset">https://github.com/RichardBJ/Deep-Channel/tree/master/dataset</a>), which yielded a tiny improvement on CV. All the above steps were also applied to that extra data. </p>
<h2>What didn't work</h2>
<ul>
<li>Creation of additional synthetic training data</li>
<li>Unequal spacing between gaussians</li>
<li>Kalman Filtering</li>
<li>Removal of ~1150 Hz, ~1250 Hz, ~1450 Hz and ~3130 Hz noise</li>
<li>Prior modelling of the signal decay rate before feeding it to WaveNet</li>
<li>Unequal scaling of batches</li>
<li>Single models for each “type” of batch</li>
</ul>
<p>so, <code>cleaned_signal</code> is 5M train + 2M test concatenated together? Scaled to <code>open_channels</code>, if I see that right?</p>
<p>I am actually very interested in the results of your HMM on that data <a href="/group16">@group16</a> www.kaggle.com/ilu000/liverpool-ion-clean-data. By how I understand it, HMM should benefit greatly from corrected means. </p>
<p>yes, slightly. We may want to try the other way round. Their cleaned data with our WaveNets </p>
<p>Scoring only a low 0.94448/0.94554. Not sure that I am doing it right 😄</p>
<p>but CV on train is already significantly lower for our dataset. I guess there is another cleaning needed for HMM than for WaveNet</p>
<p>yeah, would have been too easy ;)</p>
<p>part of batch 8 (200000 lines) in the training set was cut out by us.
<code>
SECOND_OUTLIER = (364.0001, 384.0000)
train_df = train_df.loc[(train_df.time &lt; SECOND_OUTLIER[0]) | (train_df.time &gt; SECOND_OUTLIER[1]), :]
</code>
the last 700000 lines of the training set were taken from the extra data, available at Richards GitHub</p>
<p>I've made the kernel where I tried to incorporate our data public <a href="https://www.kaggle.com/ilu000/hmm-m318">here</a></p>
<p>How do you expect the scaling to be done? I've used 1.23260128 in my experiment, which I got from linear regression. This may not be the optimal solution. </p>
University of Liverpool - Ion Switching | 8th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the winners and especially to Gilles & Kha Vo & Zidmie for an excellent effort leading the public lb and private non-leak scores. Despite the simulated nature of the data it was fun trying to piece together all of the aspects of the problem.</p>
<p><strong>Initial Approach</strong></p>
<p>Drift was removed from the raw data in the generally accepted manner using a sine curve as identified by Chris Deotte, Markus and Eunho.</p>
<p>A simple regression of the (drift-cleaned) signal against the target produced a reasonably good fit, and thus I adopted the idea of defining the residuals from this regression (reg0) as <em>noise</em>, which I could then attempt to model subsequently.</p>
<p>I also adopted the designation of the batches into five separate models depending on their nature, as defined by Chris Deotte in his early notebook “One Feature Model”. I denote these separate models as m1s, m1f, m3, m5 and m10, depending on the maximum number of channels open in each batch and a further subdivision of the seemingly binary sections of the target data into ‘slow’ and ‘fast’.</p>
<p>(reg0):<br>
signal = 1.2336 x target – 2.7337 + <em>noise</em> &nbsp;&nbsp;(m1s, m1f, m3, m5)<br>
signal = 1.2336 x target – 2 x 2.7337 + <em>noise</em> &nbsp;&nbsp;(m10)</p>
<p>Visual inspection of the <em>noise</em> over time suggested two regions of excessive <em>noise</em>, a large patch in m3 and a smaller one in m1s. These regions were excluded from all further analysis.</p>
<p>Even though the regression coefficients for the <em>noise</em> regression of signal against target were approximately the same for all models, the standard deviations of the residuals were different for each model suggesting that analysing the five models separately would probably be easier. The intercept for the m10 was almost exactly twice that of the others – see later.</p>
<p><em><strong>Noise</strong></em><strong> Modelling</strong></p>
<p>A set of simple features was first created for each model, initially for input into lightgbm. These were the signal and its leads and lags up to 14 time periods. In addition, I calculated the rolling 10-period min and max for both lead and lag for each observation, as well as the rolling min and max for a 20-period lead and lag ahead of or behind the first 10 periods.
For an observation of the signal Xt, the features here are Xt-1…Xt-14, Xt+1…Xt+14, min(Xt-1..Xt-10), min(Xt-11..Xt-30), min(Xt+1..Xt+10), min(Xt+11..Xt+30), max(Xt-1..Xt-10), max(Xt-11..Xt-30), max(Xt+1..Xt+10), max(Xt+11..Xt+30).</p>
<p>I first used these features to estimate the signal itself in a lightgbm model. This estimated signal then formed the basis of another set of features in a similar manner to those I first calculated. Finally, I added a number of features based on the rolling min and max of the signal relative to each observation, e.g., min(Xt-1-Xt..Xt-10-Xt).</p>
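<p>A pandas sketch of building these kinds of lead/lag and rolling min/max features (column names are made up, and the per-batch grouping assumes a <code>batch</code> column; this is not the author's exact code):</p>
<pre><code>import pandas as pd

def make_features(df, col='signal'):
    out = df.copy()
    for k in range(1, 15):
        out[f'{col}_lag_{k}'] = out[col].shift(k)        # X_{t-k}
        out[f'{col}_lead_{k}'] = out[col].shift(-k)      # X_{t+k}
    roll = out[col].rolling(10)
    out['lag_min_1_10'] = roll.min().shift(1)            # min(X_{t-1}..X_{t-10})
    out['lag_max_1_10'] = roll.max().shift(1)
    out['lead_min_1_10'] = roll.min().shift(-10)         # min(X_{t+1}..X_{t+10})
    out['lead_max_1_10'] = roll.max().shift(-10)
    return out

train_feats = pd.concat([make_features(g) for _, g in train.groupby('batch')])
</code></pre>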
<p>These features were then used in lightgbm to estimate the <em>noise</em> for both test and OOF. These <em>noise</em> estimates could then be reversed back using the original regression (reg0) to estimate the target and calculate F1 metrics. Note that I found that only including 3 observation leads and lags for the signal (rather than 14) produced the best results here.</p>
<p><strong>Further Cleaning of the Signal</strong></p>
<p>Frequency analysis of the <em>noise</em> showed up clear contamination with a regular signal at about 50Hz along with higher frequencies with lower amplitudes. Attempting to remove this led me to find that each 10 second minibatch had a slightly different frequency (and phase) around 50Hz. My lack of expertise in signal processing led me to perform an exhaustive search for the best fit for each minibatch. Here I noticed some differences between models. Models m1s, m3 and m5 all had easy to spot signals in 10 second minibatches. Model m1f seems to have 1 second minibatches. Model m10, however, has a more complicated structure with one or two peaks for every 10 second minibatch. More on this later.
Removing the 50Hz contamination is reasonably straightforward for the train data but fortunately my <em>noise</em> estimates from the previous lgb also showed the same contamination (though with smaller amplitudes for m5 and m10). I was thus able to estimate the contamination for the test data and remove it. </p>
<p>With my newly cleaned signal I was then able to repeat all of the previous steps. I obtained a new <em>noise</em> estimate (which produced much better results in cv and on the public lb).
Further frequency analysis showed up harmonics of the 50Hz still contaminating the signal. The most prominent were at (50x23)Hz, (50x25)Hz and (50x29)Hz. For each of these harmonics I removed it from the signal and repeated all the steps so far, seeing whether it improved the cv each time. For m1s and m1f just the original 50Hz contamination removal was as good as it got. Model m3 had 2 cycles, m10 had 4 cycles. The m5 model kept going in this fashion for 7 cycles. (Note the seventh cycle for m5 was not a harmonic – maybe something else was contaminating the signal?)</p>
<p>I thus ended up with what I hope are fully clean signals for each model.</p>
<p><strong>Different Types of Modelling</strong></p>
<p>As well as using the lgb modelling, I also experimented with neural nets using Keras. Ultimately a simple 2-layer dense net trained with same features I built for the lgb model proved to be the best I could do.</p>
<p><strong>Funky Loss Objectives</strong></p>
<p>Noticing that the exact error doesn’t really matter when it is small or large but mainly when it is close to the boundary between the discrete channel numbers, I experimented with different loss objectives in both lgb and keras. Both of these showed better results on cv and public lb using a cubic error term compared to plain mean squared error. A quartic error term was also ok giving similar results to a cubic term.</p>
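<p>For reference, a hedged sketch of what a cubic error objective for lightgbm could look like (the gradient and hessian of |error|^3; not necessarily the exact objective that was used):</p>
<pre><code>import numpy as np

def cubic_objective(y_pred, dataset):
    """Custom LightGBM objective for L(e) = |e|^3 with e = y_pred - y_true."""
    y_true = dataset.get_label()
    e = y_pred - y_true
    grad = 3.0 * e * np.abs(e)      # d/d_pred |e|^3
    hess = 6.0 * np.abs(e)          # d^2/d_pred^2 |e|^3
    return grad, hess

# used e.g. as: lgb.train(params, dtrain, fobj=cubic_objective, ...)
</code></pre>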
<p><strong>Model m10 and Data Augmentation</strong></p>
<p>There were a couple of clues in preceding sections. The intercept in the original regression (reg0) was twice as large for m10 compared to the others. Also, the 50Hz contamination showed single or double peaks for m10 only. Eventually I realised that the m10 model was composed of the sum of two separate m5 models. The double peaks appear to be an interference pattern from two peaks of the m5 models. In addition, the variance of the <em>noise</em> for an m10 model is almost exactly twice that of an m5 model.</p>
<p>This meant that I could augment the m10 data by creating artificial data using the m5 data that we have and training on that. I created new data that looked like the m10 data by combining each 10 second minibatch of m5 data with the other 10 second minibatches for a total of 45 minibatches of new data. The hope that the data was similar enough to use for training was backed up by a small but worthwhile improvement in cv.</p>
<p><strong>Ensembling and Optimization</strong></p>
<p>For model m10 I used an ensemble of approaches utilising cubic and squared objectives, lgb and keras modelling. [This ensemble was then optimised slightly to obtain the maximum f1_macro score in cv.] For models m5, m3, m1f and m1s I used the single model that was best in cv. The rank order of my best m10 models in cv seemed to show little in common with the rank order on the public lb so it was hard to choose my final submissions. In the end I opted for a weighted ensemble of four m10 models that was good in cv (and fortunately ok on the leaderboards).
This ensemble scored 0.94443 in cv overall and gave 0.94685 on public lb, 0.94539 on private lb, good enough for 8th place.</p>
<p>Thanks <a href="/robikscube">@robikscube</a> ! It is always fun to have a meaningful public leaderboard scramble in a competition. I have tried cubic loss functions in many competitions but this is perhaps only the second time it has been helpful! Deflating a target series by a known variable can often be useful in a time series context - this is usually a price series thing but it seemed reasonable to use it here. Well done on your solo gold!</p> |
University of Liverpool - Ion Switching | 9th place short summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Short summary of our solution. </p>
<p>1) The 'magic' shift of 2.724 for the 11-class batches and the deletion of the 7th batch from train. It was our first "discovery" in this competition.</p>
<p>2) A combination of some Wiener filters and a bandstop filter for signal cleaning. It helped a lot for simple models (GMM 0.941 on CV, Naive Bayes 0.942 on CV) but unfortunately it had no effect on our wavenet. So we just used it for our GMM, and afterwards the GMM weighted prediction and GMM probabilities were used as features for our NN.</p>
<p>3) Cleaning of the 50 Hz noise.</p>
<p>What we tried that didn't help:</p>
<p>1) HMM sampling
2) blending/stacking
3) TCNN
4) CRF</p> |
University of Liverpool - Ion Switching | Brief 3rd place solution (Kha Vo view) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes. Limited new academic or theoretical values here, but highly practical and engineering aspects</p>
<p><a href="/shimizumasaki">@shimizumasaki</a> it’s like stacking in ML: use the previous prediction to improve the prediction again with the HMM model. The coefs are parameters to be optimized; in this case we just grid-searched them manually.</p>
<p>detailed solution updated!</p>
<p>detail updated!</p>
<p>It is more of engineering and practical though, to us it doesn't have any significantly novel academic values...</p>
<p>Sweet comment <a href="/alijs1">@alijs1</a>. I am hoping for your great finale in Abstract Reasoning. 👍 </p>
<p>For cat1, there are only 4 states [0 0 1 1]. Ptran is 4x4. Each signal point can fall in only 1 of those 4 states. Ptran is estimated by the hmmlearn package in Python as</p>
<pre><code>[[0       0.1713     0  0      ]
 [0.3297  0          0  0.01381]
 [0       1          0  0      ]
 [0       0.0002686  0  0      ]]
</code></pre>
<p>Sorry, I'm not sure I understand your question...</p>
<p>First, I would like to thank Kaggle, the host <a href="/richardbj">@richardbj</a>, all competitors, and especially my two wonderful teammates <a href="/group16">@group16</a> <a href="/zidmie">@zidmie</a>, who have the quickest and brightest mindsets I've ever been lucky to work with. In the end, we are a little disappointed by the leak present in two top teams, given the tremendous amount of work in this competition as an optimization process. My teammates are writing a very detailed blog post about the solution. It's their bedtime now, so I will briefly write up our solution. </p>
<p>FULL BLOG SOLUTION: <a href="https://medium.com/@gillesvandewiele/334fab86fc85">https://medium.com/@gillesvandewiele/334fab86fc85</a></p>
<p>NOTEBOOKS:
Authentic Modelling: <a href="https://www.kaggle.com/khahuras/hmm-m318">https://www.kaggle.com/khahuras/hmm-m318</a> (private 0.94575)
Leak Exploitation (updated soon)</p>
<p>LEAK EXPLAINED
<a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/153824">https://www.kaggle.com/c/liverpool-ion-switching/discussion/153824</a></p>
<p>----------------- BRIEF SOLUTION ---------------------</p>
<p>PREPROCESSING
We removed drift, added an offset/gain so that signal − round(signal) = noise, and removed some sine peaks, notably 50 Hz. We also found some small peaks around 350 Hz and 1150 Hz. Removing all of these is not a perfect process, so we did it in different ways. In the end, we got a few slightly different signals for modelling.</p>
<p>MODEL</p>
<p>Ptran and state discovery</p>
<p>We use a single HMM model, with a highly customized forward-backward algorithm. The key thing in the modelling process is to decompose the signal of a high cat into subprocesses of lower cats. Let us denote the signal categories as cat 1, 2, 3, 4, 5, 6, corresponding to maximum open_channels 1, 1, 3, 5, 10, 4 (cat 6 only appears in test). Starting from cat1, which has Ptran as a 4x4 matrix over the states [0, 0, 1, 1], we extend this matrix into a bigger one for each higher cat. For example: cat3 is the combination of three cat1 processes, so it has a Ptran of size 20x20, where each state can take a value in [0, 1, 2, 3] and is the combination of 3 states of 0 or 1, i.e. 3 states of 3 processes of [0, 0, 1, 1]. A similar thing is done for cat4 and cat5, where cat4 has a Ptran of size 56x56 and cat5 has a Ptran of size 286x286. We optimize the raw matrix (4x4) as an optimization problem. </p>
<p>Algorithm</p>
<p>We made a highly customized fw-bw algorithm. If we denote the probability of the signal at t falling in each state as s(t) (where each state is a row in the expanded Ptran matrix), then the normal forward pass is (roughly; for simplicity I skip the normalization step at each time step) alpha0(t) = alpha0(t-1) x Ptran. Similarly, the normal backward pass stopped at t would be beta0(t) = beta0(t+1) x Ptran^T, and then pred(t) = alpha(t) * beta(t) * Psig. Here, Psig is the Gaussian emission of the noise (signal - round(signal)) at each timepoint, which we can optimize as parameters too, and pred is the prediction.
We found that by integrating a "memory" of the forward pass into the backward pass, the score can be boosted. So the customized procedure is
beta1(t) = [(beta1(t+1) x Ptran^T )^coef1] * [(alpha0(t))^(1 - coef1)].</p>
<p>This procedure got us better beta. Similarly, we perform another pass where we use beta1 as "stacked" features for alpha2, which is
alpha2(t) = [(alpha2(t-1) x Ptran )^coef2] * [(beta1(t))^(1 - coef2)].
In the end, a customized sum of these alpha's, beta's got us final result as:
pred = [ c1 * (beta1) + (1-c1-c2) * alpha2 + c2 * (beta1 * alpha2) ] * Psig,
where c1, c2 are parameters to be optimized too.</p>
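<p>A loose numpy sketch of these customised passes (the initialisation, the normalisation, and exactly where the emission term <code>Psig</code> enters each step are my assumptions, not necessarily the team's implementation):</p>
<pre><code>import numpy as np

def custom_forward_backward(Psig, Ptran, coef1=0.5, coef2=0.5, c1=0.4, c2=0.2):
    """Loose sketch of the customised alpha/beta passes described above."""
    T, S = Psig.shape
    alpha0 = np.zeros((T, S))
    alpha0[0] = Psig[0] / Psig[0].sum()
    for t in range(1, T):                                  # plain forward pass
        a = (alpha0[t - 1] @ Ptran) * Psig[t]
        alpha0[t] = a / a.sum()
    beta1 = np.zeros((T, S))
    beta1[-1] = alpha0[-1]
    for t in range(T - 2, -1, -1):                         # backward pass with forward "memory"
        b = (beta1[t + 1] @ Ptran.T) ** coef1 * alpha0[t] ** (1 - coef1) * Psig[t]
        beta1[t] = b / b.sum()
    alpha2 = np.zeros((T, S))
    alpha2[0] = beta1[0]
    for t in range(1, T):                                  # second forward pass stacked on beta1
        a = (alpha2[t - 1] @ Ptran) ** coef2 * beta1[t] ** (1 - coef2) * Psig[t]
        alpha2[t] = a / a.sum()
    pred = (c1 * beta1 + (1 - c1 - c2) * alpha2 + c2 * beta1 * alpha2) * Psig
    return pred / pred.sum(axis=1, keepdims=True)
</code></pre>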
<p>BLENDING and THRESHOLDING</p>
<p>Given the pred, we now have the probability of each state (a state here is one of the 4 states [0 0 1 1] of cat1, or one of the 20 states [0 0 0 1 1 1 2 2 .....] (maximum 3) of cat3). We take an optimized weighted average of several preds from different preprocessed input signals, and then simply calculate a single float prediction as states @ pred. This gives us a prediction like [0.45, 1.23, 0.24, 0.98, 3.55, ...]. Now we need to threshold this pred into integers. We do that with an "unsupervised" method. For each signal batch of 100,000 points, we compute the percentage of points for which signal − round(signal) has an absolute value less than a small threshold (such as 0.1 or 0.25, depending on cat). Then, in that 100K signal batch, we assign the same proportion of labels from the prediction spectrum to each open_channels value. This ensures that the label distribution estimated for each batch is applied to the prediction, so the prediction will roughly have the same label distribution as the "good signal points" in that batch. This simple method seems to be very robust to overfitting, and quite fast. </p>
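<p>A rough sketch of that distribution-matching thresholding for one 100K batch (assuming the signal has already been scaled/offset so that round(signal) approximates open_channels; the details differ from the team's actual code):</p>
<pre><code>import numpy as np

def distribution_matching_round(float_pred, signal, tol=0.25):
    """Assign integer labels so their distribution matches that of confident signal points."""
    confident = np.abs(signal - np.round(signal)) < tol
    levels, counts = np.unique(np.round(signal[confident]).astype(int), return_counts=True)
    quota = np.round(counts / counts.sum() * len(float_pred)).astype(int)
    order = np.argsort(float_pred)              # lowest float predictions get the lowest labels
    out = np.empty(len(float_pred), dtype=int)
    start = 0
    for level, n in zip(levels, quota):
        out[order[start:start + n]] = level
        start += n
    out[order[start:]] = levels[-1]             # rounding remainder goes to the highest level
    return out
</code></pre>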
<p>The fully optimized single HMM model like this got us as high as 0.94584 on the private LB.</p>
<p>I am sorry if my write-up is too hard to understand. Questions are welcomed.</p>
<p>WHAT DISTILLED IN MY MIND</p>
<p>One should approach this competition as an optimization problem with an element of machine learning, rather than a machine learning problem with an element of optimization. We found that pure optimization works really great for this synthetic data, and no serious overfitting has been observed. That's why we can quickly escalate on LB.</p>
<p>This is an important result for each of us. Personally I truly thank you to all of you again for this great competition, and again especially, to my two wonderful teammates, which I learn a lot from them. We will make our demo Kaggle notebook code public immediately, after getting the green light from the host <a href="/richardbj">@richardbj</a> . And we will post another detailed solution write-up later, together with a full-scale github repo.</p>
<p>Thanks for reading!</p>
<p>EDIT: (scroll to the start of topic)
+ Full blog solution posted
+ Kaggle solution notebook published
+ Leak explained </p>
<p>You had been consistently ranked very high during the competition. Congrats on your final result, and impressive long time progress. Looking forward to see your 1st gold medal soon, and I'm gonna be one of the first people to congrats on that.</p> |
University of Liverpool - Ion Switching | Changing the Target Approach ( Top 10% solution ) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone from the Ion Switching competition, and congratulations to the winners !</p>
<p>We weren't first in that competition, but we are still proud of our ideas, method and of what we have achieved. After all, it's all about trying. Therefore, we present here the method that brought us a bronze medal. </p>
<p>This competition was definitely not easy, as the data was generated with a Hidden Markov Model (HMM). We worked with both the competition data and the <a href="https://www.kaggle.com/cdeotte/data-without-drift">data without drift</a> created by <a href="https://www.kaggle.com/cdeotte">Chris Deotte</a> for this competition. We thank him on behalf of all the other kagglers who used his processed data as well. </p>
<p>We considered the open_channels as states of a Markov chain process. Our main work was based on 2 features:
- the probability that open_channels at time T would change from T-1 (just an indicator of the change, not its value!)
- open_channels shifted by 1 (the value at the previous timestep).
Our method was to estimate those and to make an iterative prediction on test. We did all predictions based on an aggregation by subbatches of 100_000.</p>
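<p>A minimal sketch of what such an iterative (autoregressive) test prediction can look like (the <code>model</code>, the feature layout and the <code>first_prev</code> seed value are placeholders, not our exact implementation; the loop makes clear why this is computationally expensive):</p>
<pre><code>import numpy as np

def iterative_predict(model, feats, first_prev):
    """Predict one 100k sub-batch sequentially, feeding each prediction back as the lag feature."""
    preds = np.zeros(len(feats), dtype=int)
    prev = first_prev
    for t in range(len(feats)):
        x = np.concatenate([[prev], feats[t]]).reshape(1, -1)   # [open_channels_shift_1, other feats]
        preds[t] = int(model.predict(x)[0])
        prev = preds[t]                                          # becomes the lag for the next step
    return preds
</code></pre>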
<p>We aggregated subbatches by resemblance, based on criteria such as:
- frequencies and amplitudes obtained from the Fourier decomposition;
- mean, std;
and that gave us the same groups as <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/153269">here</a>.</p>
<ol>
<li><p>Modeling probability of leaving the previous state (probability to change) :
This was a binary classification on open_channels != open_channels_shift_1. We used a simple light gradient boosting method (lightgbm) which achieved precision and recall above 0.95 on nearly all batches, and very often around 0.98+. This binary variable can be predicted in a very efficient way.</p></li>
<li><p>Iteratively predicting open_channels :
Taking into account both open_channels shifted by 1 and the probability to change was really important to get 94.2+ on cross-validation. The problem here was the cost in computation time, as we were making an iterative prediction on test. For the value of open_channels_shift_1 at the start of each test mini-batch of 100_000, we just took them from well-performing public kernels which had exactly the same values.</p></li>
<li><p>Final model :
Our final model is a lightgbm on 5 features : </p>
<ul><li>IsChanged (predicted by binary classification)</li>
<li>open_channels_shift_1 (iteratively predicted)</li>
<li>signal </li>
<li>signal_pct_change</li>
<li>signal shifted ahead / back by 1, 2, and 3 timesteps </li></ul></li>
</ol>
<p>This model achieved 94.2+ on cross-validation and 94.389 on private leaderboard.</p> |
University of Liverpool - Ion Switching | Gilles Vandewiele's A "better" but useless solution: Private 0.97266 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="/group16">@group16</a>, amended.</p>
<p>Running this kernel with a Kalman-filtered version of Giba's cleaned data as input scored private LB = 0.96921 and public LB = 0.91398.</p>
<p>Using Giba's data without Kalman filtering scored lower, i.e. private LB = 0.96495 and public LB = 0.90708.</p>
<p>Gilles Vandewiele posted "<a href="https://www.kaggle.com/group16/private-0-9688-a-better-but-useless-solution">Private 0.9688: A "better" but useless solution</a>" (please upvote his kernel), revealing how to use the leak to score better than the 1st place private LB score.</p>
<p>Since I had a better score by using <a href="/ragnar123">@ragnar123</a>'s <a href="https://www.kaggle.com/ragnar123/clean-kalman">clean_kalman</a> dataset than <a href="/cdeotte">@cdeotte</a>'s cleaned data during the contest, I decided to use that in Gilles's kernel. Surprisingly, "<a href="https://www.kaggle.com/sheriytm/private-0-97266-a-better-but-useless-solution">Private 0.97266: A "better" but useless solution</a>" scored 0.97266 on the private LB with a very low CV and LB score, just like the original. Both scores beat the 2nd place private LB score, and it is totally useless.</p>
<p>This to me reveals the problem with the data better than anything. Share your comments below.</p> |
University of Liverpool - Ion Switching | Synthetic data for 0.945 public LB solution (no training data used at all) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Agreed. I don't see anything bad using the code to generate similar signals, and even more varied X to simulate different lab conditions (60 Hz noise, more or less gaussian, spikes, etc.). I also considered exploring for leaks (e.g. finding the exact matlab seed) but didn't pursue it (what's the point? for me one of the reasons to join this competition is the scientific aspect and for leak hunting I'd rather do math puzzles). </p>
<p>This competition was unusual in that we have data with some level of unremovable aleatoric noise (Gaussian with a given std) added to a sequence which we can model (the superposition of multiple HMMs). There's a cap on what we can achieve (given that it is not possible to fully predict the output of Y), so our job is to:
a) capture the underlying distribution of the GT (Y), and
b) minimize our model's epistemic noise.</p>
<p><strong>Synthetic data</strong></p>
<p>It turns out you can build as much GT (Y) as you want by running the Matlab scripts provided by <a href="/richardbj">@richardbj</a> (modifying the max number of channels to 1) and getting multiple runs of what they call M0 and M1. Then, to generate 3, 5, 10, etc. max open channels, you just add non-overlapping outputs. That's how I generated ground truth.</p>
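<p>A tiny sketch of that idea (here <code>runs_M1</code> is a hypothetical list of independent 0/1 open-channel traces produced by single-channel runs of the script):</p>
<pre><code>import numpy as np

def synthesize_ground_truth(single_channel_runs, n_channels):
    """Sum n_channels independent single-channel (max 1) traces to mimic a max-n_channels recording."""
    return np.sum([single_channel_runs[i] for i in range(n_channels)], axis=0)

y_5ch = synthesize_ground_truth(runs_M1, 5)     # e.g. a high-opening-probability 5-channel segment
y_10ch = synthesize_ground_truth(runs_M1, 10)
</code></pre>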
<p>The next challenge comes from modeling X after Y. In their paper, X is the result of a transformation of Y (amplifier) plus added noise (50 Hz + thermal) + non-linearities and/or delays. I built a tiny network to model this whose job was to minimize the energy of the residual of F(Y) vs. X; which results in a convolution kernel shifted ~0.03 samples to the right (delayed):</p>
<p>Convolution kernel
<img src="https://i.imgur.com/FswlWPk.png" alt=""></p>
<p>Then I modeled the gaussian noise of multiple channels:</p>
<pre><code>train_std_by_type = {
    ('l', 1): [0.24515989422798157, 0.24703997373580933],
    ('h', 1): [0.24486009776592255, 0.2447292059659958],
    ('h', 3): [0.265836238861084],
    ('h', 10): [0.4045635759830475, 0.40377894043922424],
    ('h', 5): [0.28642651438713074, 0.28378984332084656]}
</code></pre>
<p>...where <code>l</code> is the low-probability model (M0) and <code>h</code> is the high-probability model. Why there is more noise as more channels get added I think is the weak point of my modeling since the noise should be independent of number of open channels; anyway, with all the above we can now generate unlimited training data:</p>
<p>Synthetic private:
<img src="https://i.imgur.com/51Z3VUq.png" alt=""></p>
<p>Synthetic public:
<img src="https://i.imgur.com/Pe02luW.png" alt=""></p>
<p>Synthetic train (small segment, orange) vs. Real train (blue)
<img src="https://i.imgur.com/aXnVeFm.png" alt="">
(there's a small shift/offset which is intentional, see below)</p>
<p>FWIW, if you think about it I could (should) have built a GAN to generate indistinguishable synthetic data instead of modeling it manually.</p>
<p>I also removed (subtracted) perfectly phase-aligned 50 Hz waves to eliminate the 50 Hz components.</p>
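<p>A minimal sketch of one way to do that (project the trace onto a 50 Hz sine/cosine pair sampled at the competition's 10 kHz rate and subtract the reconstructed wave; the function name is mine):</p>
<pre><code>import numpy as np

def remove_50hz(x, fs=10_000, mains_hz=50.0):
    t = np.arange(len(x)) / fs
    s, c = np.sin(2*np.pi*mains_hz*t), np.cos(2*np.pi*mains_hz*t)
    a, b = 2*np.mean(x*s), 2*np.mean(x*c)   # in-phase / quadrature amplitudes
    return x - (a*s + b*c)                  # subtract the phase-aligned 50 Hz wave
</code></pre>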
<p><strong>Architectures</strong></p>
<p>With the above I built balanced training data and a few architectures, which got me 0.945 on the public LB with a single model (no ensembling, no averaging, etc.). </p>
<p>I used synthetic data modeled after public LB, private LB, train set, etc. to track my model. I tried numerous architectures (wavenet, "simple" but overparametrized convolutions, RNNs, QRNNs, Transformers and even the Reformer); no matter the architecture, the ceiling was always 0.945 with a single model. I also built an L2 model which didn't help.</p>
<p>The final architecture has two aspects: 1) a 6-depth conv branch (nothing fancy), which has a limited view of the signal; this is addressed by 2) a histogram network that is given the whole segment (100,000 samples), builds a histogram and then a dense representation (256 features) of it. A final thresholding network (just a bunch of FC layers) combines both branches. The histogram is built using a coordconv-like 1D network, but other networks work well too.</p>
<p><strong>Unknowns, and augmenting the synthetic data</strong></p>
<p>Some unknowns were still problematic in the test data:
* The offset was unknown
* The noise was unknown
* The amplification was unknown</p>
<p>I added random offset (+- 0.01) which did help. I also added variable amplification which also helped (but less), and finally and most promising I also added variable (gaussian) noise which DID NOT help and I still don't know why. In retrospect I think I should have added a few extra heads to predict the noise std, amplification, and offset so the net would leverage that information to get better predictions. I didn't do it, thinking that with unlimited data, 3 GPUs, and ~8 hours of training the net would learn it by itself. It didn't.</p>
<p><strong>Code</strong></p>
<p>Code is at <a href="https://github.com/antorsae/liverpool-ion-switching">https://github.com/antorsae/liverpool-ion-switching</a> it uses Fastai2 and little more. </p> |
VSB Power Line Fault Detection | 26th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My solution is based on the kernel <a href="https://www.kaggle.com/tarunpaparaju/vsb-competition-attention-bilstm-with-features">VSB Competition : Base Neural Network</a> by <a href="https://www.kaggle.com/tarunpaparaju">Tarun Sriranga Paparaju</a>, but I added some modifications:
- I used a denoising algorithm based on wavelets (bior1.3)
- I changed the neural network architecture:
<pre><code>def model_gru(input_shape, feat_shape, alpha):
    inp = Input(shape=(input_shape[1], input_shape[2],))
    ifeat = Input(shape=(feat_shape[1],))
    feat = Dense(8)(ifeat)
    feat = BatchNormalization()(feat)
    feat = Activation('tanh')(feat)

    x = SpatialDropout1D(0.2)(inp)
    x = Bidirectional(CuDNNGRU(160, return_sequences=True))(x)
    x = Bidirectional(CuDNNGRU(160, return_sequences=True))(x)
    attention = Attention(input_shape[1])(x)
    x = concatenate([attention, feat], axis=1)
    x = Dropout(0.8)(x)
    x = Dense(64)(x)
    x = BatchNormalization()(x)
    x = Activation('tanh')(x)
    x = Dense(1, activation="sigmoid")(x)
    model = Model(inputs=[inp, ifeat], outputs=x)
    model.compile(loss=focal_loss(gamma=2, alpha=alpha), optimizer=Nadam(lr=0.003),
                  metrics=[matthews_correlation])
    return model
</code></pre>
<p>- To prevent overfitting I used strong dropout (SpatialDropout 0.2 before the GRUs and dropout 0.8 before the fully connected layer).
- I used focal loss with gamma = 2 and alpha calculated from the data (a hedged sketch of such a focal loss is given after this list):</p>
<pre><code>alpha = np.sqrt((1 - sum(train_y)/len(train_y)) * 0.8)
</code></pre>
<p>- Since the signal is periodic, I used a generator with a simple augmentation:</p>
<pre><code>def cyclic_shift(x, alpha=0.5):
    s = np.random.uniform(0, alpha)
    part = int(len(x)*s)
    x_ = x[:part, :]
    _x = x[-len(x)+part:, :]
    return np.concatenate([_x, x_], axis=0)
</code></pre>
- The model was trained with StratifiedKFold (10 folds) using Nadam and CLR in triangle mode (CLR 300 steps, lr = (1e-4, 6*1e-6)), batch size 256, 50 steps per epoch, and early stopping with loss monitoring. </p>
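<p>The <code>focal_loss</code> used in <code>model.compile</code> is not defined in the snippet; a minimal Keras-style binary focal loss of the kind commonly used (my sketch, not necessarily the author's exact implementation) looks like this:</p>
<pre><code>import keras.backend as K

def focal_loss(gamma=2.0, alpha=0.25):
    def loss(y_true, y_pred):
        eps = K.epsilon()
        y_pred = K.clip(y_pred, eps, 1.0 - eps)
        pt = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)     # prob of the true class
        alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)  # class balancing term
        return -K.mean(alpha_t * K.pow(1.0 - pt, gamma) * K.log(pt))
    return loss
</code></pre>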
<p>Public score: 0.59499
Private score: 0.67159</p> |
VSB Power Line Fault Detection | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ol>
<li>yes</li>
<li>DWT before getting peaks</li>
<li>[5,100] is the range of peaks I extracted, amplitude above 10(manual selection, because it seems that I don't see the specific value in the paper) was used to remove the false peak</li>
<li>public 0.61-0.68 private 0.63-0.7</li>
</ol>
<p>I found the model trained with 'val loss'(25 epochs) had a similar public score compared to the same model trained with 'val MCC'(40 epochs), but 'val_loss' gave me a better cv/lb relationship</p>
<p>Congratulations to all the winners!
And thanks a lot to VSB/Enet Center and Kaggle for this exciting competition.</p>
<h2>Here are the details about my single model:</h2>
<h3>Preprocessing:</h3>
<p>Before extracting features, I used DWT for noise reduction, and I think this helps the stability of the model (see <a href="https://www.kaggle.com/jackvial/dwt-signal-denoising">this kernel</a>).</p>
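<p>For reference, a hedged sketch of the usual DWT denoising recipe from that kernel (decompose, soft-threshold the detail coefficients with a universal threshold, reconstruct); the wavelet choice here is illustrative:</p>
<pre><code>import numpy as np
import pywt

def dwt_denoise(signal, wavelet="db4"):
    coeffs = pywt.wavedec(signal, wavelet, mode="per")
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise level estimate
    uthresh = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, value=uthresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet, mode="per")
</code></pre>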
<h3>Features:</h3>
<p>I tried a lot of different methods: RNNs taking 80, 100, 160 or 250 time steps, basic features, energy features, peak features, and automatic feature extraction with CNN models and different RNN models. Unfortunately, none of these attempts brought a big improvement. In the end, adding the peak features on top of the basic RNN architecture effectively alleviated the over-fitting, which I think was the most important problem to solve. The peak features are extracted with scipy's signal library and are all described in <a href="https://ieeexplore.ieee.org/abstract/document/7909221">this paper</a>. Here is the code:</p>
<pre><code># Extract peak features
import numpy as np
from scipy.signal import find_peaks, peak_widths, peak_prominences

def remove_false_peak(signal, p1, p2, maxDistance=10):
    peak_diff = np.diff(p2)
    if len(peak_diff) == 0:
        return p1
    ticks = []
    for i, d in enumerate(peak_diff):
        ratio = signal[p2[i+1]]/signal[p2[i]]
        if d < maxDistance and -0.25 > ratio and ratio > -4:
            ticks.append((p2[i], p2[i+1]))
    mask = np.array([True]*len(p1))
    for i, j in ticks:
        mask = mask & ((p1 < i) | (p1 > 500+j))
    return p1[mask]

def get_peaks(signal):
    p1_1, _ = find_peaks(signal, height=[5, 100])
    p1_2, _ = find_peaks(-signal, height=[5, 100])
    p1 = np.union1d(p1_1, p1_2)
    n_peaks, _ = find_peaks(-signal, height=[10, 100])
    p_peaks, _ = find_peaks(signal, height=[10, 100])
    p2 = np.union1d(n_peaks, p_peaks)
    p = remove_false_peak(signal, p1, p2, maxDistance=10)
    return np.intersect1d(p1_1, p), np.intersect1d(p1_2, p)

def extract_peak_feature(signal):
    p_peaks, n_peaks = get_peaks(signal)

    num_p, num_n = len(p_peaks), len(n_peaks)
    sig_peak_width = np.concatenate(
        [peak_widths(signal, p_peaks)[0], peak_widths(-signal, n_peaks)[0]])
    sig_peak_height = abs(signal[np.concatenate([p_peaks, n_peaks])])
    if num_n or num_p:
        height_mean = sig_peak_height.mean()
        height_max = sig_peak_height.max()
        height_min = sig_peak_height.min()
        height_median = np.median(sig_peak_height)
        width_mean = sig_peak_width.mean()
        width_max = sig_peak_width.max()
        width_min = sig_peak_width.min()
        width_median = np.median(sig_peak_width)
        return np.array([height_mean, height_max, height_min, height_median,
                         width_mean, width_max, width_min, width_median, num_p, num_n])
    else:
        return np.zeros(10)
</code></pre>
<p>At the same time, the added peak features had a large number of outliers, which I converted to missing values. I then imputed them by grouping the data on the attribute most correlated with the missing feature and filling each missing value with its group's mean.</p>
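<p>A minimal sketch of that imputation idea (column names and binning are assumptions, not the author's exact code):</p>
<pre><code>import pandas as pd

def fill_with_group_mean(df, target_col, n_bins=10):
    corr = df.corr()[target_col].drop(target_col).abs()
    key = corr.idxmax()                                   # attribute most correlated with target_col
    groups = pd.qcut(df[key], q=n_bins, duplicates="drop")
    group_means = df.groupby(groups)[target_col].transform("mean")
    df[target_col] = df[target_col].fillna(group_means)
    return df
</code></pre>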
<h3>Training:</h3>
<p>epochs: 25
Checkpoint monitor='val_loss'</p>
<h2>My final solution was an ensemble of three models:</h2>
<p>My single Model
<a href="https://www.kaggle.com/tarunpaparaju/vsb-competition-attention-bilstm-with-features?scriptVersionId=10690570">VSB Competition : Stacked Attention Capsule BiLSTM</a>
<a href="https://www.kaggleusercontent.com/kf/10818864/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..rLDkCCqNGYG5hfrj7oMt9A.ucfuA1j7MlivrTFzzAvVq7SpDSojFzTdXNVHqC5T7q0Vc4AjG5OP-2Pi0EngziSLz6FHxHoY4lqxvYj02gOtfMh9dGMJnCitLHWZ4JrZX10kzWvvLYhAmbtfm6Mk2ej46868zJzHFQ9RKnvcUjjBNQ.abKNk9CPW8feEC28o41osg/__results__.html">Handmade features</a></p>
<p>If you think there is something incorrect or that could be improved, please leave your comments! And thank you everybody for the great kernels and discussions.(From 6th but it is exactly what I want to say)</p> |
VSB Power Line Fault Detection | 38th Solution & How I think I survived the shake-up | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I don't think I'll publish my work, mostly because it'll take me too much time to fit what I did in the kaggle kernels. I also tried to feed spectrograms to my nn, but it did not work at all.</p>
<p>Yup my validation scheme is just a classic 5 folds CV</p>
<p>Hello everybody!</p>
<p><strong>The Start :</strong>
I joined this competition approx. a month ago. At the start, my main idea was to use scipy.signal to extract features about the peaks after applying wavelet denoising. I first went with LSTMs because of the hype around high-scoring public kernels, but could not get close to 0.7 LB (public).</p>
<p><strong>Downfalls about public Kernels :</strong>
Here are a few things I believe were overused and caused the shake-up.
- Keras CuDNNLSTMs (because of the randomness), move to PyTorch !
- Threshold selection for MCC optimization. It only caused overfitting on your oof data
- Checkpoints while training your model. Again, you're overfitting on your oof data
- Overall, I went with the idea that RNNs were predicting noise only. </p>
<p>Not sure about the last point, because I've got one of my very first submissions that got 0.674 private with the ideas above.</p>
<p><strong>To the magical submission : CV 0.659 - Public 0.652 - Private 0.667</strong></p>
<p>Yes that's pretty stable, that's why I trusted this model more than others.</p>
<p>Feature Engineering :
- Wavelet Denoizing
- "Bucketted" distribution features such as in the public kernels (exactly the same actually).
- The three phases were fed together to the network, but as the three channels of an image. My input shape was (3, 160, nb_features). I used the most common target among the three phases as the label.</p>
<p>I tried all the features I came up with, these ones achieved the best CV, I don't know why.</p>
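<p>A minimal sketch of how such a 3-phase "image" and its label can be assembled (names are mine, not the author's code):</p>
<pre><code>import numpy as np

def make_measurement_image(phase_features):
    # phase_features: three (160, nb_features) arrays, one per phase
    return np.stack(phase_features, axis=0).astype(np.float32)   # -> (3, 160, nb_features)

def measurement_label(phase_targets):
    # most common target among the three phases
    return int(np.sum(phase_targets) >= 2)
</code></pre>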
<p><strong>Model :</strong>
As my inputs were similar to images now, I went with an architecture similar to AlexNet, but with less parameters. The architecture looks like this approx. (I changed it a bit, not sure if this is the one that got me the best score) :</p>
<pre><code>import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()

        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(16, 32, kernel_size=3, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2))
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(32 * 6 * 6, 32),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(32, 1))

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), 32 * 6 * 6)
        x = self.classifier(x)
        return x
</code></pre>
100 epochs, 5 folds CV, batch_size 128, Learning rate 0.001</p>
<p>I worked on my gaming laptop which has 16Gb of ram and a 1060, so the 800000 long signals were quite a pain in the a** but I dealt with it.</p>
<p><strong>Final Word</strong>
This competition was hard because you could not trust your LB, and you could only trust your CV if it achieved results similar to LB ones. Well that's what I understood at least.</p>
<p>Thanks everyone for participating, I'm waiting to read about others' solutions.!</p>
<p>Thanks Russ!
Not at all, but there sure are loads of possibilities here!</p>
<p>Not really. I used my tree models mostly for feature selection, and they scored so poorly on the LB that I decided to keep my 2 daily submissions for more promising models (i.e. NNs). My best private LB is 0.674, and the submissions I picked scored 0.667 and 0.662 private. My best LightGBM model scored 0.60 private and 0.46 public. </p>
VSB Power Line Fault Detection | 44th solution... Thank you for your questions and advices | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My public lb ranks 107th and private lb ranks 44th.
The train set of this competition is very small, and its distribution is very different from the distribution of the LB dataset, so I don't think local CV can reflect your model's performance on the private LB. Maybe the key to this competition is fitting the public LB without overfitting.</p>
<p>My solution is as below:</p>
<ul>
<li><p>feature extraction
My FE is similar to this <a href="https://www.kaggle.com/braquino/5-fold-lstm-attention-fully-commented-0-694/data">kernel</a>, but there are some differences, as below. </p>
<p>I exclude the trend of the signal before feature extraction:</p>
<pre><code>rollingdata = rawdata.rolling(20000, center=True, min_periods=1, axis=0)
trend = rollingdata.mean()
res = (rawdata - trend).values
</code></pre>
<p>In order to mine more information from the raw signal, I divide it into 250 segments.</p>
<pre><code>whole_signal_len = 800000
processed_len = 250
bin_len = int(whole_signal_len/processed_len)
</code></pre>
<p>And I don't use the 25% and 75% quantiles, because I think the trend is useless for PD detection and some quantiles are redundant.</p>
<pre><code>mean_slice = signal_slice.mean(axis=0)
std_slice = signal_slice.std(axis=0)

std_top = mean_slice + std_slice
std_bot = mean_slice - std_slice

percentil_slice = np.percentile(signal_slice, [0, 1, 50, 99, 100], axis=0)
max_range = percentil_slice[-1] - percentil_slice[0]

relative_percentile = percentil_slice - mean_slice

out_slice = np.concatenate([
    percentil_slice.T,
    max_range.reshape(-1, 1),
    relative_percentile.T,
    std_top.reshape(-1, 1),
    std_bot.reshape(-1, 1),
    std_slice.reshape(-1, 1),
    mean_slice.reshape(-1, 1)
], axis=1)
</code></pre></li>
</ul>
<ul>
<li><p>model
There are two models in my solution.
```
class LSTM_softmax(nn.Module, myBaseModule):
def __init__(self, random_seed,
features_dims,
seq_len,
learning_rate,
lstm_out_dim=80,
lstm_layers=2,
linearReduction_dim=64,
env=None):
nn.Module.__init__(self)
myBaseModule.__init__(self, random_seed)</p>
<pre><code>self.l2_weight = 0.0000
self.lstm = nn.LSTM(features_dims, lstm_out_dim, lstm_layers, bidirectional=True, batch_first=True).cuda()
self.lstm_attention = Attention(lstm_out_dim*2, seq_len)
self.linear = nn.Linear(lstm_out_dim*4, linearReduction_dim).cuda()
self.relu = nn.ReLU()
self.dropout = nn.Dropout(0.10)
self.linear2 = nn.Linear(linearReduction_dim, 1).cuda()
self.sigmoid = nn.Sigmoid()
self.optimizer = torch.optim.Adam(self.parameters(), lr=learning_rate)
self.scheduler = torch.optim.lr_scheduler.LambdaLR(self.optimizer, lambda epoch_num: 1/math.sqrt(epoch_num+1))
self.loss_fn = torch.nn.BCELoss(reduction="sum")
self.vis = env
</code></pre>
<p>def forward(self, x):
h_lstm, _ = self.lstm(x)
h_lstm_atten = self.lstm_attention(h_lstm)</p>
<pre><code>lstm_max_pool, _ = torch.max(h_lstm, 1)
nn_out = torch.cat([lstm_max_pool, h_lstm_atten], 1)
nn_out = self.dropout(nn_out)
nn_out = self.relu(self.linear(nn_out))
out = self.sigmoid(self.linear2(nn_out))
return out
</code></pre></li>
</ul>
<p><code>
</code>
class LSTM_selfAttention_softmax(nn.Module, myBaseModule):
def <strong>init</strong>(self, random_seed,
features_dims,
seq_len,
learning_rate,
lstm_out_dim=80,
lstm_layers=2,
selfAttention_dim=80,
linearReduction_dim=64,
env=None):
nn.Module.<strong>init</strong>(self)
myBaseModule.<strong>init</strong>(self, random_seed)</p>
<pre><code> self.l2_weight = 0.0000
self.lstm = nn.LSTM(features_dims, lstm_out_dim, lstm_layers, bidirectional=True, batch_first=True).cuda()
self.selfAttention = selfAttention(selfAttention_dim, selfAttention_dim, 2*lstm_out_dim,dk=64)
self.lstm_attention = Attention(selfAttention_dim, seq_len)
self.linear = nn.Linear(2*(selfAttention_dim), linearReduction_dim).cuda()
self.relu = nn.ReLU()
self.dropout = nn.Dropout(0.10)
self.linear2 = nn.Linear(linearReduction_dim, 1).cuda()
self.sigmoid = nn.Sigmoid()
self.optimizer = torch.optim.Adam(self.parameters(), lr=learning_rate)
self.scheduler = torch.optim.lr_scheduler.LambdaLR(self.optimizer, lambda epoch_num: 1/math.sqrt(epoch_num+1))
self.loss_fn = torch.nn.BCELoss(reduction="sum")
self.vis = env
def forward(self, x):
h_lstm, _ = self.lstm(x)
h_lstm = self.selfAttention(h_lstm)
h_lstm_atten = self.lstm_attention(h_lstm)
lstm_max_pool, _ = torch.max(h_lstm, 1)
nn_out = torch.cat([lstm_max_pool, h_lstm_atten], 1)
nn_out = self.dropout(nn_out)
nn_out = self.relu(self.linear(nn_out))
out = self.sigmoid(self.linear2(nn_out))
return out
</code></pre>
<p>```
In order to reduce model complexity, I train 6 models for 3 phase data. Each phase data has 2 models. The final result is a weighted average of predictions of two models.</p>
<p><a href="https://github.com/qq563902455/VSB_Power_Line_Fault_Detection">github link</a>
If you think this is helpful, please star my GitHub project and upvote my discussion. Thank you!!!</p>
<p>I think the key to avoiding overfitting is feature extraction.
So we need to extract features based on 'Prior Knowledge', such as <a href="https://ieeexplore.ieee.org/document/7909221/">paper</a></p>
<p>Specifically, in my solution, I exclude trend from raw data.
Secondly, I think simple models have better generalization capabilities. So it may be a wrong decision to use complex model to improve public LB score.</p> |
VSB Power Line Fault Detection | 67th solution: kernel only | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used only kernel in this competition.
Please ask me if you are interested in.</p>
<p><a href="https://www.kaggle.com/bigswimatom/5-fold-lstm-attention-stateful-metrics-with-exp">https://www.kaggle.com/bigswimatom/5-fold-lstm-attention-stateful-metrics-with-exp</a></p> |
VSB Power Line Fault Detection | 6th Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, <a href="/huyunwei">@huyunwei</a> . I took the mean of probability and then rounded the values to get the target. I tried voting ensemble just in the beginning... I should have tried at end.</p>
<p>I think the problem to use the LB prediction is overfitting. The model can just rely on your past submissions (it's good to have a number of low correlated models), limited data (number of submits) and the public samples. I used it in conjunction with my local CV. For example, my best solution got:
- Local CV ranging from 0.69x to : 0.73x (models of the final ensemble)
- LB Prediction: 0.75x
- Public LB: 0.77x
- Private LB: 0.69x</p>
<p>I have a public/private LB of 0.776/0.686 to the GBT+RNN ensemble against 0.771/0.696 in the final one</p>
<p>Yeah! Thanks for the kernel, <a href="/tarunpaparaju">@tarunpaparaju</a> !</p>
<p>Thanks! With the limitation of 2 submits per day, i didn't tested the results of my final single models. On local CV I've got RNN > GBT > CNN. I will submit the results and tell you later</p>
<p>Hi <a href="/nyleve">@nyleve</a>. Tomas Vantuch commented about the regions where PD patterns tend to occur: <a href="https://www.kaggle.com/c/vsb-power-line-fault-detection/discussion/75771#460739">Problem description</a> <br>
The downsampling was just because the models were taking too long to be trained. I was trying a lot of configurations and wanted use the original signal at the end but...
I chose the maximum trying to reduce the resample effect in the peaks</p>
<p>Thanks, <a href="/nicupetridean">@nicupetridean</a>.
I started the competition using GBT and changed to RNN at the time when the 0.694 public kernel was released. I also tried some few experiments with CNN but i didn't get good results because, I guess, was using raw data (have to confirm it) and just back to try this model in the beginning of march.
I was not using pseudo-labelling when experimenting with GBT+RNN so the ensemble was instable. I will submit some single model and partial ensemble predictions and update here de results.</p>
<p>Hi, <a href="/ratthachat">@ratthachat</a>. Thank you and no problem about the questions!!!
- I used the whole signal, aligned by the phase 0 signal almost until the end. Then, I noticed that cropping and aligning each signal gave me a boost on local CV to the GBT model. Since I did not have enought time to fine tuning and experimenting with each model, I changed all of them.
- For the CNN models, I was not able to get even decent results without denoising. I started using raw data with RNN, but after the experiments with CNN, I changed all my functions of preprocessing.
- I used the ensemble of predictions as targets to the test set. Then I trained the models with the train+test data, with increasing sample weights for the test data, and a proportion of 50/50 in the training batches.</p>
<p>To be honest, it was the first time I tried LB prediction. I heard that from Heng CherKeng: <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/63984">common tricks in kaggle image segmentation problems</a>. What I did was to train a Lasso using my submissions to predict (regression problem) the Public Score. So, input was a submission and output was the public LB</p>
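<p>A hedged sketch of that idea (variable names are placeholders; each row of <code>past_submissions</code> is one earlier submission's predictions and <code>public_scores</code> are the LB scores they obtained):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Lasso

lb_model = Lasso(alpha=0.001)
lb_model.fit(past_submissions, public_scores)

# estimate the public LB score of a new candidate submission before spending a submit on it
predicted_lb = lb_model.predict(new_submission.reshape(1, -1))[0]
</code></pre>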
<p>Many thanks, Russ. I tried some multilabel crossvalidation schemes based on clustering, number of peaks, statistics from fft but at the end I did a stratified classification on targets with the validation set using 50% data from training examples and 50% from pseudo labeled</p>
<p>Thanks to VSB/Enet Centre and Kaggle for this great competition. I had a lot of fun with that in the last months!</p>
<p>I'm going to brief the main points I believe have helped in my final score. I'm not a Pro (yet!), so if you think there is something incorrect or that could be improved, please leave your comments!</p>
<p><strong>My final solution was a ensemble of three main branchs:</strong></p>
<ol>
<li>Gradient Boosting Trees (Lightgbm)</li>
<li>Recurrent Neural Network (GRU and LSTM)</li>
<li>Convolutional Neural Network (Custom CNN, Resnet50, DenseNet101)</li>
</ol>
<p><strong>Preprocessing:</strong>
The 3 signal phases were used as just one sample. Each signal was aligned (using the 50Hz phase of the fourrier transform) to start where the signal crosses the axis from the negative to positive (𝜋/2) and the last quarter of the signal was removed.
In each model, I used a few denoised versions (varying thresholds) of this aligned/cropped signal:
1. Wavelet
- Following the MaxHalford's repository: <a href="https://github.com/MaxHalford/kaggle-vsb-power/blob/master/scripts/extract_solo_features.py">extract_solo_features.py</a>
2. Fourrier / IQR
- Eliminate low frequencies with Fourrier transform and points with interquartile range</p>
<p>I ended up with 4 signal versions (wavelet/Fourier with 2 threshold levels)</p>
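<p>A minimal sketch of the alignment step described above (estimate the phase of the 50 Hz fundamental from the FFT and roll the trace so it starts at the negative-to-positive zero crossing; the sign conventions are my assumption and may need flipping):</p>
<pre><code>import numpy as np

def align_signal(sig):
    n = len(sig)
    fundamental = np.fft.rfft(sig)[1]      # bin 1 = one cycle per trace = 50 Hz
    phase = np.angle(fundamental)
    shift = int(np.round((phase + np.pi / 2) / (2 * np.pi) * n)) % n
    return np.roll(sig, shift)
</code></pre>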
<p>To RNN and CNN, I undersampled the 600000 size signal to 300000 taking the position with maximum absolute value at each pair of points:
<code>und_signal = np.where(np.abs(signal[::2])>np.abs(signal[1::2]), signal[::2], signal[1::2])</code></p>
<p><strong>Inputs:</strong>
1. GBT:
- The features were min, max, mean, std, skew and kurtosis of:
- Peaks count, height, width, prominences: <a href="https://github.com/MaxHalford/kaggle-vsb-power/blob/master/scripts/extract_solo_features.py">extract_solo_features.py</a>
- Entropy & Fractal: <a href="https://www.kaggle.com/tarunpaparaju/vsb-competition-attention-bilstm-with-features">vsb-competition-attention-bilstm-with-features</a>
- Slope of lines connecting positive/negative peaks to the next negative/positive peaks
- Ratio of positive peaks to the minimum of the next maxDistance (defined in Vantuch's Thesis) points.
- Ratio of negative peaks to the maximum of the next maxDistance points.
- R² and weights of some polynomial regressions fit in positive/negative peaks</p>
<ol>
<li><p>RNN:</p>
<ul><li>Preprocessed signal reshaped in 40 columns</li></ul></li>
<li><p>CNN:</p>
<ul><li>Preprocessed signal reshaped in 200 columns</li></ul></li>
</ol>
<p><strong>LB Prediction</strong>
At some point in the competition, I built a Lasso to predict the LB score of a submission. I knew that it would create problems with overfitting so I tried to avoid it with:
- Threshold selection: I fixed all thresholds at the value 0.5
- Pseudo-Labelling: I used it in all models and I reduced the sample weights of test examples which had low weight in the trained Lasso
- Ensemble: I should have ended up with 24 models (6 models x 4 preprocessed signals) but I got just 16 (lack of time...!)</p>
<p>I was not able to create a stable stacking so i used a very elaborate strategy to ensemble the models: Take the arithmetic mean of the predictions...</p>
<p>The last but not the least, I wanted to thank you everybody for the great kernels and discussions. I learned a lot and got my first gold medal!</p>
<p>That's all folks!</p>
<p>I'm not a RNN expert, so I don't know if it is a too much long sequence to that kind of model but the data was a bit sparse (after denoising), maybe it has helped.
- The rationale of CNN was easier: a kernel matrix multiplying 2 subsequent lines is using information of 2 points, in the signal, separated by 200 steps (lag-200). So, the number of columns control the distance of the information used in the convolution (at least in the first layer)
The RNN decision was a bit arbitrary. I chose to divide each 𝜋/2 arc of the signal in 10 parts. I was using 10000x40 before crop the signal
- The full RNN training was taking about 3-4 hours: 5-Folds x 20 Epochs x 2 minutes. I used a GTX1080TI in this competition.</p>
<p>Almost that...
I reshaped the 300,000 points signal to 7500x40 and concatenate the 3 phases (axis=-1)
So the input of RNN was a 2904x7500x120 matrix and the input to CNN was a 2904x1500x200x3 matrix</p> |
VSB Power Line Fault Detection | 73rd place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone!</p>
<p>Now that this exciting competition is over, it is interesting to read about other people's solutions, and I thought I could also share mine.
This is the first Kaggle competition I ever took part in and I'm very excited about my 73rd rank.
I missed the silver medal by only one place! Maybe I was just lucky in the view of the major shakeup we observed here. Nevertheless, I'd like to share the key aspects of my solution with you. </p>
<p>From what I read in the discussions, many people used DWT denoising and RNN / LSTM architectures to tackle this problem. My approach is a bit different: I used a very simple curve-fitting based preprocessing routine and a CNN type of architecture.</p>
<h2>Preprocessing</h2>
<p>I fitted a sine curve with a fixed frequency of 50 Hz (corresponding to the frequency of the underlying electric grid) to each signal. Then I subtracted the fitted sine curve from the original signal. I used the calculated phase shift of the fitted curve to eliminate the phase shift in the original data. I did this, because I read in a paper that the occurrence of the PD pattern was correlated to the phase of the sine wave.</p>
<p>Here is the code to preprocess one signal <code>sig</code>:</p>
<pre><code># define the parameterized model for the fitted sine curve
def model(t, alpha, phi):
    return alpha * np.sin(2 * math.pi * 50 * t + phi)

tdata = np.linspace(0, 0.02, 800000)

# fit the sine curve
popt, pcov = curve_fit(model, tdata, sig, p0=[30, 0])

# subtract the sine wave from the original signal
sine = model(tdata, *popt)
sig_without_sine = sig - sine

# eliminate the phase shift
phi = popt[1]
sign = np.sign(popt[0])
if (sign < 0):
    shift = int((math.pi + phi) * 800000.0 // (2*math.pi))
else:
    shift = int(phi * 800000.0 // (2*math.pi))

sig_without_sine_and_shifted = np.roll(sig_without_sine, shift)
</code></pre>
<h2>Model architecture</h2>
<p>I utilized a neural net with convolutional layers, max pooling and a dense layer as the output layer.
The model takes the preprocessed signals as inputs. Some people wrote that it would not be possible to train a conv net accepting this input size, but it worked quite well for me. </p>
<p><code>
model = Sequential()
model.add(Conv1D(16, kernel_size=11, activation='relu', input_shape=(800000, 1)))
model.add(MaxPooling1D(4))
model.add(Conv1D(16, kernel_size=11, activation='relu'))
model.add(MaxPooling1D(4))
model.add(Conv1D(32, kernel_size=11, activation='relu'))
model.add(MaxPooling1D(4))
model.add(Conv1D(32, kernel_size=11, activation='relu'))
model.add(MaxPooling1D(4))
model.add(Conv1D(64, kernel_size=11, activation='relu'))
model.add(MaxPooling1D(4))
model.add(Conv1D(64, kernel_size=11, activation='relu'))
model.add(MaxPooling1D(4))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
</code></p>
<h2>Training</h2>
<p>I used mean squared error as the loss function with a tuned learning rate. I tried binary cross entropy first, but it gave me worse results. I trained the model on two epochs of the data with class weights 20:80 (result of manual parameter tuning).</p>
<p>In fact, I did not train only one model, but 25 of them, on different splits of the training data. I used 80% of the data for training and 20% for validation. None of the 25 models showed clear signs of overfitting, so in the end I kept all of them. </p>
<p><code>
model.compile(loss='mean_squared_error', optimizer=optimizers.SGD(lr=0.005))
model.fit_generator(train_generator, epochs=2, class_weight = {0: 20., 1: 80.})
</code></p>
<h2>Stacking</h2>
<p>I used a weighted voting scheme to combine the predictions of the 25 models into one final output. I trained the weights on the complete training set using a small NN with basically only one dense layer taking 25 inputs and producing one output. I achieved an MCC of 0.70 on the training set. </p>
<h2>A very simple, yet effective trick</h2>
<p>I observed that in almost all cases, all three signals taken from the same measurement had the same label in the training data, so I assumed that this would also be the case in the test data. Therefore, I applied a final correction step where I inspected the predictions of the three corresponding signals, and if one of the signals got a different label than the other two, I changed that label to match the majority vote. Therefore, all three signals in one measurement always get the same prediction. This increased my final score on the training set from 0.70 to 0.72. </p>
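<p>A minimal sketch of that correction (it assumes the usual <code>signal_id</code> / <code>id_measurement</code> metadata columns and a <code>target</code> column in the submission):</p>
<pre><code>def enforce_measurement_consensus(sub, meta):
    # sub, meta: pandas DataFrames
    df = sub.merge(meta[["signal_id", "id_measurement"]], on="signal_id")
    majority = (df.groupby("id_measurement")["target"].transform("mean") >= 0.5).astype(int)
    df["target"] = majority                # all three phases now share the majority label
    return df[["signal_id", "target"]]
</code></pre>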
<h2>Leaderboard scores</h2>
<p>My final model achieved 0.72498 on the training set, 0.52441 on the public leaderboard and 0.65595 on the private leaderboard. </p>
<p>This was not my best scoring model on the public leaderboard.
My best scoring model on the public leaderboard was a simpler ensemble made up of only 5 CNNs with a slightly different architecture and slightly different preprocessing routine. This model scored 0.62432 on the private leaderboard (also not too bad). </p>
<p>Nevertheless, I also selected the ensemble of 25, mainly because I had put quite some time into tuning and training it ;-) Turned out to be a good decision and got me my first bronze medal.</p>
<p>Please let me know what you think of my approach or if you have any questions. </p>
<p>And thank you all for taking part in this very interesting competition and sharing your thoughts. </p> |
VSB Power Line Fault Detection | 9th Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for your comment!<br><br></p>
<p>Unfortunately, after I decreased val-auc to 0.7, the problem was not completely solved.
I don't know whether this result is from the gap of neg/pos-ratio between train and test.<br><br></p>
<p>But compared to un-filtered / un-denoised models, the gap (especially between public LB and private LB) seems to get small.<br><br></p>
<p><strong>High-pass filtering and DWT-denoising</strong>
<em>local cv: public lb: private lb</em>
0.765: 0.688: 0.674
0.728: 0.677: 0.673
0.710: 0.677: 0.677<br><br></p>
<p><strong>Without any filtering and denoising</strong>
<em>local cv: public lb: private lb</em>
0.753: 0.682: 0.611<br><br></p>
<p>My best public LB (0.766) keeps 0.680 in private.
Since this score is resulted from repetitive hard-votings, CV score is missing.</p>
<p>Congratulations to all the winners!
And thanks a lot to VSB/Enet Center and Kaggle for this exciting competition. Since this is my first Kaggle competition, I'm surprised and glad to this honorable result.<br><br></p>
<p>I'm going to briefly share all of you my 10th place solution, which keeps the score of 0.688 in the private LB. <br><br></p>
<p>First of all, I have to tell you that this solution is not my best public LB model (RNN), which is in the end 13th place in public LB (0.766). <br>
I think that the keys of this competition are high-pass filtering and DWT-denoising. In this point, I owe a lot to Jack's <a href="https://www.kaggle.com/jackvial/dwt-signal-denoising">kernel</a>.<br><br></p>
<p>As many other guys mentioned in public kernel and discussion, the problem of this competition is the discrepancy between train / test distributions(and the unstability of CV / LB resulted from that).
So, I chose as a final submission more <em>conservative</em> RNN model, which kept modest score in public LB but might be more robust in my view.<br><br></p>
<p>In order to choose a more robust model, I spent time to decrease val_auc score in adversarial validation.
I'm going to show what I thought and tried as below.<br><br></p>
<ol>
<li>Without any filtering / denoising, the discrepancy between train / test distributions was huge (val-auc was over 0.96 in adversarial validation), so I thought that a model without any preprocessing was in danger of a shake.</li>
<li>With high-pass filtered and DWT-denoised, val-auc decreased to under 0.78. Therefore, I selected high-pass filtered and DWT-denoised model as a base one (high-pass filtering had the stronger influence).</li>
<li>Next, I cut down features which contributed to increase val-auc in adversarial validation, and finally succeeded to decrease val_auc to under 0.7. (Notwithstanding, the instability of CV / LB was not completely solved...)<br><br></li>
</ol>
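<p>For readers unfamiliar with the technique, a minimal sketch of the adversarial validation used above (label train vs. test and see how well a classifier separates them; the model choice is illustrative):</p>
<pre><code>import numpy as np
import lightgbm as lgb
from sklearn.model_selection import cross_val_score

def adversarial_auc(train_features, test_features):
    X = np.vstack([train_features, test_features])
    y = np.r_[np.zeros(len(train_features)), np.ones(len(test_features))]
    clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

# ~0.5 -> train and test look alike; close to 1.0 -> they are easy to tell apart
</code></pre>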
<p>The architecture of my RNN model is simple and not novel.
Similar to other guys, I selected <a href="https://www.kaggle.com/braquino/5-fold-lstm-attention-fully-commented-0-694">Bruno-based</a> BiLSTM x 2 + Attention model and <a href="https://www.kaggle.com/tarunpaparaju/vsb-competition-attention-bilstm-with-features">Tarun-based</a> BiLSTM x 2 + Attention + feature concatenation model.<br><br></p>
<p>And I hard-voted 8 predictions (each resulted from stratified 5- or 4-fold CV models) which were a little bit different from each other in terms of features and random seeds.
When I selected models, I kept in mind to choose ones whose train / validation losses were relatively small(because of CV / LB scores' instability).</p>
<p>Moreover, I used recently released AdaBound optimizer. Although It contributed to increase local CV score, I don't know it is a good choice especially in this easily-overfitting competition.</p>
<p>Thanks, lucaskg.<br></p>
<p>As I'm not an expert to signal processing, I used just the same parameters as Jack's.
And I used the same inputs and the same RNN architecture.</p>
<p>Thanks, Strideradu!<br><br></p>
<p>I mean that I simply kept in mind to avoid the overfit to the train.
(i.e. I didn't choose high local CV / low public LB models)</p>
<p>Thanks, Jack.
I learned a lot from your excellent kernel!</p>
<p>Oh, sorry for missing 3).<br><br></p>
<p>I used keras ModelCheckpoint callbacks in a usual way. <em>(monitor="val_matthews_corr_coeff", save_best_only=True, mode="max")</em>
What I mean above is that I tried to adjust the num of epochs so as to avoid the risk of the overfitting to the train.<br><br></p>
<p>The more the num of epochs I set (I mean <em>thousands</em> of epochs instead of tens or hundreds), the higher the local CV score got. But simultaneously, the gap between train / validation losses widened, and the public LB of such model got lower.
I adjusted the num of epochs lower than the threshold from which the gap began widening.</p>
<p>You too! Thanks, YaGana.</p>
<p>Not at all, <a href="/ratthachat">@ratthachat</a> ! Thank you for your question.
My answers are below.<br><br></p>
<p>1)
features I used
* mean
* max_range
* percentile (0, 0.01, 0.05, 0.25, 0.5, 0.75, 0.95, 0.99, 1)
* relative_percentile (0, 0.01, 0.05, 0.25, 0.5, 0.75, 0.95, 0.99, 1)<br></p>
<p>I added percentile 0.05 and 0.95 but they affected the val_auc little.<br><br></p>
<p>And features below which Tarun provided us in a public kernel
* perm_entropy
* svd_entropy
* app_entropy
* sample_entropy
* katz_fd
* higuchi_fd<br></p>
<p>Features which Tarun provided us affected effectively to decrease the val_auc score.<br><br></p>
<p>features I didn't use
* std
* std_top
* std_bot
* petrosian_fd<br><br></p>
<p>2)
My inputs are <br>
* input_1: 2904 x 160 x 60(Bruno-based features)
* input_2: 2904 x 6(Tarun-based features, I mean entropies and fds above.)<br></p>
<p>I concatenated a and b below, and then input them to the Dense layers.
a: input_1 -> BiLSTM -> BiLSTM -> Attention
b: input_2<br><br></p>
<p>3)
Since I didn't have enough time and submission data to make an exact experiment how they affected to the scores, I simply didn't use the removed features.<br><br></p>
<p>4)
The public LB score is 0.72213.</p>
<p>Thanks a lot, <a href="/tarunpaparaju">@tarunpaparaju</a> !
You really made a great contribution for this competition!</p> |
VSB Power Line Fault Detection | My Best Submissions (Public 0.365 → Private 0.694) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My best private score is 0.694(gold rank), but public score is 0.365 and 0.511.
What on earth is that? </p>
<ol>
<li>LGBM with basic statistics(mean,minmax,std, etc.) of raw signals + basic statistics of FFT denoised signals.</li>
</ol>
<p>Public Private
0.365 → 0.694</p>
<p>Local CV
0.63991</p>
<ol>
<li>LGBM with basic statistics(mean,minmax,std, etc.) of raw signals and features of Host’s papers (Mean width of peaks ,Max width of peaks etc.) of FFT denoised signals.</li>
</ol>
<p>Public Private
0.511 → 0.694</p>
<p>Local CV
0.67035</p>
<p>I can’t select these submissions. It’s so difficult.</p> |
VSB Power Line Fault Detection | My Solution to this Competition with Code (Score: 0.58655) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You can find an explanation of my solution with the code used in this Github repository:</p>
<p><a href="https://github.com/ammar1y/My-Solution-to-VSB-Power-Line-Fault-Detection-Competition">https://github.com/ammar1y/My-Solution-to-VSB-Power-Line-Fault-Detection-Competition</a></p>
<p>The solution is simple and can be summarized in the following steps:
EDA -> Signal filtering -> Peak calculation and other feature engineering techniques -> Modeling by combining multiple models.</p> |
VSB Power Line Fault Detection | My stable solution to get a bronze medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>After reviewing my submissions, I found that
- Solutions with a single LSTM model tend to have a huge shakedown
- Solutions with a single LightGBM model tend to have a huge <strong>shakeup</strong> (such as public 0.521 and private 0.698).
- Ensembling/Stacking of LSTM and LightGBM models were stable and had just ± 0.02.</p> |
VSB Power Line Fault Detection | Overview of 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulation to all the winners and thanks to VSB for hosting the competition.</p>
<p>I'll provide an overview of the approach I took here. If you want to take a closer look I have uploaded the <a href="https://www.kaggle.com/mark4h/vsb-1st-place-solution">winning kernel</a>.</p>
<p>Like everyone has been discussing, it was challenging to cope with the peculiarities of the dataset. I didn't use any specific CV technique to prevent over-fitting, instead I spent most of the time trying to understand the data. In the end my best model was a simple LightGBM model, with some preprocessing of the data and a small number (9 in total) of features based on the patterns common to the train and test data. </p>
<p>Preprocessing:</p>
<ul>
<li>I spent some time trying to denoise the traces but in the end came to the conclusion that it was just as likely to remove signal as it was noise.</li>
<li>Instead I focused on finding a way to pick a threshold for each trace to separate the peaks (signal) from the noise.</li>
<li>First I found the local maxima in the data with a window size of 51. I picked the window size based on the data.</li>
<li>I then used knee point detection to find the noise floor. I ordered the peaks from highest to lowest and then looked for the point where the height of the peaks flattened off.</li>
</ul>
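<p>A hedged sketch of one common knee-point heuristic for picking that noise-floor threshold (sort the peak heights and take the point farthest below the line joining the endpoints; not necessarily the author's exact method):</p>
<pre><code>import numpy as np

def knee_threshold(peak_heights):
    h = np.sort(np.asarray(peak_heights, dtype=float))[::-1]   # highest to lowest
    n = len(h)
    if n < 3:
        return h[-1] if n else 0.0
    x = np.arange(n, dtype=float)
    chord = h[0] + (h[-1] - h[0]) * x / (n - 1)                 # line between the endpoints
    knee = int(np.argmax(chord - h))                            # farthest below the chord
    return h[knee]
</code></pre>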
<p>Features:</p>
<p>For each peak identified in the preprocessing I then calculated a few features, of which the first 2 were the most important:</p>
<ol>
<li>The absolute height of the peak</li>
<li>The RMSE between the peak and a sawtooth shaped template. I picked this because I noticed this sort of shape was common in traces marked as faulty</li>
<li>The absolute ratio of the peak to the next data point</li>
<li>The absolute ratio of the peak to the previous data point</li>
<li>The distance to the maximum, of opposite polarity to the peak, within a window of 5 either side of the peak</li>
</ol>
<p>For the features I fed into the LightGBM model I then aggregated the peak features for each measurement ID. The model was trained on the measurement ID instead of the signal ID because there were a number of examples in the training data where it looked as if all 3 signal traces had been marked as faulty even though only one had any obvious signs of a fault. Some of the features were calculated for only certain portions of the phase of each signal so I phase resolved the position of each of the peaks. The features I then calculated, of which the first 5 were the most important, for each measurement ID were:</p>
<ol>
<li>The total number of peaks</li>
<li>The number of peaks in quarters 0 and 2</li>
<li>The number of peaks in quarters 1 and 3</li>
<li>The mean “sawtooth” RMSE value in quarters 0 and 2</li>
<li>The std height in quarters 0 and 2</li>
<li>The mean height in quarters 0 and 2</li>
<li>The mean value of the ratio with the previous data point feature in quarters 0 and 2</li>
<li>The mean value of the ratio with the next data point feature in quarters 0 and 2</li>
<li>The mean value of the absolute distance to the opposite polarity maximum</li>
</ol>
<p>Finally, I trained a LightGBM model using 5 fold CV but repeated 25 times using a different seed for the CV split each time.</p>
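<p>A minimal sketch of that repeated-seed CV scheme (parameters are illustrative; <code>X</code>, <code>y</code>, <code>X_test</code> are assumed to be numpy arrays of the aggregated measurement features):</p>
<pre><code>import numpy as np
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold

def repeated_cv_predict(X, y, X_test, n_repeats=25, n_splits=5):
    test_pred = np.zeros(len(X_test))
    for seed in range(n_repeats):
        folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
        for trn_idx, _ in folds.split(X, y):
            model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
            model.fit(X[trn_idx], y[trn_idx])
            test_pred += model.predict_proba(X_test)[:, 1]
    return test_pred / (n_repeats * n_splits)
</code></pre>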
<p>It would be great to get peoples thoughts on this approach.</p>
<p>mark4h</p>
<p>Thanks everyone.</p>
<p><a href="/sergeyzlobin">@sergeyzlobin</a>, that’s right, no neural network, no blending, just a single LightGBM model.</p> |
VSB Power Line Fault Detection | Unexpected Bronze Medal Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Moreover making notes would be a good practice, i shall note down the CV and LB’s after every submission and then select the two submission, one best with CV, other best in LB</p>
<p>Thats true, sometimes i learn the things hard way</p>
<p>Yes this was truly unexpected.</p>
<p>CV Score = 0.6787. Thats why the experts suggest to trust the CV or LB.</p>
<p>Just reviewed my score submissions and found that I could have won a bronze - had i selected this as my final submission, but my public lb score was too low to convince me</p>
<p>Here was my solution for the bronze medal
<a href="https://www.kaggle.com/roydatascience/eda-iso-pca-lle-stratified-lstm-attention?scriptVersionId=11248695">https://www.kaggle.com/roydatascience/eda-iso-pca-lle-stratified-lstm-attention?scriptVersionId=11248695</a> </p> |
VSB Power Line Fault Detection | single silver models (hard voting, mixup, SEResNet) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VSB Power Line Fault Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I share two of my single "possibly" silver models.</p>
<p>To my disappointment, I don't select these model as final model,
so I missed the silver medal because I underestimated the local scores. <br>
However, these models contributed to my final ensemble ones and brought me the bronze medal. <br>
These 2 models are inspired by <a href="https://www.kaggle.com/c/tensorflow-speech-recognition-challenge">TensorFlow Speech Recognition Challenge</a> I joined in past. </p>
<ol>
<li><p><strong>bigru + attention + mix-up + hard voting (private LB 0.668 (same as 38th) / public LB 0.568)</strong>
It's similar model with <a href="https://www.kaggle.com/braquino/5-fold-lstm-attention-fully-commented-0-694">Bruno Aquino's kenel</a>, but 2 main different points. </p>
<ul><li><em>hard voting of each cv model's classification (threshold=0.5)</em>
Instead of tuning the threshold with the averaged probabilities of those by cv models,
the classes of the samples are decided by majority of the cv models with the fixed threshold 0.5. I guess this hard voting bring the model robustness. </li>
<li><em>mix-up</em>
This augmentation method is introduced by the paper <a href="https://arxiv.org/abs/1710.09412">mixup: Beyond Empirical Risk Minimization</a>. <br>
The augmentation generates a new sample from 2 training samples based of the adding. <br>
<code>math
\tilde{x} = λx_i +(1−λ)x_j
\tilde{y} = λy_i +(1−λ)y_j
</code>
If \tilde{y} >= 0.5 \, I labeled the new sample as class 1. <br>
I used 3 as the mix-up ratio, but I don't have enough time to tune the ratio.
Please see the blog post [the detail](<a href="http://www.inference.vc/mixup-data-dependent-data-">http://www.inference.vc/mixup-data-dependent-data-</a>
augmentation/) about mix-up. <br>
The mix-up improved private LB to 0.668 from 0.599 but worsened public LB to 0.568 from
0.632. Therefore, it requires more careful evaluation to know whether mix-up really works. </li></ul>
<p>In addition, I implement it with PyTorch instead of Keras (I don't know what effect that brings the model performance. ) <br>
<br></p></li>
<li><p><strong>SE-ResNet 50 + window (chunk) 40 step 20 summary feature + hard voting (private LB 0.663 (same as 53th) / public LB 0.587)</strong>
This model composed of 2 specific features. </p>
<ul><li><p><em>2D-1D SE-Resnet 50</em>
I learned cnn works well for signal classification tasks in the above speech recognition
competition. That drives me to adopt cnn models for the power line signals. <br>
Instead of simple cnn models, I used SE-ResNet 50 expecting it to find more deep features. <br>
(I also tried more simple CNN models with attention, but they are inferior to SE-ResNet in both public/private LB. ) </p>
<p>To support signal inputs, I changed 3 points from original 2D SE-ResNet for image.</p>
<ul><li>input concatenated features from 3 phase in height axis.</li>
<li>2D 1st conv with kernel size = (n feature(height), your kernel size for time step axis) . It weaves the features into 1D time steps and the channels. </li>
<li>following 1D conv blocks </li></ul></li></ul>
<p>These steps are inspired by <a href="https://arxiv.org/abs/1703.05051">Deep Learning With Convolutional Neural Networks for EEG Decoding and Visualization by Schirrmeister etc.</a> and <a href="https://arxiv.org/abs/1611.01942">DeepSense: A Unified Deep Learning Framework for Time-Series Mobile Sensing Data Processing by Yao etc.
arXiv:1611.01942v2</a>. </p>
<ul><li><em>smaller window size to extract summarized feature</em>
In the public kernels, the features are extracted with window size 5000. <br>
However, I want to leave feature extracting to cnn as much as possible . <br>
Therefore the inputted features are extracted with more smaller window (40) and step (20) size. <br>
I also tried to train the model with the raw 3 phase signals or the features from smaller window
sizes, but I cannot do due to oom. </li></ul>
<p>The SE-ResNet also seems over-fitting to privte LB and it leaves improvements for the model parameters. </p></li>
</ol>
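<p>A minimal numpy sketch of the mix-up augmentation mentioned for model 1, including the "label the mixture as class 1 if the mixed target is at least 0.5" rule; how λ is drawn is my assumption, since only a mix-up ratio of 3 is mentioned:</p>
<pre><code>import numpy as np

def mixup_batch(x, y, alpha=0.4, rng=np.random):
    # x: (batch, ...) features, y: (batch,) 0/1 labels
    lam = rng.beta(alpha, alpha, size=len(x))
    perm = rng.permutation(len(x))
    lam_x = lam.reshape(-1, *([1] * (x.ndim - 1)))
    x_mix = lam_x * x + (1 - lam_x) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, (y_mix >= 0.5).astype(np.float32)
</code></pre>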
<p>In conclusion, the both of them seem to include some hints but requires more improvements
to obtain the high scores in both private and public LB.
I'll share my codes in near feature.</p> |
Vesuvius Challenge - Ink Detection | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>Summary</h3>
<ul>
<li>pretrain 3D encoder using the <a href="https://github.com/educelab/ink-id/blob/develop/inkid/model/model.py" target="_blank">ink-id InkClassifier3DCNN</a> sampled size = 64, stride = 32 (improved CV by 0.03-0.06)</li>
<li>3D encoder -> pool along z-axis -> 2D FPN Decoder</li>
<li>Simple self-implemented attention pooling</li>
<li>Data Sampling -> 1 region containing positive ink : 8 region has no ink at all</li>
<li>Denoiser</li>
</ul>
<h3>Data preparation</h3>
<ul>
<li>5 folds, split ink_id = 2 into 3 equal regions along height</li>
<li>Resolution of 224 , stride of 56, slice 20-36</li>
<li>Sampled regions with ink and regions without ink using a ratio of 1:8 to generate more data, this also makes training more stable as compared to sampling with moving window</li>
<li>Augmentation:</li>
</ul>
<pre><code># NOTE: keyword names (and a few values) were lost when this post was scraped;
# they are restored below from the standard albumentations signatures, and a few
# fully-stripped arguments are omitted, so this may differ slightly from the
# original code.
[
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.Affine(rotate=0, translate_percent=0.1, scale=[0.9, 1.5], shear=0, p=0.5),
    A.OneOf([
        A.RandomToneCurve(scale=0.3, p=0.2),
        A.RandomBrightnessContrast(brightness_limit=(-0.1, 0.2), contrast_limit=(-0.4, 0.5), p=0.8)
    ], p=0.5),
    A.OneOf([
        A.ShiftScaleRotate(shift_limit=None, scale_limit=[-0.15, 0.15], rotate_limit=[-30, 30], interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_CONSTANT, value=0, mask_value=None, shift_limit_x=[-0.1, 0.1], shift_limit_y=[-0.2, 0.2], p=0.5),
        A.ElasticTransform(alpha=1, sigma=20, alpha_affine=10, interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_CONSTANT, value=0, mask_value=None, p=0.5),
        A.GridDistortion(num_steps=5, distort_limit=0.3, interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_CONSTANT, value=0, mask_value=None, p=0.5),
    ], p=0.5),
    A.OneOf([
        A.GaussNoise(var_limit=[10, 50], p=0.5),
        A.GaussianBlur(p=0.5),
        A.MotionBlur(p=0.5),
    ], p=0.5),
    A.CoarseDropout(max_holes=3, max_height=0.15, max_width=0.25, mask_fill_value=0, p=0.5),
    A.Normalize(
        mean=[0] * in_chans,
        std=[1] * in_chans,
    ),
    ToTensorV2(transpose_mask=True),
]
</code></pre>
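<p>For illustration, a small sketch of the 1:8 ink/no-ink region sampling mentioned in the data preparation list (the function and the region representation are my own, not taken from the original code):</p>
<pre><code>import random

def sample_regions(ink_regions, empty_regions, neg_per_pos=8):
    """Keep every region containing ink and pair it with neg_per_pos ink-free regions."""
    sampled = list(ink_regions)
    n_neg = min(len(empty_regions), neg_per_pos * len(ink_regions))
    sampled += random.sample(empty_regions, n_neg)
    random.shuffle(sampled)
    return sampled

# usage: regions are e.g. (x0, y0) top-left corners of 224x224 crops
train_regions = sample_regions(ink_regions=[(0, 0), (224, 0)],
                               empty_regions=[(x, 448) for x in range(0, 2240, 224)])
print(len(train_regions))
</code></pre>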
<h3>Model</h3>
<ul>
<li><a href="https://github.com/kenshohara/3D-ResNets-PyTorch" target="_blank">resnet3D</a> as backbone</li>
<li>Attention pooling</li>
</ul>
<pre><code># NOTE: the class name, constructor arguments and the axis constants were lost
# in extraction; "AttentionPool3D" and the depth axis (dim=2) are reconstructions.
class AttentionPool3D(torch.nn.Module):
    def __init__(self, depth, height, width):
        super().__init__()
        self.attention_weights = nn.Parameter(torch.ones(1, 1, depth, height, width))
        self.softmax = nn.Softmax(dim=2)   # softmax over the depth (z) axis

    def forward(self, x):                  # x: (batch, channels, depth, height, width)
        attention_weights = self.softmax(self.attention_weights)
        pooled_output = torch.mul(attention_weights, x)
        pooled_output = torch.sum(pooled_output, dim=2)   # collapse the depth axis
        return pooled_output
</code></pre>
<ul>
<li>FPN as decoder</li>
<li>Denoiser: this was inspired by diffusion models, where the model predicts the noise</li>
</ul>
<pre><code># NOTE: the encoder name/weights arguments were stripped in extraction and are
# left as placeholders here.
if cfg.use_denoiser:
    self.denoiser = smp.Unet(
        encoder_name=...,        # value lost in extraction
        encoder_weights=...,     # value lost in extraction
        in_channels=1,
        classes=1,
        activation=None,
    )

# in the forward pass:
if self.cfg.use_denoiser:
    noise = self.denoiser(masks)
    masks = masks - noise
</code></pre>
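<p>Putting the pieces together, a shape-only sketch of how the 3D encoder features are squashed along z and handed to a 2D decoder (the real model uses a 3D ResNet encoder and an FPN decoder; the tensors and layers below are toy stand-ins):</p>
<pre><code>import torch
import torch.nn as nn

# pretend output of one 3D encoder stage: (batch, channels, depth, height, width)
feat3d = torch.randn(2, 64, 4, 56, 56)

# what the attention pooling above does: softmax weights over depth, weighted sum
attn = torch.softmax(torch.ones(1, 1, 4, 56, 56), dim=2)
feat2d = (attn * feat3d).sum(dim=2)          # -> (batch, channels, height, width)

# any 2D decoder (an FPN in the actual solution) can now consume feat2d
decoder_stub = nn.Conv2d(64, 1, kernel_size=1)
print(decoder_stub(feat2d).shape)            # -> (2, 1, 56, 56)
</code></pre>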
<ul>
<li>loss: bce</li>
</ul>
<h3>Training</h3>
<ul>
<li>amp</li>
<li>adamw</li>
</ul>
<h3>Inference</h3>
<ul>
<li>tta: rot90 * 3 + original</li>
<li>threshold mainly determined by CV; I tried using the 93rd percentile but did not use it, as I was unsure about the ink composition of the private dataset.</li>
</ul>
<h3>General pipeline</h3>
<ul>
<li>pretrain with ink-id InkClassifier3DCNN -> save encoder -> load into segmentation model -> train segmentation model -> inference</li>
</ul>
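<p>A minimal sketch of the encoder hand-off in this pipeline; the two models below are dummy stand-ins that only share an <code>encoder</code> attribute:</p>
<pre><code>import torch
import torch.nn as nn

class WithEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv3d(1, 8, kernel_size=3, padding=1)   # stand-in for the 3D encoder
        self.head = nn.Linear(8, 1)

pretrained_classifier = WithEncoder()   # imagine this was trained InkClassifier3DCNN-style
seg_model = WithEncoder()               # imagine this is the segmentation model

# 1) after pretraining, save only the encoder weights
torch.save(pretrained_classifier.encoder.state_dict(), "encoder.pth")

# 2) load them into the segmentation model's encoder before training it
state = torch.load("encoder.pth", map_location="cpu")
seg_model.encoder.load_state_dict(state)
</code></pre>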
<h3>Result</h3>
<ul>
<li>CV: 0.685 </li>
<li>public LB: 0.69 </li>
<li>private LB: 0.65</li>
</ul>
<h2>Code</h2>
<ul>
<li><a href="https://www.kaggle.com/code/fengqilong/vesuvius-inference" target="_blank">Inference</a></li>
<li><a href="https://github.com/fengql123/kaggle-vesuvius-10th-place-solution/tree/main" target="_blank">Training Code</a></li>
</ul> |
Vesuvius Challenge - Ink Detection | 12th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you, and congratulations on winning first place!</p>
<h1><strong>12th place solution</strong></h1>
<p>Thank you to the organizers for hosting this exciting competition and congratulations to all the winners! This is my first time participating in a competition like this, and below is my solution and approach.</p>
<h2><strong>overview</strong></h2>
<p>we use 3D Resnet(<a href="https://github.com/kenshohara/3D-ResNets-PyTorch/blob/master/models/resnet.py" target="_blank">https://github.com/kenshohara/3D-ResNets-PyTorch/blob/master/models/resnet.py</a>) for our encoder and CNN decoder.</p>
<p>We tried different decoders, such as Unet, Uperhead, CNN, and ResCNN. In the end, the CNN decoder, while being the simplest, achieved the best results.</p>
<h2><strong>model architecture</strong></h2>
<table>
<thead>
<tr>
<th>backbone</th>
<th>decoder</th>
<th>Z-DIM</th>
</tr>
</thead>
<tbody>
<tr>
<td>3DResnet-18</td>
<td>CNN</td>
<td>22</td>
</tr>
<tr>
<td>3DResnet-34</td>
<td>CNN</td>
<td>22</td>
</tr>
</tbody>
</table>
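<p>The "CNN" decoder here is deliberately simple. Below is a hedged sketch of the overall shape of such a model; the real encoder is the 3D ResNet from the linked repository, and everything in this snippet is a toy stand-in:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class Simple3DSegModel(nn.Module):
    def __init__(self, enc_channels=64):
        super().__init__()
        # stand-in for 3DResnet-18/34: any module producing (B, C, Z', H', W')
        self.encoder = nn.Sequential(
            nn.Conv3d(1, enc_channels, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm3d(enc_channels), nn.ReLU(inplace=True),
        )
        # plain CNN decoder: a couple of 2D convs after collapsing the depth axis
        self.decoder = nn.Sequential(
            nn.Conv2d(enc_channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),
        )

    def forward(self, x):                      # x: (B, 1, Z, H, W)
        f = self.encoder(x)                    # (B, C, Z', H', W')
        f = f.max(dim=2).values                # collapse depth
        logits = self.decoder(f)               # (B, 1, H', W')
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

model = Simple3DSegModel()
print(model(torch.randn(2, 1, 22, 224, 224)).shape)   # -> (2, 1, 224, 224)
</code></pre>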
<h2><strong>training</strong></h2>
<p>Z_DIM:21->43<br>
img_size:224<br>
stride_size:56</p>
<h3>augmentation:</h3>
<p>A.Rotate(limit=90, p=0.5),<br>
A.HorizontalFlip(p=0.5),<br>
A.VerticalFlip(p=0.5),<br>
A.RandomBrightnessContrast(p=0.75),<br>
A.ShiftScaleRotate(p=0.75),<br>
A.OneOf([<br>
A.GaussNoise(),<br>
A.GaussianBlur(),<br>
A.MotionBlur(),<br>
], p=0.5),<br>
A.GridDistortion(num_steps=5, distort_limit=0.3, p=0.5),<br>
A.CoarseDropout(max_holes=1, max_width=int(size * 0.3), max_height=int(size * 0.3), <br>
mask_fill_value=0, p=0.5),<br>
A.Normalize(<br>
mean= [0] * in_chans,<br>
std= [1] * in_chans<br>
),<br>
ToTensorV2(transpose_mask=True)</p>
<h3>training details</h3>
<p>We used two different training methods. The first method involved selecting one fragment from 1, 2, and 3 as the validation set and training three models. However, due to poor performance on validation fragment 2, we only used the models trained with validation fragments 1 and 3 for our 3DResnet-18 models. The second method involved randomly selecting image blocks from fragments 1, 2, and 3 as the validation set. We used this method to train 4 models: two based on 3DResnet-18 with strides of 56 and 37, and another two based on 3DResnet-34 with strides of 56 and 37.</p>
<h2>others</h2>
<p>BCELoss<br>
Adamw<br>
GradualWarmupSchedulerV2(<a href="https://www.kaggle.com/code/underwearfitting/single-fold-training-of-resnet200d-lb0-965" target="_blank">https://www.kaggle.com/code/underwearfitting/single-fold-training-of-resnet200d-lb0-965</a>)</p>
<h2><strong>inference</strong></h2>
<p>img_size:224<br>
stride_size:224<br>
th:0.5<br>
TTA: 4 rotates, h/v flips<br>
last model:<br>
(3DResnet-18 + CNN) * 4 + (3DResnet-34 + CNN) * 2<br>
public:0.786897<br>
private:0.652892</p>
<p>training code:<a href="https://github.com/tkz24589/Deep_Learning_work_space/tree/main/convolution/segmentation/vesuvius-challenge-ink-detection-tutorial" target="_blank">https://github.com/tkz24589/Deep_Learning_work_space/tree/main/convolution/segmentation/vesuvius-challenge-ink-detection-tutorial</a><br>
notebook example: <a href="https://www.kaggle.com/code/kongzhangtang/resnet3d-cnn" target="_blank">https://www.kaggle.com/code/kongzhangtang/resnet3d-cnn</a></p> |
Vesuvius Challenge - Ink Detection | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone! Here is a write-up of my best private leaderboard solution (top 13 with 0.641814 score, 0.723345 on public LB). </p>
<p>I am very grateful to organizers and participants for this competition, it was very interesting and challenging. I've learned a lot about segmentation and had a lot of fun! Some of the ideas I have used in the solution came from code / discussion tabs on Kaggle, hopefully I have mentioned them all.</p>
<p>If you have any questions, please do not hesitate to reach me in comments.</p>
<h1>Overview</h1>
<p>Data:</p>
<ul>
<li>Z shift and scale pre-processing based on mean layer intensity curve fitting to fragment 3's mean intensity curve (inspired by <a href="https://www.kaggle.com/code/ajland/eda-a-slice-by-slice-analysis" target="_blank">this</a> notebook)</li>
<li>5 fold training (1, 2a, 2b, 2c, 3 fragments, 2nd fragment is split by scroll mask area: top part + 2 bottom parts)</li>
<li>weighted sampling from fragments according to scroll mask area</li>
<li>patch size is 384, on test overlap is 192 via spline-weighted averaging (inspired by <a href="https://github.com/bnsreenu/python_for_microscopists/blob/master/229_smooth_predictions_by_blending_patches/smooth_tiled_predictions.py" target="_blank">this</a> approach)</li>
<li>24 slices with indices in [20, 44), random crop of 18 of them, each 3 slices window is fed to model and 6 resulting logits aggregated via linear layer</li>
</ul>
<p>Training:</p>
<ul>
<li>model is SMP Unet with 2D pre-trained encoder + linear layer aggregation</li>
<li>random crop inside scroll mask + standard vision augmentations</li>
<li>BCE loss with 0.5 weight of positive examples, selection by F0.5 score on validation fold</li>
<li>64 epoch training, epoch length is ~ number of patches in train set (it is not constant because of random cropping)</li>
<li>LR is 1e-4, no layers freezing / LR decay, schedule is cosine annealing with warmup (1e-1, 1, 1e-2 factors, 10% warmup), optimizer is AdamW</li>
</ul>
<p>Inference:</p>
<ul>
<li>TTA: cartesian product of 4 rotations (0, 90, 180, 270) and 3 flips (no flip, horizontal, vertical) -> 12 predictions per volume (brought this idea from <a href="https://www.kaggle.com/code/yoyobar/2-5d-segmentaion-model-with-rotate-tta" target="_blank">here</a>)</li>
<li>TTA probabilities are averaged, writen to image file, then 5 models images averaged</li>
</ul>
<p>Environment & tools:</p>
<ul>
<li>docker, pytorch lightning CLI, wandb</li>
<li>1 x 3090, 1 x 3080Ti mobile</li>
</ul>
<h1>Some details</h1>
<h2>Data</h2>
<p>Inspired by <a href="https://www.kaggle.com/code/ajland/eda-a-slice-by-slice-analysis" target="_blank">this</a> analysis, I plot the mean intensity of each layer of each fragment for full image, only for ink and only for backgound.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F011cb02384ff04722ab4cdd093d4a888%2Fintencity_curve.png?generation=1686855075215824&alt=media" alt="Mean intensity of each layer of each fragment"></p>
<p>Here it could be observed that fragments 1 and 3 are more or less similar in terms of both intencity values and minimum and maximum positions, while fragment 2 is different. </p>
<p>My physical intuition behind this is the following: the papyrus does not seem to be aligned in depth inside the 3D volume; moreover, the papyrus thickness seems to vary between fragments 1, 3 and fragment 2. </p>
<p>So, I've decided to align and stretch / compress the papyrus in the z dimension with a shift and scale operation:</p>
<ol>
<li>Split the volume on overlapping patches in H and W dimentions</li>
<li>For each patch </li>
</ol>
<ul>
<li>calculate mean intensity of each layer</li>
<li>fit <code>z_shift</code> parameter to minimize following loss (<code>y_target</code> here is the intensity curve of full fragment 3)</li>
</ul>
<pre><code>z_shifted = z + z_shift
# the two keyword arguments below were stripped in extraction;
# bounds_error=False and fill_value="extrapolate" are assumed here
f = interpolate.interp1d(z_shifted, y, bounds_error=False, fill_value="extrapolate")
return f(z_target) - y_target
</code></pre>
<ol>
<li>For the full volume, fit a single <code>z_scale</code> (assuming papyrus thickness is the same within a fragment) in a similar manner.</li>
<li>Linearly upsample resulting <code>z_shift</code> and <code>z_scale</code> maps to full volume size and save them on disk.</li>
<li>Apply <code>z_shift</code> and <code>z_scale</code> to the full volume via <code>scipy.ndimage.geometric_transform</code>, with the transform written as a C module for speed.</li>
<li>For training save it on disk, for inference apply on the fly.</li>
</ol>
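<p>A hedged sketch of the per-patch <code>z_shift</code> fit described above, using <code>scipy.optimize.least_squares</code> (the exact optimizer and interpolation settings are not shown in this write-up and are assumed here):</p>
<pre><code>import numpy as np
from scipy import interpolate, optimize

def fit_z_shift(patch_volume, y_target, z_target):
    """patch_volume: (Z, H, W) crop; y_target: reference mean-intensity curve (fragment 3)."""
    z = np.arange(patch_volume.shape[0], dtype=np.float64)
    y = patch_volume.reshape(patch_volume.shape[0], -1).mean(axis=1)

    def residuals(params):
        z_shifted = z + params[0]
        f = interpolate.interp1d(z_shifted, y, bounds_error=False, fill_value="extrapolate")
        return f(z_target) - y_target

    return optimize.least_squares(residuals, x0=[0.0]).x[0]

# toy usage with synthetic data (the real curves come from the fragment volumes)
rng = np.random.default_rng(0)
vol = rng.random((65, 64, 64))
target_curve = vol.reshape(65, -1).mean(axis=1)
print(fit_z_shift(vol, target_curve, z_target=np.arange(65)))
</code></pre>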
<p>Resulting <code>z_shift</code> maps and <code>z_scale</code> values are following:</p>
<table>
<thead>
<tr>
<th>Fragment 1</th>
<th>Fragment 2</th>
<th>Fragment 3</th>
</tr>
</thead>
<tbody>
<tr>
<td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F387734e4ccdf846560e4f61421ffb52c%2F1_z_shift.png?generation=1686854266142975&alt=media" alt="Z shift map and its histogram"></td>
<td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F4a66d91e17b7a2332b5574defb0d1b37%2F2_z_shift.png?generation=1686854298329163&alt=media" alt="Z scale map and its histogram"></td>
<td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F3510608c738e619389b036e2d1d871ef%2F3_z_shift.png?generation=1686854366312885&alt=media" alt="Z scale map and its histogram"></td>
</tr>
</tbody>
</table>
<p>Z scale is ~ 1.0 for fragments 1 and 3 and ~ 0.6 for fragment 2, which could indicate that fragment 2 is roughly twice as thick as the others.</p>
<p>Such an operation should be beneficial for 2D approaches. It seems to improve CV by ~0.03-0.05 F05, but I did not run a full comparison due to time constraints.</p>
<p>Also, I've tried to fit / calculate normalization for each fragment separately to completely match the intensity curve, but it did not work well.</p>
<h2>Model</h2>
<p>A pre-trained maxvit encoder + SMP 2D unet with aggregation has shown the best CV score, so <code>maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k</code> is the one used in the best solution. </p>
<p>2D unet with aggregation works as follows: the unet model is applied to each slice of the input with size = step = 3 along the z dimension (6 slices in total), then the 6 resulting logits are aggregated via a linear layer, as sketched below.</p>
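<p>A minimal sketch of that slice-wise application and linear aggregation, assuming <code>segmentation_models_pytorch</code> (the resnet18 encoder below is only a light stand-in for the maxvit encoder actually used):</p>
<pre><code>import torch
import torch.nn as nn
import segmentation_models_pytorch as smp

class Unet2dWithAggregation(nn.Module):
    def __init__(self, n_groups=6, group_size=3):
        super().__init__()
        self.n_groups, self.group_size = n_groups, group_size
        self.unet = smp.Unet("resnet18", encoder_weights=None, in_channels=group_size, classes=1)
        self.aggregate = nn.Linear(n_groups, 1)      # mixes the per-group logits

    def forward(self, x):                            # x: (B, n_groups * group_size, H, W)
        b, _, h, w = x.shape
        x = x.reshape(b * self.n_groups, self.group_size, h, w)
        logits = self.unet(x)                        # (B * n_groups, 1, H, W)
        logits = logits.reshape(b, self.n_groups, h, w).permute(0, 2, 3, 1)
        return self.aggregate(logits).permute(0, 3, 1, 2)   # (B, 1, H, W)

model = Unet2dWithAggregation()
print(model(torch.randn(2, 18, 384, 384)).shape)
</code></pre>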
<p>Using pre-trained models yields far better results than random initialization. Unfortunately I did not come up with a better architecture that could handle 3D inputs and 2D outputs while still allowing the use of pre-trained weights.</p>
<p>Best CV scores of this model and the corresponding predictions are the following:</p>
<table>
<thead>
<tr>
<th>Fragment</th>
<th>F05</th>
<th>Prediction</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0.6544</td>
<td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2Fc9e6f42f6cf07b913a23b6fd989076d0%2F1_3jzUWpdZJR.png?generation=1686854422630598&alt=media" alt="Fragment 1 probabilities"></td>
</tr>
<tr>
<td>2a</td>
<td>0.6643</td>
<td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2Feeef4afaf95c79def86b96014d5a7b5b%2F2a_jP08265cbJ.png?generation=1686854447976196&alt=media" alt="Fragment 2a probabilities"></td>
</tr>
<tr>
<td>2b</td>
<td>0.7489</td>
<td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2Fe70babdbeb33a34a0976195453bf995d%2F2b_whqg8ITAI7.png?generation=1686854469983013&alt=media" alt="Fragment 2b probabilities"></td>
</tr>
<tr>
<td>2c</td>
<td>0.6583</td>
<td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2Fff2531343b00439ebdb81f310894b61b%2F2c_Aomw3d44sk.png?generation=1686854491195589&alt=media" alt="Fragment 2c probabilities"></td>
</tr>
<tr>
<td>3</td>
<td>0.7027</td>
<td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2190976%2F626278b3fa70ceb963b517c5069f1d3f%2F3_AM8P9eim4r.png?generation=1686854513349728&alt=media" alt="Fragment 3 probabilities"></td>
</tr>
</tbody>
</table>
<p><strong>Tried, but did not work well</strong></p>
<p>Multiple backbone models:</p>
<ul>
<li>convnext v2</li>
<li>swin transformer v2</li>
<li>caformer</li>
<li>eva02</li>
<li>maxvit </li>
</ul>
<p>And multiple full model approaches: full 3D with aggregation on the last layer, 2.5D, 2D with aggregation, custom 3D models with pre-trained weights conversion from 2D to 3D.</p>
<h2>Transforms</h2>
<p>Following augmentations are used for training</p>
<pre><code># NOTE: keyword names were stripped from this snippet during extraction; the
# standard albumentations names are restored below, plausible names are used for
# the custom volume transforms, and a few fully-stripped arguments are omitted.
train_transform = A.Compose(
    [
        RandomCropVolumeInside2dMask(
            size=self.hparams.img_size,        # plausible names; originals were stripped
            size_z=self.hparams.img_size_z,
            scale=(0.5, 2.0),
            ratio=(0.9, 1.1),
            scale_z=(1 / self.hparams.z_scale_limit, self.hparams.z_scale_limit),
            # two more arguments were lost in extraction
        ),
        A.Rotate(
            p=0.5,
            limit=rotate_limit_degrees_xy,
            # one more argument was lost in extraction
        ),
        ResizeVolume(
            height=self.hparams.img_size,      # plausible names; originals were stripped
            width=self.hparams.img_size,
            depth=self.hparams.img_size_z,
            # one more argument was lost in extraction
        ),
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.RandomRotate90(p=0.5),
        A.RandomBrightnessContrast(p=0.5, brightness_limit=0.1, contrast_limit=0.1),
        A.OneOf(
            [
                A.GaussNoise(var_limit=[10, 50]),
                A.GaussianBlur(),
                A.MotionBlur(),
            ],
            p=0.4
        ),
        A.GridDistortion(num_steps=5, distort_limit=0.3, p=0.5),
        A.CoarseDropout(
            max_holes=1,
            max_width=int(self.hparams.img_size * 0.3),
            max_height=int(self.hparams.img_size * 0.3),
            mask_fill_value=0, p=0.5
        ),
        A.Normalize(
            max_pixel_value=MAX_PIXEL_VALUE,
            mean=self.train_volume_mean,
            std=self.train_volume_std,
            # one more argument was lost in extraction
        ),
        ToTensorV2(),
        ToCHWD(always_apply=True),   # plausible; the original argument was stripped
    ],
)
</code></pre>
<p>Here, <code>RandomCropVolumeInside2dMask</code> crops a random 3D patch from the volume so that its center's (x, y) lies inside the mask, <code>ResizeVolume</code> is a trilinear 3D resize and <code>ToCHWD</code> simply permutes dimensions. <code>img_size</code> = 384, <code>img_size_z</code> = 18, <code>z_scale_limit</code> = 1.33, <code>rotate_limit_degrees_xy</code> = 45, <code>train_volume_mean</code> and <code>train_volume_std</code> are averages of the imagenet stats.</p>
<p>For inference, following transforms are used</p>
<pre><code># NOTE: keyword names were stripped during extraction; plausible names are used
# for the custom volume transforms below, and fully-stripped arguments are omitted.
test_transform = A.Compose(
    [
        CenterCropVolume(
            height=None,
            width=None,
            depth=math.ceil(self.hparams.img_size_z * self.hparams.z_scale_limit),
            # two more arguments were lost in extraction
            p=1.0,
        ),
        ResizeVolume(
            height=self.hparams.img_size,
            width=self.hparams.img_size,
            depth=self.hparams.img_size_z,
            # one more argument was lost in extraction
        ),
        A.Normalize(
            max_pixel_value=MAX_PIXEL_VALUE,
            mean=self.train_volume_mean,
            std=self.train_volume_std,
            # one more argument was lost in extraction
        ),
        ToTensorV2(),
        ToCHWD(always_apply=True),   # plausible; the original argument was stripped
    ],
)
</code></pre>
<p>Here, <code>CenterCropVolume</code> is center crop of 3D volume.</p>
<p>Mixing approaches (cutmix, mixup, custom stuff like copy-pasting the positive-class voxels) were also tried but showed contradictory results, so they were not included.</p>
<p>Cartesian product of <code>no flip / H flip / V flip</code> and <code>no 90 rotation / 90 rotation / 180 rotation / 270 rotation</code> is used for TTA yielding total 12 predictions.</p>
<p><strong>Update</strong>: project source code for training could be found <a href="https://github.com/mkotyushev/scrolls" target="_blank">here</a></p> |
Vesuvius Challenge - Ink Detection | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It was a very nervy ending to this competition, we held first for a long time but I am always scared of a shake-up. We tried to play it very defensively to ensure that we weren't doing anything to over-optimize for the leaderboard. Still, there is always a certain level of uncertainty when the public leaderboard is so small and your local validation is not extremely stable.</p>
<p><strong>tl;dr</strong><br>
At a high level, I would attribute most of our success to:</p>
<ul>
<li>Larger crops </li>
<li>Strong depth-invariant models</li>
<li>Averaging several models to give us better calibration</li>
<li>Training against all available data after validating against fragment 1 rigorously</li>
</ul>
<p><strong>Data prep</strong><br>
We initially started with the <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training" target="_blank">2.5d starter code</a> and evolved it over time. What we ended up settling on was just taking the middle 16 layers, but we tried plenty of other things that didn't work. I will save that for another post. Initially, the code was configured to train against all of the crops of the image, but we filtered this very simply based on if a crop was blank. This greatly reduced our training time because many of the crops had no data in them at all. </p>
<p>Early on there was very little progress being made and then we ran simple ablations to see how performance was impacted by the crop size and found that there was good scaling potential there. Seemed like one of the clearest patterns we saw early on. 128 got beat by 512 which got beaten by 1024. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Fadddcf67a8c6ed4f68716e050215eb2d%2FWB%20Chart%206_15_2023%203_13_17%20PM.png?generation=1686867276107142&alt=media" alt=""></p>
<p>It seemed that the x,y context was very important here because the crop could look at a whole letter or even a couple of letters at a time and draw them out more completely. Smaller crops yielded much patchier-looking outputs. One nice thing about this approach was it actually didn't increase training time at all: if you ran it with 1024x1024 crops you had to train against far fewer crops than if you did 128x128 crops, so the epoch time and wall-clock convergence time were virtually identical, unlike some other competitions where increasing the resize makes the computation take longer. </p>
<p><strong>Models</strong><br>
We started with the 2.5D approach but it became apparent that it wasn't optimal because the model was learning which layers had ink in them and we knew that this varied between fragments. We wanted a solution that was depth-invariant, that could detect ink in any layer. One approach would be training a simple 2d model on each slice of the 3d volume, but with the way the data was labeled, only 2d to start with, we would be giving bad signal to the model if we told it ink was in layers that it actually wasnt and vice versa. We had many discussions about how we handle this, 1D convolutions, max pooling, 3d convolutions with size (8, 1, 1) so it was only truly looking across the depth patterns. What we found to be the most performant was using strong 3d models that would output a new 3d volume with many channels and then we could flatten them along the depth axis with a max. </p>
<p>So for example, our first approach in this vein was a simple 3dcnn. input shape was (batch_size, 1, 16, 1024, 1024). We applied 4 layers of 3d convolutions on top of this with progressively more filters until finally, we had an output volume of (batch_size, 16, 16, 1024, 1024). At this point we just took the max across the z-dimension to squish it down to (batch_size, 16, 1024, 1024) so our 16 depth dimensions were now replaced with 16 feature dimensions. This alone was a decent approach but then passing this through a strong 2d segmentation model made it much better. We heavily relied on segformer for this, as others have mentioned b3 backbone worked great, but we also found that directionally the even bigger models performed better on the leaderboard even if they didnt perform measurably better on local validation. </p>
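<p>A hedged sketch of this two-stage idea: a few 3D convolutions that keep the spatial size, a max over depth, then a 2D segmenter. The small untrained SegFormer config below is only a stand-in for the b3/b5 checkpoints actually used; it returns logits at 1/4 of the input resolution.</p>
<pre><code>import torch
import torch.nn as nn
from transformers import SegformerConfig, SegformerForSemanticSegmentation

class Ink3dTo2d(nn.Module):
    def __init__(self, mid_channels=16):
        super().__init__()
        self.cnn3d = nn.Sequential(
            nn.Conv3d(1, mid_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(mid_channels, mid_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.segformer = SegformerForSemanticSegmentation(
            SegformerConfig(num_channels=mid_channels, num_labels=1)
        )

    def forward(self, x):                      # x: (B, 1, 16, H, W)
        f = self.cnn3d(x)                      # (B, C, 16, H, W)
        f = f.max(dim=2).values                # squash depth -> (B, C, H, W)
        return self.segformer(pixel_values=f).logits   # (B, 1, H/4, W/4)

model = Ink3dTo2d()
print(model(torch.randn(1, 1, 16, 128, 128)).shape)
</code></pre>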
<p>We iterated on this design because it seemed to satisfy the qualities we wanted, a strong 2d segmenter applied to a 3d volume that was invariant to depth. We tried different first-stage 3d methods and found that even stronger 3d models yielded better results. Evolving to 3d unet's and then eventually 3d unetr. Sometimes the results were unclear if they were better but we found a clear trend that the on-paper better models also did better on the leaderboard. Our best individual model was a UNETR first stage that passed 32 channels into a b5 segformer which scored 0.82 on the public leaderboard and 0.67 on the private. We applied a small amount of dropout on the channels in between the 3d and 2d stage. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Fb59a1b2b524ffdbea3882fd6883e87ad%2FScreen%20Shot%202023-06-15%20at%204.12.57%20PM.png?generation=1686870810392089&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Fd3566ee53b477abe6e4026db29d35531%2FScreen%20Shot%202023-06-15%20at%204.03.32%20PM.png?generation=1686870239615259&alt=media" alt=""></p>
<p>One thing I always wondered about was if the strength of the segformer wasn't actually that it was the best model but that it was making predictions at a lower resolution. We pass it the 1024x1024 and it returns a 256x256 segmentation map. We simply upscaled it with a very simple conv2d transpose. I believe the second place solution verifies this: the lower resolution actually helps because it is doing much coarser classification, which makes it easier than trying to more precisely get every pixel. </p>
<p>Our final best solution was 9 different models</p>
<table>
<thead>
<tr>
<th>1st stage</th>
<th>2nd stage</th>
<th>Resolution</th>
<th>Public Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>3d unet(16 channels)</td>
<td>segformer b3</td>
<td>1024</td>
<td>.78</td>
</tr>
<tr>
<td>3d unet(16 channels)</td>
<td>segformer b3</td>
<td>512</td>
<td>.77</td>
</tr>
<tr>
<td>3d unet(16 channels with SWA)</td>
<td>segformer b3</td>
<td>512</td>
<td>.78</td>
</tr>
<tr>
<td>3d cnn(32 channels)</td>
<td>segformer b3</td>
<td>1024</td>
<td>.78</td>
</tr>
<tr>
<td>3d cnn(32 channels)</td>
<td>segformer b5</td>
<td>1024</td>
<td>.77</td>
</tr>
<tr>
<td>3d cnn(64 channels)</td>
<td>segformer b3</td>
<td>1024</td>
<td>.78</td>
</tr>
<tr>
<td>3d unet(32 channels)</td>
<td>segformer b5</td>
<td>1024</td>
<td>.79</td>
</tr>
<tr>
<td>3d unetr(32 channels)</td>
<td>segformer b5</td>
<td>512</td>
<td>.82</td>
</tr>
<tr>
<td>3d unetr multiclass(32 channels)</td>
<td>segformer b5</td>
<td>512</td>
<td>?</td>
</tr>
</tbody>
</table>
<p>One thing we tried late-on was multi-class output. Instead of binary we tried to predict nothing-mask-ink. This ended up yielding a much cleaner looking output actually, in most of our models we had some amount of noise anywhere there was papyrus and this seemed to quiet that down a lot. We did not get to explore it thoroughly enough to confirm if this really worked or not though. </p>
<p><strong>Training procedure</strong><br>
Our training procedure was fairly standard: we used the existing AdamW optimizer, dice+bce loss and hyperparameters, fixing some small bugs in setting the min-lr and continuing to use the gradual warmup learning rate scheduling. We added on stochastic weight averaging to get wider optima instead of needing to pick a specific checkpoint, because we found these pretty inconsistent. We found that validating against fragment 1 wasn't perfect but was at least directionally useful. Once we found something that worked on local validation against fold 1 we would submit that to the leaderboard to double-check its validity, and we would continue training with all folds for several epochs after. We would submit and evaluate the new checkpoint trained against all folds and it was typically about 0.04 better. Sometimes we would train against all fragments from the start instead of training on fragments 2, 3 and later adding in 1, but we did not thoroughly evaluate this. </p>
<p><strong>Augmentations</strong><br>
We tried many permutations of augmentation, mostly from albumentations, some custom, and some from 3d packages, but ultimately couldn't find strong alpha there with anything fancy. Our best model was trained with:</p>
<ul>
<li>50% horizontal and vertical flips</li>
<li>75% 90-degree rotations</li>
<li>50% brightness contrast</li>
<li>25% 1-2 channel dropout(in this case our channels was actually depth)</li>
<li>10% shift scale rotate</li>
<li>10% noise and blur</li>
<li>10% coarse dropout</li>
<li>10% grid distortion</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2F44380bdf9ac40366cfd12f6db39f1bef%2FScreen%20Shot%202023-06-15%20at%204.29.24%20PM.png?generation=1686871786148868&alt=media" alt=""></p>
<p>Overall seemed like the rotations and flips were crucial and everything else was a non-factor. Rotations seemed important but we did not ascertain that the test set itself was actually rotated, we just knew it was important so tried to make the model as invariant as possible. </p>
<p><strong>Ensembling</strong><br>
We ended up with a big pile of model checkpoints and had to whittle them down to what we believed to be the most performant trading off runtime vs throughput. We tried many different combinations, heavier tta with all rotations and flips, more models, smaller strides. The winner seemed to be 4x rotation TTA with 1/4 crop strided windows and as many good models as we could fit in. Halving the stride helped an extra .01, but that was trumped by being able to add way more models, flips didnt seem to add anything at all. It's possible with just the corrected orientation instead of TTA we could have done better but a model invariant to rotations seemed just as strong.</p>
<p>One thing we struggled with a lot was how to best combine predictions. For each model we predicted on the same pixel 4x because of our strided approach and 4x of that because of TTA. With many models this actually gave us a ton of options for aggregating the predictions per pixel. We tinkered with a lot of stuff locally but what seemed to work best was averaging per pixel for each model all ~16x predictions and then applying the sigmoid to that averaged signal and then average those probabilities together. This posed a tough memory constraint on us on kaggles system so we had to be a bit efficient with putting things away and accumulating them as densely as possible instead of just creating large arrays and averaging at the end. </p>
<p><strong>Thresholding</strong><br>
One thing we went back and forth on for a long time was the calibration of predictions. As discussed in other posts, deciding a threshold was critical to getting good results and choosing the wrong threshold could give you very misleading signal on your models performance. During training and evaluation we would constantly be monitoring the AUC, Precision, Recall and f0.5 at a sweep of thresholds. Some models were well-calibrated with an optimal threshold at 0.5, but many were not. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Ffee3f2b5c722068974d291af29fc7049%2FScreen%20Shot%202023-06-15%20at%204.58.40%20PM.png?generation=1686873556582356&alt=media" alt=""></p>
<p>Because of this, we knew that even a great model could show up as terrible on the leaderboard if its threshold wasn't set correctly. We considered using the percentile method that some people used but did not even make a submission for it because it seemed too risky if the distribution of ink was not what we expected. Ultimately what we relied on was that averaged out our predictions they would end up calibrated. We found this to be true on our local validation and held true for the leaderboard as well. Individual models would have wide optimal threshold ranges but after averaging many predictions from many models it was almost universally centered on 0.5. We actually took our last submission to be brave and try 0.55 but it did worse on the public leaderboard and finished with only 30 minutes left to spare so we didnt pick it. It did end up performing slightly higher on the private leaderboard though. I think we actually left a lot on the table in terms of ensembling and optimal thresholding. </p>
<p><strong>Postprocessing</strong><br>
One little magic function we borrowed from the old cloud segmentation competition was used to clean up our predictions. After everything was done and already binarized we would use cv2.connectedcomponents to find the masses and remove them if they were beyond a certain size. This would clean up anything that was too small, just little speckles and noise. We found it locally optimal to set our threshold a little lower and then clean up the extras, anything under 25000 pixels, but we only cleaned up things under 10k on our submissions and didnt try more aggressive cleaning. </p>
<p>Before</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2F25116a2dfc769b5784abb12704df8401%2F__results___27_30.png?generation=1686874373183834&alt=media" alt=""></p>
<p>After<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2F9a6ac60974f49a6ecfbb80f47fcccbb0%2F__results___27_31.png?generation=1686874392858212&alt=media" alt=""></p>
<p>Hoping to aggregate a list of all the things we tried that didn't pan out as well but will have to add those later. One ironic thing with all of this is that I was visiting nvidia for work while the competition was closing so I got to see the news at their hq. Being there and winning a competition with one of their models dawned on me part way through the day. </p>
<p>One thing we tried that didn't work was, instead of filtering the crops based on whether they were empty or not, filtering them based on whether they had any ink or not. This massively reduced the number of crops and looked pretty good on paper, but it taught the model that every single crop must have some ink and it didn't learn to handle blank papyrus well. It became overexcited and greatly over-predicted. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2F3e6b629a11dc8824e2d9b9e54a818ade%2FScreen%20Shot%202023-06-15%20at%205.50.22%20PM.png?generation=1686876640893243&alt=media" alt=""></p>
<p>Maybe this would have worked with bigger crop sizes. This made it converge extremely fast but you can see the result isn't that great, especially with regard to precision. </p>
<p>Generally I felt like segformer converged more quickly than other stage-2 models we tried, like unet and upernet with various backbones. Not sure exactly why you saw slow convergence, but I never tried it on its own. </p>
<p>I mostly just looked at the general trend of the model. If I measured performance with the SWA model, the movement was very slow and the curves were smooth, but we saw a similar effect just using the running average of the charts on wandb.</p>
<p>It was all trained end to end. My belief is training end to end and optimizing directly for your goal is almost always universally better. Sometimes you have to do things in stages just because of memory/compute constraints but in general I will always prefer end to end</p>
<p>That was with the center 16 I believe. I also did a pretty thorough analysis of the info of ink in each layer of each fragment. I trained a single-layer model for each depth and then applied it to the held out fragment. You can see for each series there is a rise and fall of information density and at the ends there is nothing left. Just base guessing. </p>
<p>I think we left a little performance on the table by not honing in better on which layers had ink. We tried some approaches to move the depth per fragment or per crop and couldn't find anything that did consistently better than just taking the middle 16. We could've maybe gone with the middle 32 and gotten even more signal. Early on I was seeing that 32 depth beat 16, but for double the vram usage it didn't seem worth it. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Faabd32462a191a9b9e092ee7be888983%2FScreen%20Shot%202023-06-16%20at%201.14.27%20PM.png?generation=1686946620916433&alt=media" alt=""></p>
<p>I tried varying placements and windows of depth but the center 16 seemed sufficient. Couldn't find anything better. I tried to analytically find the best for each fragment individually, taking the peak of average values per layer as the center point. I also tried doing that per crop, so for every crop I measured the average intensity per layer and then used the one with the highest value as the center and then went 8 up and down from there. </p>
<p>It was extremely surprising to find that it did pretty comparably even though many of the crops had very different values than the middle 16; for some crops it would pick, say, the 15th layer as the center instead of the 32nd. Didn't pursue it too far. Maybe there was some better method for finding the regions most likely to have ink and then honing in on them, or broadcasting our 2d label into 3d in a smart analytical way and then training against our generated 3d labels, but we weren't able to come up with something that worked.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2F4ffa67c0437055c8f3256673f1a20b6c%2FWB%20Chart%206_16_2023%201_42_18%20PM.png?generation=1686948170284985&alt=media" alt=""></p>
<p>this was trying to find the max response per layer per tile first. Maybe shouldve explored it more, but didnt immediately give any value. This was with old iterations of our model. Maybe worth revisiting. </p>
<p>One of the things we tried early on was backing into the 3d labels by having our 3d unet output the 3d volume and then just take the max of the depth axis while training and then we could inspect its 3d volume without the max and see where it was finding the ink from. We could clearly see that it was picking things up from different layers</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2Fe772fcafe4f29d22d7068ff6b0a57886%2F2-0_MdgQnL3V.gif?generation=1686949374516762&alt=media" alt=""></p>
<p>here is an example of our 3d models output. Sweeping through the 3d volume that it output before we crushed it with a max across the depth. You can see that it is seeing different stuff on different layers in different x,y regions. This could happen from the ink soaking in to different depths or it could be from the segmentation model used to flatten the fragments centering slightly differently. </p>
<p>I actually tried a channel shuffled trial. It failed to converge at all. I specifically wanted to see how a sort of bag of depths approach worked and it completely failed with our models and a couple variations on them. I'll have to see if I can find the training graph for that one. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1035002%2F642627376e3d046484bec6b82112a907%2FWB%20Chart%206_16_2023%2012_30_02%20PM.png?generation=1686943850364553&alt=media" alt=""></p>
<p>The one way down at the bottom is what happened when we shuffled the channels on the way into the model. It didn't seem to converge. We would've been better off just always grabbing the center channel rather than having them shuffled. </p>
<p>I think there is some potential for strong pseudo labeling still. First pass do the 3d volume to 2d label and then back out the 3d label from the model and train against that. Maybe the extra 3d localization helps the model be more precise and understand where things are coming from. Or maybe you just teach the second model to mimic the first. Hard to say without experiment. </p>
<p>It comes from the transformers package</p> |
Vesuvius Challenge - Ink Detection | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, let me explain in a little more detail.<br>
The input of the 2D encoder is (bs * groups, ch, h, w) and the output after going through the neck is (bs * groups, 512, h//32, w//32). Then permute it to (bs, ch, groups, h//32, w//32) and feed it to 3D CNN.</p>
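<p>A shape-only sketch of that reshuffle (the conv below is just a stand-in for the 2D backbone plus the pointwise neck):</p>
<pre><code>import torch
import torch.nn as nn

bs, groups, ch, h, w = 2, 9, 3, 192, 192
encoder_neck = nn.Conv2d(ch, 512, kernel_size=32, stride=32)   # stand-in: 512 feats at 1/32 resolution

x = torch.randn(bs, groups, ch, h, w)
x = x.reshape(bs * groups, ch, h, w)          # fold the groups into the batch for the 2D encoder
feats = encoder_neck(x)                       # (bs*groups, 512, h//32, w//32)
feats = feats.reshape(bs, groups, 512, h // 32, w // 32).permute(0, 2, 1, 3, 4)
print(feats.shape)                            # (bs, 512, groups, h//32, w//32) -> fed to the 3D CNN
</code></pre>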
<p>Before starting the competition, when I read the host's <a href="https://github.com/educelab/ink-id/blob/develop/inkid/scripts/train_and_predict.py#L134" target="_blank">inkid repo</a>, I found that they didn't use upsampling.<br>
So when I actually calculated the maximum score that could be achieved by downsampling the label and undoing it, I found that I could get 0.98, so I knew that 1/32 resolution was enough. <br>
Also, by lowering the resolution, the difficulty of the task is also lowered, and we believe that we are able to reduce the burden on the machine learning model.</p>
<p>Looking at our results, it seems that the optimal thresholds for fragment_a and fragment_b are not that different.<br>
However, we were unable to find any evidence to support it during the competition.</p>
<h2>tattaka & mipypf part</h2>
<h3>Summary</h3>
<ul>
<li>2.5D and 3D backbone and decoder without upsampling<ul>
<li>1/32 Resolution is sufficient for this task</li>
<li>By focusing resources on encoder, more layers can be used for training</li></ul></li>
<li>Strong regularization</li>
</ul>
<h3>Data preprocessing</h3>
<ul>
<li>Group K-fold is used for the validation, but is divided into 3 parts because of the large ink_id=2</li>
<li>Input resolution is 256x256 for 2.5D model and 192x192 for 3D model<ul>
<li>Cut out and store in npy in 32x32 (64x64 when inferring) to speed up data loading</li></ul></li>
<li>The label is downsampled to 1/32 resolution using bilinear interpolation</li>
<li>Input images were normalized for each sample </li>
</ul>
<pre><code># NOTE: the dim tuple, keepdim flag and epsilon were stripped in extraction;
# per-sample normalization over the non-batch dims and a small epsilon are assumed.
mean = img.mean(dim=(1, 2, 3), keepdim=True)
std = img.std(dim=(1, 2, 3), keepdim=True) + 1e-6
img = (img - mean) / std
</code></pre>
<h3>Model</h3>
<ul>
<li>images(batch_size x channel x group x height x width) -> 2dcnn backbone -> pointwise conv2d neck -> 3dcnn(ResBlockCSN like, 3 or 6 blocks) -> avg + max pooling(z axis) -> pointwise conv2d<ul>
<li>2dcnn backbone: <ul>
<li>resnetrs50</li>
<li>convnext_tiny</li>
<li>swinv2_tiny</li>
<li>resnext50</li></ul></li>
<li>The channel/group combinations used were 5x7 and 3x9. In other words, the middle 35 or 27 layers of the 65 layers are used.</li>
<li>Referring to <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>'s <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/392402#2170010" target="_blank">model architecture</a></li></ul></li>
<li>images(batch_size x 1 x layers x height x width) -> 3dcnn backbone -> max pooling(z axis) -> pointwise conv2d<ul>
<li>3DCNN backbone: <ul>
<li>resnet50-irCSN(layers: 32)</li>
<li>resnet152-irCSN(layers: 24)</li></ul></li></ul></li>
<li>loss: bce + global fbeta loss(calculate fbeta score in batch)</li>
</ul>
<h3>Training</h3>
<ul>
<li>amp</li>
<li>EMA(decay=0.99)</li>
<li>label smoothing(smooth = 0.1)</li>
<li>drop_path_rate=0.2</li>
<li>cutmix + mixup + manifold mixup</li>
<li>heavy augmentation<ul>
<li>cutout<ul>
<li>modified to match the output resolution (1/32)</li></ul></li>
<li>channel shuffle<ul>
<li>Shuffle within the group after splitting</li></ul></li>
<li>Random +-2 shift in z-direction of volume with 0.5 probability </li>
<li>other transforms</li></ul></li>
</ul>
<pre><code># NOTE: keyword names were stripped in extraction; restored below from the
# standard albumentations signatures.
albu.Compose(
    [
        albu.Flip(p=0.5),
        albu.RandomRotate90(p=0.9),
        albu.ShiftScaleRotate(
            shift_limit=0.0625,
            scale_limit=0.2,
            rotate_limit=15,
            p=0.9,
        ),
        albu.OneOf(
            [
                albu.ElasticTransform(p=0.3),
                albu.GaussianBlur(p=0.3),
                albu.GaussNoise(p=0.3),
                albu.OpticalDistortion(p=0.3),
                albu.GridDistortion(p=0.1),
                albu.PiecewiseAffine(p=0.3),  # IAAPiecewiseAffine
            ],
            p=0.9,
        ),
        albu.RandomBrightnessContrast(
            brightness_limit=0.3, contrast_limit=0.3, p=0.3
        ),
        ToTensorV2(),
    ]
)
</code></pre>
<h3>Inference</h3>
<ul>
<li>fp16 inference</li>
<li>stride=32</li>
<li>Use weights learned on val_ink_id=(1, 2a)</li>
<li>TTA<ul>
<li>h/v flip</li>
<li>Switch tta each time stride </li></ul></li>
<li><a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> proposed <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">percentile threshold</a><ul>
<li>Always predicts the same amount of positives, so it is independent of model performance and depends on the pos/neg ratio of the GT</li>
<li>Calculated for all pixels except the area outside the mask</li>
<li>0.9 and 0.93 were used</li>
<li>We expected 0.90 to be a better score in private, but 0.93 was better in the end!</li></ul></li>
</ul>
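<p>For clarity, a minimal sketch of the percentile threshold described above (only pixels inside the papyrus mask are considered):</p>
<pre><code>import numpy as np

def percentile_threshold(prob_map, mask, percentile=0.93):
    """Binarize so that the top (1 - percentile) fraction of in-mask pixels is positive."""
    thr = np.quantile(prob_map[mask > 0], percentile)
    return ((prob_map >= thr) & (mask > 0)).astype(np.uint8)

probs = np.random.rand(256, 256)
mask = np.ones((256, 256), dtype=np.uint8)
pred = percentile_threshold(probs, mask, percentile=0.93)
print(pred.mean())   # ~0.07 of the pixels are positive, independent of model calibration
</code></pre>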
<h2>yukke42 part</h2>
<h3>Summary</h3>
<ul>
<li>3D encoder and 2D/1D encoder<ul>
<li>1/2 or 1/4 resolution prediction</li>
<li>very simple decoder<br>
</li></ul></li>
</ul>
<h3>Data preprocessing</h3>
<ul>
<li>split the 2nd fragment into two images vertically: 4 folds<br>
</li>
</ul>
<h3>Model</h3>
<ul>
<li>3D CNN encoder and 2D Encoder<ul>
<li>encoder: based on <a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a> 's Notebook: <a href="https://www.kaggle.com/code/samfc10/vesuvius-challenge-3d-resnet-training" target="_blank">Vesuvius Challenge - 3D ResNet Training</a><ul>
<li>remove maxpooling after the 1st CNN</li>
<li>use attention before reduce D-dim</li>
<li>use resnet18 and resnet34</li></ul></li>
<li>decoder<ul>
<li>a single 2D CNN layer</li>
<li>upsample with a nearest interpolation</li></ul></li>
<li>output resolution is downsampled to 1/2. then upsample with a bilinear interpolation</li></ul></li>
<li>3D transformer encoder and linear decoder<ul>
<li>encoder: use <a href="https://pytorch.org/vision/main/models/video_mvit.html" target="_blank">MViTv2-s</a> of the PyTorch official implementation and a pre-trained model<ul>
<li>modify forward function to get each scale output</li>
<li>replace MaxPool3d into MaxPool2d for the reproducibility</li></ul></li>
<li>decoder: a single linear and patch expanding to upscale low resolutions 3D images<ul>
<li>patch expanding is from <a href="https://arxiv.org/abs/2105.05537" target="_blank">Swin-Unet</a></li></ul></li>
<li>output resolution is downsampled to 1/2 or 1/4, then upsample with a bilinear interpolation<br>
</li></ul></li>
</ul>
<h3>Training</h3>
<ul>
<li>amp</li>
<li>torch.compile</li>
<li>label smoothing</li>
<li>cutout</li>
<li>cutmix</li>
<li>mixup</li>
<li>data augmentation<ul>
<li>referred <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> 's notebook: <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training" target="_blank">2.5d segmentaion baseline [training]</a></li>
<li>referred <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a> and <a href="https://www.kaggle.com/mipypf" target="_blank">@mipypf</a> 's</li></ul></li>
<li>patch_size=224 and stride=112<ul>
<li>stride=75 or stride=56 didn't work<br>
</li></ul></li>
</ul>
<h3>Inference</h3>
<ul>
<li>fp16 inference</li>
<li>stride=75<ul>
<li>better than stride=112</li></ul></li>
<li>ignore edge of output prediction<ul>
<li>use only the red area of prediction (figure below)<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2Fc309c518581d183e62897d4249eb7e7e%2Fimage.png?generation=1686801349236365&alt=media" alt=""></li></ul></li>
</ul>
<h2>Code & inference notebook</h2>
<ul>
<li>tattaka & ron part<ul>
<li>training code: <a href="https://github.com/mipypf/Vesuvius-Challenge/tree/winner_call/tattaka_ron" target="_blank">https://github.com/mipypf/Vesuvius-Challenge/tree/winner_call/tattaka_ron</a></li>
<li>inference code: <a href="https://www.kaggle.com/code/mipypf/ink-segmentation-2-5d-3dcnn-resnet3dcsn-fp16fold01?scriptVersionId=132226669" target="_blank">https://www.kaggle.com/code/mipypf/ink-segmentation-2-5d-3dcnn-resnet3dcsn-fp16fold01?scriptVersionId=132226669</a></li></ul></li>
<li>yukke42 part<ul>
<li>training code: <a href="https://github.com/yukke42/kaggle-vesuvius-challenge-ink-detection" target="_blank">https://github.com/yukke42/kaggle-vesuvius-challenge-ink-detection</a></li></ul></li>
</ul>
<p>The local score for val_ink_id=1 was about 0.69 in ensemble (also 0.7, but failed to submit)</p>
<p><a href="https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/csn/README.md" target="_blank">https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/csn/README.md</a></p>
<p><strong><em>Update</em></strong><br>
Submission of tattaka and ron's part was set to public!</p>
<ul>
<li>training code: <a href="https://github.com/mipypf/Vesuvius-Challenge/tree/winner_call/tattaka_ron" target="_blank">https://github.com/mipypf/Vesuvius-Challenge/tree/winner_call/tattaka_ron</a></li>
<li>inference code: <a href="https://www.kaggle.com/code/mipypf/ink-segmentation-2-5d-3dcnn-resnet3dcsn-fp16fold01?scriptVersionId=132226669" target="_blank">https://www.kaggle.com/code/mipypf/ink-segmentation-2-5d-3dcnn-resnet3dcsn-fp16fold01?scriptVersionId=132226669</a></li>
</ul> |
Vesuvius Challenge - Ink Detection | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we want to thank the hosts who brought us a great opportunity to be a part of resurrecting an ancient library, which is really awesome! And congrats to all winners!</p>
<h2>Summary</h2>
<p>Our final solution chosen is a 15-unet-ensemble, which only use ir-CSN as 3d encoder with simple mean pooling bridging to 2d decoder. This idea refers to <a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a>'s Notebook: <a href="https://www.kaggle.com/code/samfc10/vesuvius-challenge-3d-resnet-training" target="_blank">Vesuvius Challenge - 3D ResNet Training</a>. And our training and inference pipeline refer to <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a>'s great scalable sharing: <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training/comments" target="_blank">2.5d segmentaion baseline</a>. Let me explain the details below.</p>
<h2>Dataset</h2>
<h3>Fold split</h3>
<p>At first we just used the original 3 fragments for our local validation, and we got a very bad score on our first ensemble, of which fold 2 scored the worst. So we thought it might be caused by the smaller number of training samples on fold 2.</p>
<p>We split fragment 2 to 3 fragments referring to <a href="https://www.kaggle.com/junxhuang" target="_blank">@junxhuang</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/398329#2201953" target="_blank">comment</a> and <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/401667" target="_blank">discussion</a>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9735064%2F7cbdc92edfa4491a4405db81528f99ce%2F1.png?generation=1686897867291657&alt=media" alt=""></p>
<p>And we found it interesting that our best public score of 0.77 is a single ir-CSN-r50 unet trained on fold 2 with 4-rotation tta, which gets 0.66 locally and 0.73 without tta. Of all our experiments, fold 2 always gets a high score on the public lb compared to other folds.</p>
<h3>Slice choose</h3>
<p>We struggled for a long time over how many slices would bring us a higher score. We chose slices following <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a>'s baseline, and experimented with <code>8*i</code> slices with i selected from 1, 2, 3, 4, 6. 24, 32 and 48 slices all gave us robust performance according to lb and cv. After seeing <a href="https://www.kaggle.com/pavelgonchar" target="_blank">@pavelgonchar</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/403348#2235071" target="_blank">discussion</a>, we stayed with 24 slices for most of our experiments.</p>
<h3>Patch size</h3>
<p>We spent almost 2/3 of the time tuning models on 224x224 cropping-size patches, because the smaller the size, the more training samples we can get. A cropping stride of 224//2 gave us 13272 samples in total, which is sufficient to train an r50-type unet. The most robust single r50-type model trained with 224x224 cropping size and 224//2 cropping stride is the fold 5 one, scoring 0.74 (with 4-rotation tta) on lb and 0.74 cv.</p>
<p>In the last two weeks, after reading many brilliant winning solutions, e.g. <a href="https://www.kaggle.com/hesene" target="_blank">@hesene</a>'s <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337468" target="_blank">3rd place solution</a>, we decided to use bigger resolution and multi-size cropping inference for our final submission, because it is intuitive that a big cropping size lets the model see an entire character. But there is a problem: the bigger the patch we crop, the fewer samples we get. When using an 864x864 patch with stride 864//6, our r152-type unet can easily overfit within 2 epochs. So we made some trade-offs there, running experiments on 500-700 size patches, and these sizes worked for us. The best single big-resolution model is a 576x576 patch-size r152-type unet on fold 5, scoring 0.77 locally.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9735064%2F0598a60aad240ecc111899ca897d5975%2F2.png?generation=1686897881870354&alt=media" alt=""></p>
<h2>Models</h2>
<h3>Architecture</h3>
<p>Only unet is used for our segmentation models.</p>
<h3>Backbone</h3>
<p>In most of our experiments, we chose ir-CSN-r50 as the encoder of our unets. After reading <a href="https://www.kaggle.com/selimsef" target="_blank">@selimsef</a>'s <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/364837" target="_blank">4th place solution</a> and <a href="https://www.kaggle.com/nvnnghia" target="_blank">@nvnnghia</a>'s <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/391635" target="_blank">1st place solution</a>, we stuck to openmmlab's great implementation and pretrained weights.</p>
<h3>Decoder</h3>
<p>This is our first time participating in a segmentation competition. At the very beginning, we spent a lot of time trying to build a better decoder, but after reading some solutions, we found that the decoder channels are the least critical part. From the experiments, lightweight decoders perform well enough, so all our unets' decoder channels start from 256. Our decoder code is based on <a href="https://github.com/selimsef/xview3_solution" target="_blank">selimsef's xview3_solution</a>, with some modifications.</p>
<h3>Backbone-data align</h3>
<p>At first we fed the model a 1-channel voxel. We needed to sum the pretrained first-conv weights down to 1 channel like <a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a> did.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9735064%2Fe683a7577522d400d937e953a738270f%2F3.png?generation=1686897894991869&alt=media" alt=""></p>
<p>But after reading <a href="https://www.kaggle.com/pelegshilo" target="_blank">@pelegshilo</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/395676#2196236" target="_blank">comment</a> and <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s great <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972#2302894" target="_blank">topic</a>, we thought maybe we could make full use of the pretrained first-conv weights and let the model choose the best z location. We simply concatenate 3 overlapped voxels from the temporal dimension into the channel dimension.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9735064%2Fd1c1046526fca219835eaff4e0bdb42e%2F4.png?generation=1686897907062423&alt=media" alt=""></p>
<p>In our experiments, an input voxel of size (28,224,224) with overlapped start indices [0,2,4] gives 3 24-slice voxels (see the sketch below), boosting cv by 0.01 to 0.03 on different folds.</p>
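<p>A small sketch of turning one 28-slice voxel into a 3-channel, 24-slice input (the [0, 2, 4] start offsets are the ones quoted above; the helper name is mine):</p>
<pre><code>import torch

def overlap_to_channels(voxel, starts=(0, 2, 4), depth=24):
    """voxel: (28, H, W) -> (3, 24, H, W): overlapped z-windows become the input channels."""
    return torch.stack([voxel[s:s + depth] for s in starts], dim=0)

voxel = torch.randn(28, 224, 224)
print(overlap_to_channels(voxel).shape)   # (3, 24, 224, 224) - matches the 3-channel pretrained first conv
</code></pre>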
<h2>Training settings</h2>
<h3>Data correlated</h3>
<ul>
<li><p>We use <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training/comments" target="_blank">2.5d segmentaion baseline</a> provided augmentations and change the probabilities for different experiment. </p></li>
<li><p>We set the ShiftScaleRotate's rotate_limit to 180 referring to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972#2302894" target="_blank">topic</a>.</p></li>
<li><p>Using mixup and cutmix boosts cv differently on each fold and with different mix probabilities. We comment out cutout when using cutmix.</p></li>
<li><p>Segmenting on normalized pixels works, referring to <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar's</a> <a href="https://www.kaggle.com/code/yoyobar/3d-resnet-baseline-inference" target="_blank">sharing</a>.</p></li>
<li><p>We use clean input meaning we filtered out all zero input.</p></li>
</ul>
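<p>For reference, a minimal albumentations pipeline along these lines might look like the following; apart from <code>rotate_limit=180</code>, the probabilities and limits here are placeholders rather than the exact values we used:</p>
<pre><code>import albumentations as A

train_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    # rotate_limit=180 so crops can appear at any orientation
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=180, p=0.75),
    A.RandomBrightnessContrast(p=0.5),
])
</code></pre>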
<h3>Model correlated</h3>
<ul>
<li><p>We use the AdamW optimizer and a one-cycle scheduler.</p></li>
<li><p>We use BCE and hard Dice with different weighted averages in our experiments.</p></li>
<li><p>We use EMA, and it boosts CV sometimes.</p></li>
<li><p>DDP training and validation are used when training models on big-resolution patches.</p></li>
<li><p>Gradient accumulation is used when training models on big resolution patches. We use batch size 16 in all our experiments.</p></li>
</ul>
<h2>Inference</h2>
<ul>
<li>We use a smaller cropping stride than the one used in training (see the sketch after this list).</li>
<li>Multi-size cropping inference ensembling helped us survive such score shrinkage.</li>
<li>We do not tune a threshold for each model's prediction; we just use 0.5.</li>
</ul>
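<p>For illustration, a minimal sliding-window inference sketch with an overlapping stride and per-pixel averaging (a generic reconstruction, not our exact pipeline; <code>model</code> and <code>volume</code> are placeholders):</p>
<pre><code>import numpy as np
import torch

@torch.no_grad()
def sliding_window_predict(model, volume, size=224, stride=28):
    # volume: (C, H, W) float tensor; overlapping crop predictions are
    # accumulated and divided by the per-pixel count
    _, H, W = volume.shape
    prob = np.zeros((H, W), dtype=np.float32)
    count = np.zeros((H, W), dtype=np.float32)
    for y in range(0, max(H - size, 0) + 1, stride):
        for x in range(0, max(W - size, 0) + 1, stride):
            crop = volume[:, y:y + size, x:x + size].unsqueeze(0)
            p = torch.sigmoid(model(crop))[0, 0].cpu().numpy()
            prob[y:y + size, x:x + size] += p
            count[y:y + size, x:x + size] += 1.0
    return prob / np.maximum(count, 1.0)
</code></pre>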
<p>Our final solution is a 224-384-576, 15-unet ensemble with no TTA, which scores 0.76 on public. In this setting we use 28-24 slice selection, meaning slice file ids range from 18 to 46, copied into 3 24-slice voxels with slice start indices [0,2,4]. For the 576-size models we scale up to r152 and the rest are r50. This gives us 3rd place on private.</p>
<p>We find that rotation TTA does not boost much once some big-resolution models are added.</p>
<p>But we have 6 different ensemble settings scoring higher than 0.682 on private. The best one scores 0.687873 on private with 0.749338 on public. In that setting we add 5 more r152 unets trained with input size 640, which use 36-32 slice selection with start indices [0,2,4]. We didn't choose it because of its lower public LB and some signs of overfitting between train loss and validation loss.<br>
Here we list two of them.</p>
<table>
<thead>
<tr>
<th>ensemble models</th>
<th>input size</th>
<th>number of slices</th>
<th>cropping stride</th>
<th>public score</th>
<th>private score</th>
</tr>
</thead>
<tbody>
<tr>
<td>r50,r50,r152</td>
<td>224,384,576</td>
<td>28-24,28-24,28-24</td>
<td>224//8,384//6,576//8</td>
<td>0.763384</td>
<td>0.681137</td>
</tr>
<tr>
<td>r50,r50,r152,r152</td>
<td>224,384,576,640</td>
<td>28-24,28-24,28-24, 36-32</td>
<td>224//2,384//3,576//4,640//5</td>
<td>0.749338</td>
<td>0.687873</td>
</tr>
<tr>
<td>r152</td>
<td>576</td>
<td>28-24</td>
<td>576//12</td>
<td>0.720028</td>
<td>0.683273</td>
</tr>
</tbody>
</table>
<h2>Tried but didn't work</h2>
<ul>
<li>2d models</li>
<li>pure 3d models</li>
<li>stacked unet which uses a 2d denoiser</li>
<li>training a 3d denoiser</li>
<li>classification branch for stronger supervision</li>
<li>adding maxpooling and conv layers between encoder and decoder</li>
</ul>
<h2>Acknowledgement</h2>
<p>We want to thank all the kagglers we mentioned above. We have learned a lot from your sharing, and there is still a lot we need to learn.</p>
<p>We also want to thank the Kaggle community; everyone here freely shares their insights, which is really helpful for one's growth.</p>
<p>Again, thanks to the hosts who brought us such a great competition! We have seen a lot of brilliant teams with wonderful solutions.</p>
<p>This is the first Kaggle competition for my teammate <a href="https://www.kaggle.com/yufujiang" target="_blank">@yufujiang</a>, who is also my college roommate; many thanks to him. And this is my 3rd competition on Kaggle, and I finally got my first medal!</p>
<h2>Code</h2>
<p><a href="https://www.kaggle.com/code/traptinblur/3rd-place-ensemble-576-8-384-6-224-8/notebook" target="_blank">3rd place inference code</a><br>
<a href="https://github.com/traptinblur/VCID_2023_3rd_place_code/tree/main" target="_blank">3rd place training code repo</a></p>
<p>Our training code is released <a href="https://github.com/traptinblur/VCID_2023_3rd_place_code/tree/main" target="_blank">here</a>.<br>
And we release our checkpoints for <a href="https://www.kaggle.com/code/traptinblur/3rd-place-ensemble-576-8-384-6-224-8/notebook" target="_blank">inference notebook</a>.</p> |
Vesuvius Challenge - Ink Detection | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Acknowledgement</h1>
<p>I'm very grateful for the organization of this very challenging and fantastic competition.<br>
I feel like there are a lot of really great people here, and I respect them all.<br>
I would like to express my gratitude to both the organizers and the participants of the competition.</p>
<p><br><br>
<br></p>
<h1>Solution</h1>
<p>I would like to describe my solution in the order of contribution of performance improvement.<br>
<strong>The number of ★ in the description below means the degree of contribution to performance improvement.</strong></p>
<p><br><br>
<br></p>
<h3>1. Temporal random crop & random paste & random cutout (★★★★★)</h3>
<h4>1.1. Thinking about the data itself</h4>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5725749%2Fb9819315244312a47802b241517dc57e%2F4.PNG?generation=1686971322910074&alt=media" alt=""></p>
<p>As you can see in the figure above, I thought that particular layers of one fragment would not correspond to the same layers of other fragments. Using this observation, I came up with an augmentation method that gives strong regularization to the model by reflecting the characteristics of the data.</p>
<h4>1.2. Application</h4>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5725749%2F8601353d50ccdc56721bfc1341395bff%2F8.PNG?generation=1686981811287731&alt=media" alt=""></p>
<p>As you can see in the figure above, there are 3 steps.<br>
1) temporal random crop</p>
<ul>
<li>First, I decided to use a total of 22 layers (21-42) out of the 65 layers. From these 22 layers, a random number of layers in the range cropping_min to cropping_max is cropped. After several ablation studies, I set cropping_min = 12, cropping_max = 22.</li>
</ul>
<p>2) random paste</p>
<ul>
<li>The randomly cropped layers are pasted into a random area of the 22 slots while maintaining their sequential order.<br>
<strong>This crop & paste method lets the model learn from a wide, varied range of layers rather than a fixed region of the fragments, allowing generalization across fragments.</strong></li>
</ul>
<p>3) random cutout</p>
<ul>
<li>Similar to spatial cutout augmentation, I applied temporal cutout augmentation. Among the pasted layers, 0 to 2 random layers are filled with 0 values.</li>
</ul>
<p>The three processes above can be seen at a glance in the PyTorch Dataset code below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5725749%2F2cb3e87617e91bbff8df5dbdf213838e%2F9.PNG?generation=1686982042582252&alt=media" alt=""></p>
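<p>For readers who cannot view the screenshot, here is a minimal NumPy sketch of the same three steps (my own re-implementation of the idea, not the exact Dataset code):</p>
<pre><code>import numpy as np

def temporal_crop_paste_cutout(volume, crop_min=12, crop_max=22, max_cut=2):
    # volume: (22, H, W) slices.
    # 1) temporal random crop: take a random contiguous block of layers
    # 2) random paste: place it at a random depth of an empty 22-slice volume,
    #    keeping the sequential order
    # 3) random cutout: zero out 0..max_cut random layers of the pasted block
    depth = volume.shape[0]
    n_crop = np.random.randint(crop_min, crop_max + 1)
    src = np.random.randint(0, depth - n_crop + 1)
    block = volume[src:src + n_crop]

    out = np.zeros_like(volume)
    dst = np.random.randint(0, depth - n_crop + 1)
    out[dst:dst + n_crop] = block

    n_cut = np.random.randint(0, max_cut + 1)
    for idx in np.random.choice(n_crop, size=n_cut, replace=False):
        out[dst + idx] = 0
    return out
</code></pre>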
<p><br><br>
<br></p>
<h3>2. Weight choice with low false positive (★★★★)</h3>
<p>As shown in the figure below, I saved the predicted mask for each epoch. The file name contains the epoch, score, tp, fp, and fn, of which the fp value was the most important. <strong>Even if the scores were similar, a large fp tended to hurt the public score badly.</strong> Therefore, I wanted to select weights with a small fp value and a high score, and I selected an appropriate fp value for each fold through several submissions. In addition, I tried to increase generalization performance by ensembling models across a range of fp values.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5725749%2Faa46536cf349929607790d1c3a32a21c%2F55.PNG?generation=1687062720342906&alt=media" alt=""></p>
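<p>A minimal sketch of this kind of bookkeeping (a hypothetical helper, not the author's code): compute tp/fp/fn for the binarized prediction and embed them in the saved file name so that checkpoints with low fp can be picked later.</p>
<pre><code>import numpy as np

def save_epoch_mask(pred, target, epoch, out_dir=".", thr=0.5):
    # pred: float probabilities, target: binary ground-truth mask
    pred_bin = (pred > thr).astype(np.uint8)
    tp = int(((pred_bin == 1) & (target == 1)).sum())
    fp = int(((pred_bin == 1) & (target == 0)).sum())
    fn = int(((pred_bin == 0) & (target == 1)).sum())
    # Dice-style score, used here only for the file name
    score = 2 * tp / max(2 * tp + fp + fn, 1)
    name = f"{out_dir}/epoch{epoch}_score{score:.3f}_tp{tp}_fp{fp}_fn{fn}.npy"
    np.save(name, pred_bin)
    return name
</code></pre>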
<p><br><br>
<br></p>
<h3>3. Used models & ensemble (★★★★)</h3>
<p>I ensembles 3 models.</p>
<ul>
<li><p>3D resnet152, 3D resnet200, 3D resnext101 with unet-like decoder.</p></li>
<li><p>The referenced models were first inspired by JEBASTIN NADAR (<a href="url" target="_blank">https://www.kaggle.com/code/samfc10/vesuvius-challenge-3d-resnet-training</a>) and then adapted. Thanks for your effort, and I respect you. <br>
3D resnet152, 3D resnet200 is from <a href="https://github.com/kenshohara/3D-ResNets-PyTorch" target="_blank">https://github.com/kenshohara/3D-ResNets-PyTorch</a><br>
3D resnext101 is from <a href="https://github.com/okankop/Efficient-3DCNNs" target="_blank">https://github.com/okankop/Efficient-3DCNNs</a><br>
All models have mit licenses and are not against the rules.</p></li>
<li><p>The 3D ResNets were pretrained on Kinetics-710, and 3D ResNeXt101 was pretrained on Kinetics-600.<br>
<strong>There was a huge difference between models pretrained on Kinetics and those that were not.</strong> Therefore, during the competition, I tried to find models trained on Kinetics-600 or -700.</p></li>
<li><p>All models are combined with a simple unet-like decoder. Features from the 3D encoder are upsampled and concatenated in the decoder.<br>
Details are in the code.</p></li>
</ul>
<p><br><br>
<br></p>
<h3>4. Label smoothing(0.3), cutmix augmentation, data clipping (★★★)</h3>
<ul>
<li>Cross entropy with label smoothing improved performance. After an ablation study, I set the label smoothing parameter to 0.3</li>
<li>Cutmix augmentation improved performance in cross validation & public score</li>
<li>Inspired by AJLAND (<a href="url" target="_blank">https://www.kaggle.com/code/ajland/eda-a-slice-by-slice-analysis</a>), I clipped the image values below 50 and above 200. Thanks also for your effort; respect to you.<br>
(image = np.clip(image, 50, 200))</li>
</ul>
<p><br><br>
<br></p>
<h3>5. Belief for local cross validation (★★)</h3>
<p>As shown in the table below, the two models are the ones I finally submitted.<br>
<strong>Ensemble 2 had a lower public score than Ensemble 1, but performed better in cross validation.</strong> There were other models that could have been adopted, but since this one did well in cross validation it was adopted, and <strong>its performance improved more on private, resulting in a better final placement.</strong></p>
<table>
<thead>
<tr>
<th>models</th>
<th>cv score</th>
<th>threshold</th>
<th>public score</th>
<th>private score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ensemble 1</td>
<td>0.66 / 0.762/ 0.723 / 0.68</td>
<td>0.5</td>
<td>0.795835</td>
<td>0.663542</td>
</tr>
<tr>
<td>Ensemble 2</td>
<td>0.67 / 0.764 / 0.724 / 0.69</td>
<td>0.47</td>
<td>0.789024</td>
<td>0.674544</td>
</tr>
</tbody>
</table>
<p><br><br>
<br></p>
<h3>6. Other training details (★)</h3>
<p><br></p>
<h4>6.1. loss, optimizer, scheduler</h4>
<ul>
<li>loss : CrossEntropyLoss with label smoothing 0.3</li>
<li>optimizer = AdamW(lr=1e-4)</li>
<li>scheduler = cosine annealing with warmup</li>
</ul>
<p><br></p>
<h4>6.2. cross validation strategy & result</h4>
<p>I trained the model with 4 folds</p>
<ul>
<li>fold1 : fragment 1</li>
<li>fold3 : fragment 3</li>
<li>fold2, 4 : fragment 2 divided into 2 sub-fragments (9506 x 14830 -> (4300 x 14830) & (5206 x 14830))</li>
</ul>
<p>The results are shown in the table below.<br>
Ensemble means an ensemble of the 3 models with equal weights.</p>
<table>
<thead>
<tr>
<th>models</th>
<th>cv score</th>
<th>threshold</th>
<th>public score</th>
<th>private score</th>
</tr>
</thead>
<tbody>
<tr>
<td>3D Resnet152</td>
<td>0.64 / 0.71 / 0.71 / 0.69</td>
<td>0.5</td>
<td>0.78</td>
<td>-</td>
</tr>
<tr>
<td>3D Resnet200</td>
<td>0.66 / 0.71 / 0.69 / 0.66</td>
<td>0.5</td>
<td>0.77</td>
<td>-</td>
</tr>
<tr>
<td>3D Resnext101</td>
<td>0.61 / 0.72 / 0.71 / 0.64</td>
<td>0.5</td>
<td>0.77</td>
<td>-</td>
</tr>
<tr>
<td>Ensemble</td>
<td>0.66 / 0.762/ 0.723 / 0.68</td>
<td>0.5</td>
<td>0.795835</td>
<td>0.663542</td>
</tr>
<tr>
<td>Ensemble</td>
<td>0.67 / 0.764 / 0.724 / 0.69</td>
<td>0.47</td>
<td>0.789024</td>
<td>0.674544</td>
</tr>
</tbody>
</table>
<p><br></p>
<h4>6.3. image augmentation</h4>
<p><code>all probabilities set to 0.6</code></p>
<ul>
<li>Horizontal, Vertical Flip</li>
<li>RandomGamma(limit=(50, 150))</li>
<li>RandomBrightnessContrast(brightness=0.2, contrast=0.2)</li>
<li>Gaussian Noise(10, 30), Gaussian blur</li>
<li>shift 0.1, scale 0.1, rotate 360</li>
<li>coarsedropout(holes=4, size=0.2 * image_size)</li>
</ul>
<h4>6.4 Data Preparation</h4>
<ul>
<li>stride rate 3</li>
<li>image size 256</li>
</ul>
<h4>6.5 etc</h4>
<ul>
<li>To achieve consistent and stable results, I selected the best approach based on threshold 0.5.</li>
<li>inference stride is (256 / 5)</li>
</ul>
<p><br><br>
<br></p>
<h3>7. Tried but not worked</h3>
<ul>
<li><p>mixup augmentation</p></li>
<li><p>temporal channel shuffle augmentation</p></li>
<li><p>3D transformer encoders (UniFormerV2, Video Swin Transformer) - could not train them… I don't know why the loss didn't decrease.</p></li>
<li><p>spatial TTA</p></li>
<li><p>temporal TTA - because of my crop&paste method mentioned above, I tried temporal TTA. In other words, I tried TTA by cropping 22, 20, and 18 layers in various ways and then averaging the 3 predictions; sometimes performance increased and sometimes it decreased. I didn't apply it because it took too much time.</p></li>
<li><p>other losses(tversky focal loss, fbeta loss)</p></li>
</ul>
<p><br><br>
<br></p>
<h3>8. Code</h3>
<p>8.1. Github version(based on Docker)</p>
<ul>
<li><a href="https://github.com/AhnHeeYoung/Competition/tree/master/kaggle" target="_blank">https://github.com/AhnHeeYoung/Competition/tree/master/kaggle</a></li>
</ul>
<p>8.2. Kaggle Notebook version</p>
<ul>
<li>Training : <a href="https://www.kaggle.com/code/ahnheeyoung1/ink-detection-training" target="_blank">https://www.kaggle.com/code/ahnheeyoung1/ink-detection-training</a></li>
<li>Inference : <a href="https://www.kaggle.com/code/ahnheeyoung1/ink-detection-inference/notebook?scriptVersionId=136610637" target="_blank">https://www.kaggle.com/code/ahnheeyoung1/ink-detection-inference/notebook?scriptVersionId=136610637</a></li>
</ul>
<p>I published the (Docker-based) code on my GitHub.<br>
(<a href="https://github.com/AhnHeeYoung/Competition/tree/master/kaggle" target="_blank">https://github.com/AhnHeeYoung/Competition/tree/master/kaggle</a>)</p>
<p>Thank you!</p> |
Vesuvius Challenge - Ink Detection | 55th silver solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for organizing this interesting competition.<br>
I joined this competition 3 weeks before the close, so I couldn't do much trial and error.<br>
However, reading many discussions and public notebooks helped me build a baseline faster.<br>
Thank you, all kagglers, for sharing such nice content.</p>
<h1><strong>Reference</strong></h1>
<ol>
<li><a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> 's 2.5d segmentation <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training" target="_blank">training</a>, <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-inference" target="_blank">inference</a></li>
<li><a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> 's <a href="https://www.kaggle.com/code/yoyobar/3d-resnet-baseline-inference" target="_blank">3D ResNet baseline [inference]</a></li>
<li><a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a> 's <a href="https://www.kaggle.com/code/samfc10/vesuvius-challenge-3d-resnet-training" target="_blank">Vesuvius Challenge - 3D ResNet Training</a></li>
<li><a href="https://scrollprize.org/tutorial4" target="_blank">official tutorial page</a></li>
<li><a href="https://youtu.be/g-7-Xg75CCI?t=6013" target="_blank">youtube in official tutorial page</a></li>
</ol>
<h1><strong>Solution</strong></h1>
<ul>
<li>3D ResNet (resnet34) ensemble<br>
I used 3-fold group CV (fragment_id based), chose the best model for each fold, and built an ensemble of them.<br>
The base model is based on Ref 3, with (192, 192, 16) as the input size.<br>
The reason I use a 3D ResNet is to capture the feature spreading in the z direction, which is caused by ink bleeding.<br>
As we can see in Ref 4, information about spatial frequency may be important for the model, so not resizing the input image was very, very important, I think.🤔 I should have tried the <a href="https://pytorch.org/docs/stable/fft.html" target="_blank">fast fourier transform</a>..😑<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4250230%2F1c0effbbab9a491994d11cfdb89ffeaa%2Fsem-alpha.png?generation=1686965040652713&alt=media" alt="Ref 4"></li>
<li>Data augmentation based on Ref 1.<br>
To my surprise, blur augmentation was important. Without blur-related augmentation, CV goes down by 1~2%.<br>
Though Ref 5 says that adding blur makes it harder for us to distinguish the ink, some blur played an important role.🤔<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4250230%2F9da9c905c7e480e5ac5bb32665d0a748%2Fink_blured.png?generation=1686963393693952&alt=media" alt="Ref 5"></li>
<li>TTA based on Ref 2.</li>
<li>th=0.50<br>
I couldn't figure out how to optimize the threshold for the ensemble model, so I kept it fixed.</li>
</ul>
<h1><strong>environment for computing</strong></h1>
<ul>
<li>Google Colab Pro+<br>
We can run at most 3 sessions in it, so I ran folds 1~3 at the same time.<br>
1 epoch with batchsize=16 took 30 min on an A100.</li>
</ul>
<h1><strong>not worked for me</strong></h1>
<ul>
<li>larger input<br>
I tried from 256 to 512. </li>
<li>reducing labels near the edges with cv2.erode<br>
the rightmost picture is obtained by subtracting the eroded (shrunk) label from the real label<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4250230%2F3b75e6ea3bc351085102683f77432eb3%2Ferosion1.png?generation=1686964831123283&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4250230%2F27141862c0e134018f485096ae191500%2Ferosion2.png?generation=1686964854760693&alt=media" alt=""></li>
<li>various loss function</li>
</ul> |
Vesuvius Challenge - Ink Detection | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1><strong>5th place solution</strong></h1>
<p>I would like to say thank you to organizers for interesting competition. And for users <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> and <a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a> for their great public notebooks which gave me good start in this competition. My solution is not very sophisticated and based on 3D Resnet models, however it gives good result on both public and private LB.</p>
<p><strong>Models:</strong></p>
<p>3dResnet with architectures resnet18 and resnet34</p>
<p><strong>Data preprocessing:</strong></p>
<p>empty tiles which do not belong to papyrus tissue are ignored</p>
<p>split: 4 folds (1,2a,2b,3)</p>
<p>tile size:256</p>
<p>stride:128</p>
<p>32 slices between 16 and 48 indices</p>
<p><strong>Training:</strong></p>
<p>epoches: 50</p>
<p>fp16</p>
<p>loss: BCE</p>
<p>optimizer: AdamW</p>
<p>scheduler: GradualWarmupScheduler</p>
<p>hard augmentation: mosaic augmentation + albumentation augmentation</p>
<p>mosaic augmentation:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1906164%2Ffcc229a2841ba660e0ef8e93518a1c92%2Fmosaic.png?generation=1686922603788993&alt=media" alt=""></p>
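<p>For illustration, a minimal 2D mosaic sketch (a generic version, not the author's implementation): four random crops are tiled into the quadrants of a single training tile, and the masks are combined the same way.</p>
<pre><code>import numpy as np

def mosaic(images, masks, tile=256):
    # images/masks: lists of 4 arrays of shape (tile, tile) or (tile, tile, C)
    # quadrant layout:  [0 | 1]
    #                   [2 | 3]
    half = tile // 2
    img_out = np.zeros_like(images[0])
    msk_out = np.zeros_like(masks[0])
    for k, (y, x) in enumerate([(0, 0), (0, half), (half, 0), (half, half)]):
        sy = np.random.randint(0, tile - half + 1)
        sx = np.random.randint(0, tile - half + 1)
        img_out[y:y + half, x:x + half] = images[k][sy:sy + half, sx:sx + half]
        msk_out[y:y + half, x:x + half] = masks[k][sy:sy + half, sx:sx + half]
    return img_out, msk_out
</code></pre>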
<p>albumentation augmentation:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1906164%2F2f1ca34594848cb7464023bf1f379d38%2Falbu.png?generation=1686921911927180&alt=media" alt=""></p>
<p><strong>Inference:</strong></p>
<p>final ensemble models: 2 folds(1,2a) 3dResnet18 models + 4 folds 3dResnet34 models</p>
<p>tile size: 1024</p>
<p>stride:512</p>
<p>output predictions close to the tile's edges are ignored</p>
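<p>A minimal sketch of such edge masking (an illustration, not the exact code): only the central region of each tile's prediction is accumulated.</p>
<pre><code>import numpy as np

def accumulate_center_only(prob, count, tile_pred, y, x, margin=64):
    # tile_pred: (T, T) prediction for a tile placed at (y, x) in the fragment;
    # only the central (T - 2*margin)^2 region contributes to the accumulators
    t = tile_pred.shape[0]
    inner = tile_pred[margin:t - margin, margin:t - margin]
    prob[y + margin:y + t - margin, x + margin:x + t - margin] += inner
    count[y + margin:y + t - margin, x + margin:x + t - margin] += 1.0
</code></pre>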
<p>threshold: 0.5</p>
<p>TTA: 4 rotates, h/v flips</p>
<p>denoising</p>
<p>Result: 0.804 on Public LB and 0.668 on Private LB</p>
<p><strong><a href="https://github.com/aksell1981/inkdet_solution" target="_blank">Training code</a></strong><br>
<strong>Inference notebook:</strong> <a href="https://www.kaggle.com/code/aksell7/3dresnet18-3dresnet34-infer/notebook" target="_blank">https://www.kaggle.com/code/aksell7/3dresnet18-3dresnet34-infer/notebook</a></p>
<p>Thank you!<br>
Congrats on the 3rd place!<br>
Yes, I used models trained on size 256 with inference at size 1024.<br>
Inference at higher resolution always gives me a boost with these models.</p>
<p>Thank you!</p> |
Vesuvius Challenge - Ink Detection | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/mkotyushev" target="_blank">@mkotyushev</a> Thank you for comment! I added the visualization of predictions (see ## 4.1).</p>
<p><a href="https://www.kaggle.com/lucasvw" target="_blank">@lucasvw</a> Thank you for comment ! </p>
<p>1) No, I did not intentionally include 90-degree rotations in the augmentation. I believed that it could introduce noise in this approach, as the direction of the characters was already known.</p>
<p>2) The usage of IR images was not that complex; I simply stacked the same images and used them as training images. I also used the labels in the same positions as before.</p>
<p>3) Regarding pretraining, the pretrained model I'm using is not specialized in characters. Therefore, I thought that pretraining in the domain of characters and then further fine-tuning could enhance its sensitivity.</p>
<p>Thank you very much for organizing such an interesting competition. I am greatly thankful to the hosts and the Kaggle staff.</p>
<p>Continuing from the previous competition, I am delighted to have won solo gold medal again, with a total of four medals (Table 1, NLP × 2, CV × 1). Additionally, it was my first time attempting the segmentation task, and I began with <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> excellent notebook <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training/comments" target="_blank">here</a>. I am truly grateful for that.</p>
<h1>1. Summary</h1>
<p>My approach involved an ensemble of EfficientNet and SegFormer models. I believed that the test data was rotated, as shown in other discussions, so I rotated the images during inference, which gave a significant boost early on (Public LB 0.58 → 0.74). Additionally, my originality came from incorporating IR images into the training data, which gave me a CV score increase of 0.01 and an LB score increase of 0.01. I will explain the details below.</p>
<h1>2. Inference</h1>
<h2>2.1 About test data and inference flow</h2>
<p>Based on the brief LB probing and the information provided on the competition page, I had an idea of what the test data might look like. Here is an image that represents my understanding:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Fb9c4e129fe3285876db0a7135f704edb%2Fsolution1.jpg?generation=1686795321876414&alt=media" alt=""></p>
<p>Therefore, I structured my inference code in the Kaggle notebook as follows:</p>
<ol>
<li>Concatenate(axis=1) fragments A and B.</li>
<li>Rotate the image clockwise.</li>
<li>Perform inference (original + h flip TTA).</li>
<li>Rotate the prediction countor-clockwise back to its original position.</li>
<li>Cut and encode each fragment A and B respectively.</li>
</ol>
<p>Of course, to reduce inference time, I skipped the inference for areas where the mask value was 0. Furthermore, instead of inferring fragment A and B separately, I concatenated them. This not only eliminated the 0 padding at the boundary between A and B but also allowed for continuous inference of the initial part of fragment B as a contiguous sequence. These led to a significant boost in my LB score (EfficientNet B4: 0.58 → 0.74)</p>
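<p>A minimal NumPy sketch of this flow (an illustration with a placeholder <code>predict</code> function, not the exact notebook code; it assumes both fragments have the same height, padding them beforehand if necessary):</p>
<pre><code>import numpy as np

def infer_rotated(frag_a, frag_b, predict):
    # 1) concatenate fragments A and B along the width (axis=1)
    # 2) rotate 90 degrees clockwise, 3) predict,
    # 4) rotate back counter-clockwise, 5) split into A and B again
    wa = frag_a.shape[1]
    joined = np.concatenate([frag_a, frag_b], axis=1)
    pred = predict(np.rot90(joined, k=-1))   # clockwise rotation
    pred = np.rot90(pred, k=1)               # back to the original orientation
    return pred[:, :wa], pred[:, wa:]
</code></pre>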
<h2>2.2 Threshold</h2>
<p>I believe many of you experienced the instability of the signal values. Therefore, I used the following function to rank the entire image and convert it to percentiles. Then, by applying a threshold to the ranked values, I obtained a stable threshold. This approach proved helpful not only during inference but also during ensembling. The 2nd place solution also used the same approach <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417255" target="_blank">here</a></p>
<pre><code>import numpy as np

def get_percentile(array):
    # rank all pixels, then rescale the ranks to [0, 1] so that thresholding
    # becomes a "top x percent of pixels" operation
    org_shape = array.shape
    array = array.reshape(-1)
    array = np.arange(len(array))[array.argsort().argsort()]
    array = array / array.max()
    array = array.reshape(org_shape)
    return array
</code></pre>
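<p>For example (my own usage illustration), thresholding the ranked prediction then simply keeps the top few percent of pixels:</p>
<pre><code># with a threshold of 0.96, roughly the top 4% of ranked pixels are kept
binary = (get_percentile(pred) > 0.96).astype(np.uint8)
</code></pre>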
<p>For the Public LB, the optimal threshold was found to be 0.96. However, using fragment 3, I conducted a simulation to observe the correlation between the partially optimal threshold (around 10%) and the threshold for the remaining 90%. As a result, I noticed that the threshold was overfitting for the 10% portion (likely reducing noise), while for the remaining 90%, it was better to slightly lower the threshold below the optimal value (aiming for clearer extraction of text). In fact, when comparing the same model, a threshold of 0.95 performed slightly better for the private LB(but less than 0.01). For the final submission, I used different models: sub1 with a threshold of 0.96 and sub2 with a threshold of 0.95.</p>
<h1>3. Training</h1>
<p>The following is an overview of the training process. Similar to inference, I created three sets of data and took their averages. It should be noted that SegFormer differs from CNN as it can only utilize 3 channels. As mentioned earlier, incorporating IR images resulted in improvements in both CV and LB scores.</p>
<h2>3.1 CNN + Unet</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Fa32d067916dee939f3301b72d8daf9f8%2Fsolution2.jpg?generation=1686795342236188&alt=media" alt=""></p>
<h2>3.2 SegFormer</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Fba6b7cdc7012775d09930b49179e0317%2Fsolution3.jpg?generation=1686795355254555&alt=media" alt=""></p>
<h2>3.3 Fine-tuned Parameters</h2>
<ul>
<li>Stride: image size // 4</li>
<li>Optimizer: Adam</li>
<li>Epochs: 20</li>
<li>Early stopping: 4</li>
<li>Scheduler: get_cosine_schedule_with_warmup (from transformers)</li>
<li>Warm-up: 0.1</li>
<li>Gradient norm: 10</li>
<li>Loss function: SoftBCEWithLogitsLoss (segmentation model in PyTorch)</li>
<li>Training excludes areas with a mask value of 0.</li>
<li>TTA: Horizontal flip</li>
</ul>
<h2>3.4 Cross Validation</h2>
<p>For submission1 I used 7-fold cross-validation, and for submission2 I used 10-fold cross-validation. Increasing the number of folds resulted in improvements in both CV and LB scores. I recall that increasing from 5-fold to 7-fold led to an improvement of approximately 0.1 in the Public LB score.</p>
<h1>4 Final result</h1>
<p>Ensemble was all mean value of predictions.</p>
<p>sub1 : th 0.96, cv 0.740, public LB 0.811570, private LB 0.661339</p>
<table>
<thead>
<tr>
<th>model</th>
<th>image size</th>
<th>kfold</th>
<th>cv</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>efficientnet_b7_ns</td>
<td>608</td>
<td>7 + fulltrain</td>
<td>0.712</td>
<td>0.80</td>
<td>0.64</td>
</tr>
<tr>
<td>efficientnet_b6_ns</td>
<td>544</td>
<td>7 + fulltrain</td>
<td>0.702</td>
<td>0.79</td>
<td>0.64</td>
</tr>
<tr>
<td>efficientnetv2_l_in21ft1k</td>
<td>480</td>
<td>7</td>
<td>0.707</td>
<td>0.79</td>
<td>0.65</td>
</tr>
<tr>
<td>tf_efficientnet_b8</td>
<td>672</td>
<td>7</td>
<td>0.716</td>
<td>0.79</td>
<td>0.64</td>
</tr>
<tr>
<td>segformer b3</td>
<td>1024</td>
<td>7</td>
<td>0.738</td>
<td>0.78</td>
<td>0.66</td>
</tr>
</tbody>
</table>
<p>sub2 : th 0.95,cv 0.746 , public LB 0.799563, private LB 0.654812</p>
<table>
<thead>
<tr>
<th>model</th>
<th>image size</th>
<th>kfold</th>
<th>cv</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>efficientnet_b7_ns</td>
<td>608</td>
<td>10</td>
<td>0.722</td>
<td>0.80</td>
<td>0.65</td>
</tr>
<tr>
<td>efficientnet_b6_ns</td>
<td>544</td>
<td>10</td>
<td>0.720</td>
<td>0.79</td>
<td>0.63</td>
</tr>
<tr>
<td>efficientnetv2_l_in21ft1k</td>
<td>480</td>
<td>10</td>
<td>0.717</td>
<td>0.79</td>
<td>0.65</td>
</tr>
<tr>
<td>segformer b3</td>
<td>1024</td>
<td>7</td>
<td>0.738</td>
<td>0.78</td>
<td>0.66</td>
</tr>
</tbody>
</table>
<h2>4.1 Visualization of predictions</h2>
<p>The following images visualize the predictions of submission1.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2F985af9f03d54b031cc820f29b63bc891%2Fpredictions.jpg?generation=1686889411081867&alt=media" alt=""></p>
<h1>5. My understanding</h1>
<h2>5.1 Not working well</h2>
<p>I tried several models such as ConvNext, Mask2Former, Swin Transformer + PSPNet, BeiT, and many others, but their effectiveness was not satisfactory for cv and lb. EfficientNet and mobilevit performed well and stable in this competition for me. I also experimented with SegFormer using various versions from b1 to b5. Although it showed good performance in cross-validation (CV), the leaderboard (LB) scores were poor and unstable. Five days before the end of the competition, when I plotted the relationship between CV and LB scores again, I noticed that larger models tended to overfit. They achieved good CV scores but had poor LB scores. After adjusting the layers used, I found that only b3 showed high LB scores, although I suspected it might be overfitting. When I used it in the ensemble, it significantly improved the LB scores, so I decided to include it. This discrepancy may be due to the limited amount of training data. I should have also tried regularization techniques such as dropout, freezing, and other strategies, but I ran out of time. Considering these options might have potentially improved the performance. <br>
※ These are just my guesses.</p>
<h2>5.2 Potential Successes That Were Not Implemented</h2>
<p>Pre-training using IR images: Although it improved the CV performance, it resulted in a decline in LB scores, so it was not implemented.<br>
Including EMNIST (external data) in the training dataset: While it improved the CV performance, it led to a deterioration in LB scores, so it was not implemented.</p>
<h1>6. Acknowledgments</h1>
<p>I could not have achieved these results on my own. I was greatly influenced by those who I have collaborated with in the past, and I am grateful for their contributions. I would also like to express my sincere gratitude to those who have shared their knowledge and insights through previous competitions. Thank you very much.</p>
<p>training code : <a href="https://github.com/chumajin/kaggle-VCID" target="_blank">https://github.com/chumajin/kaggle-VCID</a><br>
inference code : <a href="https://www.kaggle.com/code/chumajin/vcid-6th-place-inference" target="_blank">https://www.kaggle.com/code/chumajin/vcid-6th-place-inference</a></p>
<p><a href="https://www.kaggle.com/sapr3s" target="_blank">@sapr3s</a> Thank you very much! This is just my opinion, but I believe that it is not so much about the rotation itself, but rather aligning the images in the same orientation as during training. The angle of the text during training may have an influence on the results.</p>
<p><a href="https://www.kaggle.com/socated" target="_blank">@socated</a> Congratulations on the 1st place! I'm looking forward to hearing about your solution. Personally, I didn't have such concerns because I didn't observe such tendencies in frag1 to frag3. Instead, I thought that it might be influenced by the density of the text. The graph shows the label density, where the x-axis indicates the ratio of pixels with label 1 to the total number of pixels, and the y-axis represents the optimal threshold when using percentiles. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Faa985dc98f0e0d70f4cf73acdbb567ae%2Fsolution4.jpg?generation=1686807056839803&alt=media" alt=""></p>
<p>From the graph, I confirmed that the optimal threshold changes when the text density varies significantly, but I believed that it would not fluctuate significantly within the same fragment. (I anticipated that the threshold for the private dataset would be slightly lower as the text density tends to be higher compared to the public dataset.)</p>
<p><a href="https://www.kaggle.com/petersk20" target="_blank">@petersk20</a> I didn't anticipate it, but surprisingly, the single model of SegFormer b3 achieved the highest private LB score(0.66) among my submissions. I was not very confident in the single model due to overfitting concerns. Since single models tend to have larger variances due to the limited training data, aligning them with the public LB may not be ideal. However, I knew from conducting simulations by dividing fragment3 that ensembling would reduce the variance and establish a correlation between the public LB and private LB.</p>
<p><a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a> Thank you very much! Yes, that's correct. I stacked the same image multiple times as per the number of in_channels, and I used the original labels without any modifications.</p>
<p><a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> Great confirmation method and result ! Thank you very much. When I saw this image in your post for the first time, I also imagined that this image could be test data ! </p>
<p><a href="https://www.kaggle.com/wangxuc" target="_blank">@wangxuc</a> Yes, perhaps adding rotation in aug could be effective. However, if rotation is not included, I think adding noise in TTA would be necessary.</p>
<p><a href="https://www.kaggle.com/riow1983" target="_blank">@riow1983</a> Thank you for comment !! Yes. Considering that the dataset before submitting is divided into two flags and the fragments are rotated, I can imagine the image mentioned above. Additionally, using the Kaggle technique called LB proving, for example, if the edges of masks for flag A and B have the same pixels, you can set the score to zero to confirm.</p>
<p><a href="https://www.kaggle.com/imeintanis" target="_blank">@imeintanis</a> Thank you my friend!! </p>
<p><a href="https://www.kaggle.com/ohanegby" target="_blank">@ohanegby</a> Thank you very much!!</p>
<p>I have released the training code and inference code publicly!<br>
training code : <a href="https://github.com/chumajin/kaggle-VCID" target="_blank">https://github.com/chumajin/kaggle-VCID</a><br>
inference code(kaggle) : <a href="https://www.kaggle.com/code/chumajin/vcid-6th-place-inference" target="_blank">https://www.kaggle.com/code/chumajin/vcid-6th-place-inference</a></p> |
Vesuvius Challenge - Ink Detection | 73th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Acknowledgement:</h2>
<p>Firstly, I would like to express my gratitude to the organizers of the competition and to Kaggle. Next, I want to thank <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a>; I started the competition based on his/her open-source notebooks, thank you!🎉 This competition is my first, and I am glad that my ranking is in the top 6%, which is a milestone for me. This achievement cannot be separated from my enthusiastic seniors, and I am grateful for their valuable suggestions!</p>
<h2>1. Overview:👀</h2>
<ul>
<li>Ensemble of 8 Unet models with mit, pvt as backbones</li>
<li>TTA in four directions(0°, 90°, 180°, 270°) in the inference stage</li>
</ul>
<h2>2. DataFlow:👇</h2>
<ul>
<li>Select channels 28-34 (11 channels), split them into groups of [28-30, 30-32, 32-34] and concatenate them along the batch_size dimension</li>
<li>Split fragment2 into two sub-fragments A and B; thus we have four parts of the data and can perform 4-fold cross validation</li>
</ul>
<h2>3. Data Augmentation:</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11574777%2F80b2b3e3d5d6a89f77c70a14eb5a75d3%2F1686900312738.jpg?generation=1686900353043727&alt=media" alt=""></p>
<h2>4. Model Selection</h2>
<p>I initially tried using Segformer, but the results of this model were not good at that time, so I did not use Segformer in subsequent experiments. Now it seems like I did something wrong.<br>
I tried many different encoder models, such as resnet, resnext, seresnext, efficientnet, convnext, mit, and pvt, but <strong>only mit and pvt showed superior performance</strong>, so all subsequent experiments are based on these two models. Meanwhile, since overly complex and deep networks may lead to overfitting on the training data, I chose <strong>mit-b2 and pvt-v2-b2</strong> as the encoder models.</p>
<h2>5. Improvement of Network Structure</h2>
<ul>
<li>All shape feature maps output by the encoder will go through an additional SK attention module</li>
<li>Then perform the following operations on the feature map that has undergone SK attention, as referenced by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> .<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11574777%2Fdf703992b4d39cfa8c5758910e4ddceb%2F1686902361015.png?generation=1686902393516010&alt=media" alt=""><br>
The structure of self.weight1 is as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11574777%2Fccf94645bbad0c6de308f39056f3293a%2F1686902566772.png?generation=1686902593485222&alt=media" alt=""></li>
</ul>
<h2>6. Other Training Details</h2>
<ul>
<li>Image Size: 224 * 224</li>
<li>Stride: 112</li>
<li>Loss Function: SoftBCELoss</li>
<li>Optimizer: AdamW</li>
<li>Initial LR: 0.0001</li>
<li>Scheduler: GradualWarmupSchedulerV2 with CosineAnnealingLR</li>
<li>Batch_size: 32</li>
<li>Epoch: 30</li>
</ul>
<h2>7. Inference</h2>
<ul>
<li>Ensemble of 8 Unet models with mit, pvt as backbones</li>
<li>TTA in four directions(0°, 90°, 180°, 270°) in the inference stage</li>
<li>Mit & Pvt ensemble 👉Public LB: 0.721 Private LB: 0.586</li>
<li>Single Pvt 👉Public LB: 0.726 Private LB: 0.591</li>
</ul>
<h2>8. Methods tried but did not work</h2>
<ul>
<li>DiceLoss❌</li>
<li>DeNoise module❌</li>
<li>Image size larger than 224 * 224❌</li>
<li>Adding attention only to the output of the encoder without adding modules, referenced by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> , is not effective❌</li>
<li>The ensemble of mit and pvt seems to have less good results than the individual pvt❌</li>
</ul> |
Vesuvius Challenge - Ink Detection | 7th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, but we primarily evaluated the model with the best EMA pseudo Dice score, in addition to the final model. Usually, the best EMA pseudo Dice model performed the best.</p>
<p>We are very grateful for the organization of this very interesting competition. Sincere thanks to the hosts and the Kaggle team. A big thank you to Yannick and Max, it was lots of fun working with you! For all of us it was the first Kaggle competition. </p>
<h2>Summary</h2>
<p>In this competition submission, we used the <a href="https://github.com/MIC-DKFZ/nnUNet" target="_blank">nnU-Net framework</a>, a recognized powerhouse in the medical imaging domain. The success of nnU-Net is validated by its adoption in winning solutions of 9 out of 10 challenges at <a href="https://arxiv.org/abs/2101.00232" target="_blank">MICCAI 2020</a>, 5 out of 7 at MICCAI 2021, and the first place in the <a href="https://amos22.grand-challenge.org/final-ranking/" target="_blank">AMOS 2022</a> challenge.</p>
<p>To tackle the three-dimensional nature of the given task, we designed a custom network architecture: a 3D Encoder 2D Decoder U-Net model using Squeeze-and-Excitation (SE) blocks within the skip connections. We used fragment-wise normalization and selected 32 slices for training, enabling us to extend the patch size to 32x512x512. We divided each fragment into 25 pieces and trained five folds. We only submitted models that performed well on our validation data. For the final submission, we ensembled the weights of two folds (zero and two) from two respective models. One model was trained with a batch size of 2 and a weight decay of 3e-5, while the other was trained with a batch size of 4 and a weight decay of 1e-4. During test time augmentation, we implemented mirroring along all axes and in-plane rotation of 90°, resulting in a total of eight separate predictions per model for each patch. </p>
<p>For the two submissions we chose two different post-processing techniques. The first approach involved raising the threshold of the softmax outputs from 0.5 to 0.6. As a second step we conducted a connected component analysis to eliminate all instances with a softmax 95th percentile value below 0.8. The second approach involved utilizing an off-the-shelf 2D U-Net model with a patch size of 2048x2048 on the softmax outputs of the first model. The output was resized to 1024x1024 for inference and then scaled up to 2048x2048. The intention behind this step was to capture more structural elements, such as the shape of letters, due to the higher resolution of the input. We regret both since they only improved results on the public test set.</p>
<h2>Model Architecture</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2Fbf19626f416be872fbb0b56b92a6c585%2FModel.png?generation=1686847814703959&alt=media" alt=""></p>
<p>As been mentioned, we chose a 3D Encoder 2D Decoder U-Net model using SE blocks within the skip connections. Therefore, the selected slices were still seen as a 3D input volume by our network. After passing through the encoder, the features were mapped to 2D on all levels (i.e., skip connections) using a specific weighting. One unique aspect of the network to highlight is that the encoder contained four convolutions in each stage to process the difficult 3D input, whereas the decoder only had two convolutional blocks.</p>
<p>The mapping was initially performed using a simple average operation but was later refined with the use of Squeeze-and-Excitation. However, instead of applying the SE on the channel dimension — as is usually done to highlight important channels — we applied one SE Block per level (i.e., skip) to all channels, but on the x-dimension. This results in a weighting of the slices in feature space, so when aggregating with the average operation later, each slice has a different contribution.</p>
<h2>Preprocessing</h2>
<p>In the preprocessing stage, we cropped each fragment into 25 parts and ensured they contain an equal amount of data points (area labeled as foreground in the mask.png). This process was performed to create five folds for training.</p>
<p>For the selection of the 32 slices, we calculated the intensity distributions for each individual fragment. From these distributions, we determined the minimum and maximum values and calculated the middle point between them. We then cropped 32 slices around this chosen central slice. The following plot shows the intensity distribution for the individual fragments. The vertical line in the plot represents the midpoint between the maximum and minimum intensity values.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2F1fd4a8e4dbccd46f59d3ca92b19fb74d%2Fintesity_dist.png?generation=1686847846933995&alt=media" alt=""></p>
<p>To further preprocess the data, we applied z-scoring to normalize the intensity values. Additionally, we clipped the intensity values at the 0.5 and 99.5 percentiles to remove extreme outliers. Finally, we performed normalization on each individual fragment to ensure consistency across the dataset.</p>
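<p>A minimal sketch of this normalization step (a reconstruction of the description above, not the exact nnU-Net code):</p>
<pre><code>import numpy as np

def normalize_fragment(volume):
    # clip extreme outliers at the 0.5 / 99.5 percentiles,
    # then z-score the whole fragment (per-fragment normalization)
    lo, hi = np.percentile(volume, [0.5, 99.5])
    volume = np.clip(volume, lo, hi)
    return (volume - volume.mean()) / (volume.std() + 1e-8)
</code></pre>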
<h2>Training</h2>
<p>Here are some insights into our training pipeline.</p>
<h3>Augmentation</h3>
<p>For the most part, we utilized the out-of-the-box augmentation techniques provided by nnU-Net, a framework specifically designed for medical image segmentation. These techniques formed the foundation of our data augmentation pipeline. However, we made certain modifications and additions to tailor the augmentation process to our specific task and data characteristics:</p>
<ul>
<li>Rotation: We performed rotations only in the plane, meaning we applied rotations along the y and z axes. Out-of-plane rotations were considered as a measure to ensure stability but were not implemented.</li>
<li>Scaling: We introduced scaling augmentation, allowing the data to be randomly scaled within a certain range. This helped to increase the diversity of object sizes in the training data.</li>
<li>Gaussian Noise: We added Gaussian noise to the data, which helps to simulate realistic variations in image acquisition and improve the model's ability to handle noise.</li>
<li>Gaussian Blur: We applied Gaussian blur to the data, with varying levels of blurring intensity. This transformation aimed to capture the variations in image quality that can occur in different imaging settings.</li>
<li>Brightness and Contrast: We incorporated brightness and contrast augmentation to simulate variations in lighting conditions.</li>
<li>Simulate Low Resolution: We introduced a transformation to simulate low-resolution imaging by randomly zooming the data within a specified range. This augmentation aimed to make the model more robust to lower resolution images.</li>
<li>Gamma Transformation: We applied gamma transformations to the data, which adjusted the pixel intensities to enhance or reduce image contrast. This augmentation technique helps the model adapt to different contrast levels in input images.</li>
<li>Mirror Transform: We employed mirroring along specified axes to introduce further variations in object orientations and appearances.</li>
</ul>
<h3>Training Curves for Submission Folds</h3>
<p>Batch size of 4 and a weight decay of 1e-4. Fold 0.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2Fb0086c8f134adc9b58a78097cec48151%2Fprogress_0_0.png?generation=1686847908240065&alt=media" alt=""><br>
Batch size of 4 and a weight decay of 1e-4. Fold 2.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2F5afa0a5b3cc9890b40155af6265c9617%2Fprogress_0_2.png?generation=1686847917423689&alt=media" alt=""><br>
Batch size of 2 and a weight decay of 3e-5. Fold 0.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2Facd870b03c91b9f4f6875b9d01a7858f%2Fprogress_1_0.png?generation=1686847926934043&alt=media" alt=""><br>
Batch size of 2 and a weight decay of 3e-5. Fold 2.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6289140%2F7cd38547bc3d2d76aa0050889c09ed2a%2Fprogress_1_2.png?generation=1686847935394605&alt=media" alt=""></p>
<h2>Inference</h2>
<p>We made the decision to submit only the ensemble of fold 0 and fold 2 models due to their superior performance on the validation data compared to the other folds. The difference in Dice score between these two folds and the rest of the folds was substantial, with a margin of ~0.1. As a result, we deemed the ensemble of fold 0 and fold 2 models to be the most reliable and effective for our final submission (we fools).</p>
<p>Additionally, we made the decision to incorporate test time augmentation (TTA) techniques during the inference process. TTA involved applying mirroring along all axes and in-plane rotation of 90° to each patch. By performing these augmentations, we generated a total of eight separate predictions per model for each patch.</p>
<h3>Post Processing</h3>
<p>In a moment of desperate determination to achieve better results on the public test set, one audacious team member decided to dive headfirst into the realm of advanced post-processing. This daring soul concocted a daring plan: raise the threshold of the softmax outputs from a mundane 0.5 to a daring 0.6. But that was just the beginning!</p>
<p>Undeterred by caution, the same intrepid individual embarked on a quest to conduct a connected component analysis, mercilessly discarding all instances with a lowly softmax 95th percentile value below the illustrious threshold of 0.8.</p>
<p>With fervor and a touch of madness, this brave adventurer tested countless combinations of thresholds, determined to find the golden ticket to enhanced validation scores across all folds. A relentless pursuit of validation improvement that knew no bounds.</p>
<p>On the public test set, this fearless undertaking delivered a substantial boost of 0.05 dice points, raising hopes and spirits across the team. The unexpected improvement injected a renewed sense of excitement and optimism.</p>
<p>However, as fate would have it, on the ultimate battlefield of the 50% final, the outcome took a peculiar twist. The gains dwindled ever so slightly, with a meager decrease of -0.002 dice points. Though the difference may seem minuscule, in the realm of fierce competition, every decimal point counts.</p>
<p>Sorry guys.</p>
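<p>For reference, a minimal sketch of the connected-component filter described above (a reconstruction, not the team's code): threshold the softmax at 0.6, then drop every component whose 95th-percentile softmax value is below 0.8.</p>
<pre><code>import numpy as np
from scipy import ndimage

def postprocess(softmax, thr=0.6, pctl=95, min_pctl_value=0.8):
    # softmax: (H, W) foreground probabilities
    binary = softmax > thr
    labels, n = ndimage.label(binary)
    keep = np.zeros_like(binary)
    for i in range(1, n + 1):
        component = labels == i
        if np.percentile(softmax[component], pctl) >= min_pctl_value:
            keep |= component
    return keep
</code></pre>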
<h3>2D Unet Refinement</h3>
<p>The second approach involved employing a 2D U-Net model with a patch size of 2048x2048 on the softmax outputs generated by the first model. Subsequently, the model's output was resized to 1024x1024 for inference purposes and then scaled up to the original resolution of 2048x2048. The rationale behind this strategy was to leverage the higher resolution input data to capture finer structural details, including the intricate shapes of letters. The training data for this model was derived from inferences made by our various trained models on the original training data.</p>
<p>Whose idea was this?</p>
<p>Sorry again.</p>
<h2>Preliminary Last Words</h2>
<p>More details and code will follow, cheers!</p> |
Vesuvius Challenge - Ink Detection | 8th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Team <a href="https://www.kaggle.com/renman" target="_blank">@renman</a> and <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> </p>
<h2>Acknowledgement:</h2>
<p>First of all we want to thank Kaggle for organizing such a great competition. The host has been very helpful and responsive and made this competition such a great experience for us!</p>
<h2>1. Summary of solution:</h2>
<ul>
<li>Ensemble of 5 Unet models with 3D seresnet101, 3D Resnet34, and Segformer-b3 as backbones</li>
<li>Post-processing methods such as masking out edge pixels for inference and applying rot90 and "channels" TTA</li>
<li>Our final two submissions including one with a fixed threshold (0.55) and the other with a dynamic threshold based on pixel percentile (top 3% pixels)</li>
</ul>
<h2>2. Dataset:</h2>
<h3>Preprocessing:</h3>
<ul>
<li>Set maximum pixel value to 0.78 (i.e. if pixel value > 0.78, set it to 0.78)</li>
</ul>
<h3>CV Strategies:</h3>
<ul>
<li>We tried two different CV strategies:<ul>
<li>3-fold CV: Split into fragment 1,2,3</li>
<li>4-fold CV: split fragment 2 into top and bottom half (“2a” and “2b”)</li></ul></li>
</ul>
<h3>Data Augmentation:</h3>
<ul>
<li>Image Resizing and Sampling:<ul>
<li>We augment the original dataset with three different resolution categories (1x, 0.75x, 0.5x)</li></ul></li>
<li>Compression in z dimension (rate=+-0.2)</li>
<li>3d Rotation (rate=+-5°)</li>
<li>Channel Dropout</li>
<li>Rotate and Flip</li>
<li>Albumentation:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F461631%2F6f55b04646dc7cfdea4163f03aa4fa91%2FScreen%20Shot%202023-06-15%20at%208.43.22%20PM.png?generation=1686834309651934&alt=media" alt=""></li>
</ul>
<h2>3. Models:</h2>
<ul>
<li><h3>3D SEResnet101:</h3>
<ul>
<li>Change to the architecture:<ul>
<li>Add Squeeze and Excitation Layer and Dilated Convolution to the encoder</li>
<li>Use BasicBlock instead of Bottleneck</li></ul></li>
<li>No pretrained model is used. Model is trained from scratch with a 2 stage process:<ul>
<li>The first stage is to train the model with image size of 96 which is easier to train</li>
<li>The second stage is to continue training the model with bigger image size of 192 and 256</li></ul></li>
<li>Fixed stride of 112</li>
<li>Randomly pick 20 channels between 15 and 40 as input to the model:<ul>
<li>At inference time, slide through channels 15 to 40 with a moving window of 20 and stride of 5 and take average (i.e. "channel" TTA)</li></ul></li>
<li>BCE + Dice Loss</li>
<li>Exclude areas outside binary mask<br>
<br></li></ul></li>
<li><h3>Segformer-b3:</h3>
<ul>
<li>Use Segformer-b3 with 3 channels input. Model pretrained on imagenet</li>
<li>First method:<ul>
<li>Select channels 25-36 (11 channels), split them into groups of [25-27, 28-30, 31-33, 34-36] and feed them into the model</li>
<li>Merge the feature maps in the channel dimension with "Attention Pooling":<ul>
<li>Refer to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s excellent elaborate in this thread: <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972" target="_blank">https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972</a></li></ul></li></ul></li>
<li>Second Method:<ul>
<li>Select channels 29-34 (6 channels). Split them into two groups of [29-31, 32-34]</li>
<li>Feed each group into the same model and average the logit outputs</li></ul></li>
<li>Image size 224</li>
<li>BCE Loss</li>
<li>Exclude areas outside binary mask<br>
<br></li></ul></li>
<li><h3>3D Resnet34:</h3>
<ul>
<li>Pick channel 22-34 (18 channels) as input to the model</li>
<li>Image size 224 and 192</li>
<li>BCE + Dice Loss</li>
<li>Exclude areas outside binary mask</li></ul></li>
</ul>
<h2>4. Inference:</h2>
<ul>
<li>Fp16</li>
<li>Ignore areas outside binary mask</li>
<li>Rotation TTA (90 degree rotation)</li>
<li>Channels TTA:<ul>
<li>At inference time, slide through channels 15 to 40 with a moving window of 20 and stride of 5 and take average (3D SEResnet101 only)</li></ul></li>
<li>Mask out edge pixels:<ul>
<li>Only generate predictions in the “middle part” of the output</li></ul></li>
<li>Thresholding:<ul>
<li>One submission uses a fixed threshold of 0.55</li>
<li>The other submission uses a dynamic threshold based on pixel percentile (top 3% of pixels), i.e. sort the predicted pixel probabilities and pick a threshold such that the top 3% of pixels are selected (see the sketch after this list)</li></ul></li>
</ul>
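<p>Two of the inference-time steps above, the sliding-window "channels" TTA and the percentile-based dynamic threshold, can be sketched roughly as follows. This is a simplified reconstruction rather than the team's inference code: it assumes a <code>predict(volume)</code> function that returns a 2D probability map for a stack of slices.</p>
<pre><code>import numpy as np

def channels_tta(volume, predict, start=15, end=40, window=20, stride=5):
    """Average predictions over sliding windows of channels [start, end)."""
    preds = []
    for lo in range(start, end - window + 1, stride):
        preds.append(predict(volume[lo:lo + window]))
    return np.mean(preds, axis=0)

def percentile_threshold(prob_map, top_percent=3.0):
    """Dynamic threshold: mark the top `top_percent` percent of pixels as ink."""
    thr = np.percentile(prob_map, 100.0 - top_percent)
    return (prob_map > thr).astype(np.uint8)

# fixed-threshold alternative used in the other submission:
# mask = (prob_map > 0.55).astype(np.uint8)
</code></pre>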
<h2>5. Final Ensemble:</h2>
<ul>
<li>The final ensemble consists of a weighted average of the 5 models below:</li>
</ul>
<table>
<thead>
<tr>
<th>Models</th>
<th>Image size</th>
<th>Kfold</th>
<th>Output size</th>
<th>Stride</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>3D Seresnet101</td>
<td>256</td>
<td>Single Fold validated on fragment "2a"</td>
<td>192</td>
<td>192 // 4</td>
<td>0.76</td>
<td>0.61</td>
</tr>
<tr>
<td>Segformer-b3 with attention pooling</td>
<td>224</td>
<td>4 folds</td>
<td>160</td>
<td>160 // 2</td>
<td>0.7</td>
<td>0.6</td>
</tr>
<tr>
<td>Segformer-b3 with averaged logit</td>
<td>224</td>
<td>3 folds</td>
<td>176</td>
<td>176 // 2</td>
<td>0.66</td>
<td>0.56</td>
</tr>
<tr>
<td>3D Resnet34</td>
<td>224</td>
<td>4 folds</td>
<td>192</td>
<td>192 // 2</td>
<td>0.66</td>
<td>0.58</td>
</tr>
<tr>
<td>3D Resnet34</td>
<td>192</td>
<td>3 folds</td>
<td>160</td>
<td>160 // 2</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
<ul>
<li>Our final 2 submissions include one with a fixed threshold (public LB: 0.79, private LB: 0.65) and one with a pixel-percentile threshold (public LB: 0.79, private LB: 0.63)</li>
</ul>
<h2>6. Methods tried but did not work:</h2>
<ul>
<li>Tried to add a Convolutional Block Attention Module (CBAM) to the decoder of the 3D Resnet, but CV dropped<ul>
<li>CBAM paper: <a href="https://arxiv.org/abs/1807.06521" target="_blank">https://arxiv.org/abs/1807.06521</a></li></ul></li>
<li>Used the IR image for an additional segmentation head, but it did not lead to any improvement in CV</li>
<li>Mixup and label noise for data augmentation</li>
</ul>
<p>Update: The code and detailed instructions for replicating our models can be found in the following publicly available github repo:</p>
<p><a href="https://github.com/flyyufelix/vesuvius_challenge_8th_place_solution" target="_blank">https://github.com/flyyufelix/vesuvius_challenge_8th_place_solution</a></p> |
Vesuvius Challenge - Ink Detection | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>ensemble = 2-fold x 2-model</p>
<p>[2-model]</p>
<ul>
<li>resnet34d-unet (crop=256. depth=32 slices)</li>
<li>pvtv2-b3-daformer (crop=384. depth=16 slices)</li>
</ul>
<p>[2-fold]</p>
<ul>
<li>validation fragment-1 and fragment-2aa </li>
</ul>
<hr>
<p>training code and solution presentation (slide and video):<br>
<a href="https://github.com/hengck23/solution-vesuvius-challenge-ink-detection" target="_blank">https://github.com/hengck23/solution-vesuvius-challenge-ink-detection</a></p>
<p>the public submission notebook (clean version) is at:<br>
<a href="https://www.kaggle.com/code/hengck23/9th-place-final-ensemble-v1?scriptVersionId=136499959" target="_blank">https://www.kaggle.com/code/hengck23/9th-place-final-ensemble-v1?scriptVersionId=136499959</a></p>
<hr>
<p>more solution discussion : <br>
<a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972" target="_blank">https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/407972</a></p>
<p>It is similar to <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a>'s 11th place solution:<br>
<a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417281" target="_blank">https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417281</a></p>
<p><a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a> <br>
Thanks for the post.</p>
<p>Good luck and all the best to your work!<br>
<a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/leaderboard" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/leaderboard</a></p>
<p>Determining the correct threshold is important in this competition.<br>
We know from the data description page that the test fragments (private and public) are similar to train fragment 3.</p>
<p>hence i:</p>
<ul>
<li>make a model with train = fragment1,2. validation = fragment 3</li>
<li>split fragment 3 into two (10%/90% area, to simulate public/private data set)</li>
</ul>
<p>Then I checked whether the optimal thresholds of the two fragments are the same; obviously, they are not.</p>
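<p>A rough sketch of this check (illustrative helper names; it assumes the out-of-fold probability map <code>prob</code>, the ground-truth ink mask <code>label</code> and the competition metric <code>score_fn</code> for fragment 3 are already available):</p>
<pre><code>import numpy as np

def best_threshold(prob, label, score_fn, thresholds=np.arange(0.30, 0.96, 0.05)):
    scores = [(t, score_fn(prob > t, label)) for t in thresholds]
    return max(scores, key=lambda x: x[1])

# split fragment 3 into a small strip (~10% of the area, simulating the public
# test set) and the remaining ~90% (simulating the private test set)
cut = int(prob.shape[0] * 0.10)
print("10% strip:", best_threshold(prob[:cut], label[:cut], score_fn))
print("90% strip:", best_threshold(prob[cut:], label[cut:], score_fn))
</code></pre>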
<hr>
<p>The best CV on the whole of fragment 3 is about 0.66.<br>
So if you get a higher score on the smaller 10% segment, you get a lower score on the larger 90% segment.</p>
<p>This means that you should not choose the highest public LB threshold.</p> |
VinBigData Chest X-ray Abnormalities Detection | 12th place solution[1x2080Ti] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all, and thanks to the Vingroup Big Data Institute for this interesting competition.<br>
This is our team's solution.</p>
<ul>
<li>Brief summary:<ul>
<li>2 classes classifier</li>
<li>Augmentations: Oversampling, Light Albu, Crop bboxes into Healthy Background.</li>
<li>Object detection: <ul>
<li>Multi heads - VFNetX</li>
<li>Yolov5l (Ensemble: fused boxes+expert models+hard samples)</li>
<li>CenterNet2 DLA</li>
<li>PAA with NFNet</li></ul></li>
<li>Preprocessing boxes: WBF(IOU=0.3)</li>
<li>Post-processing: Modified WBF</li>
<li>Ensemble</li></ul></li>
</ul> |
VinBigData Chest X-ray Abnormalities Detection | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><img src="https://i.postimg.cc/zvD77bts/2x-Page-1-7.png" alt=""></p>
<p>Sorry for being so late. We were busy in preparing the codes.</p>
<p><strong>Thanks to Kaggle and hosts for this very interesting competition with a too challenging dataset. This has been a great collaborative effort. Please give your upvotes to <a href="https://www.kaggle.com/fatihozturk" target="_blank">@fatihozturk</a> <a href="https://www.kaggle.com/socom20" target="_blank">@socom20</a> <a href="https://www.kaggle.com/avsanjay" target="_blank">@avsanjay</a> Congrats to The Winners.</strong></p>
<h3>TLDR</h3>
<p><a href="https://www.kaggle.com/fatihozturk" target="_blank">@fatihozturk</a> explained in his <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229724" target="_blank">post</a> how his path through this competition went; this post is intended as an overall view of our final solution, which achieved 0.354/0.314 Public LB/Private LB. It is worth mentioning that we also had a better solution with 0.330/0.321 Public LB/Private LB (which unfortunately we didn't select). As you all already know, the main magic we used for this competition was our ensembling procedure, but there were also lots of other important findings that we mention in this post.</p>
<p><strong>CV strategy</strong>: This was our main challenge; we didn't have a unique validation scheme. Since we started the competition as different teams, we were all using different CV splits to train our models and to build our separate best ensembles. To overcome this difficulty we finally ended up using the public LB as an extra validation dataset. We felt that this strategy was a bit risky, so we also retrained some of our best models using a common validation split and then ensembled them (this last approach didn't have the best score on the private LB by itself). We think that our last ensemble, using the public LB as a validation source, helped our final solution better fit the consensus method used to build the test dataset.</p>
<h3>Our Validation Strategy</h3>
<p>Our strategy for the final submission can be divided into 3 stages :</p>
<ul>
<li>Fully Validated Stage</li>
<li>Partially Validated Stage</li>
<li>Not validated Stage (Just validated using the Public LB)</li>
</ul>
<h3>Models, we used for Ensembling</h3>
<ul>
<li><p><strong>Fully Validated Stage:</strong></p>
<ul>
<li>Detectron2 Resnet101 (notebook by <a href="https://www.kaggle.com/corochann" target="_blank">@corochann</a>, Thanks)</li>
<li>YoloV5</li>
<li>EffDetD2</li></ul></li>
<li><p><strong>Partially Validated Stage:</strong></p>
<ul>
<li>YoloV5 5Folds</li></ul></li>
<li><p><strong>Not Validated Stage:</strong></p>
<ul>
<li>2 EffdetD2 model </li>
<li>3 YoloV5 (1x w/ TTA , 2x w/o TTA)</li>
<li>16 Classes Yolo model</li>
<li>5 Folds YoloV5 by <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a></li>
<li>New Anchor Yolo</li>
<li>Detectron2 Resnet50 (notebook by <a href="https://www.kaggle.com/corochann" target="_blank">@corochann</a>, Thanks)</li>
<li>YoloV5 <a href="https://www.kaggle.com/nxhong93" target="_blank">@nxhong93</a> (<a href="https://www.kaggle.com/nxhong93/yolov5-chest-512" target="_blank">https://www.kaggle.com/nxhong93/yolov5-chest-512</a>)</li>
<li>Yolov5 with image size (640)</li></ul></li>
</ul>
<h3>Ensembling Strategy</h3>
<p>To ensemble all the partial models we mainly used <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a>'s ensembling repository <a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">https://github.com/ZFTurbo/Weighted-Boxes-Fusion</a>. Our ensemble technic is shown in the following figure:</p>
<p><img src="https://i.postimg.cc/NjfVC2Mv/2x-Page-1-6.png" alt="enter image description here"></p>
<p>As you can see, at the end of the Not Validated Stage we use WBF+p_sum. This blending method is a variant of WBF proposed by <a href="https://www.kaggle.com/socom20" target="_blank">@socom20</a> and was intended to simulate the consensus of radiologists used in this particular test dataset. Using WBF+p_sum improved our Not Validated stage from 0.319 to 0.331 on the Public LB.</p>
<p><strong>Modified WBF:</strong> It has more flexibility than WBF; it has 4 bbox blending techniques, and we found that only 2 of them were useful in this competition:</p>
<ul>
<li>"p_det_weight_pmean" has a behaviour similar to WBF: it finds the set of bboxes with iou > thr, and then weights them using their detection probabilities (detection confidences). The new p_det (detection confidence) is an <strong>average</strong> over the detection probabilities of the detections with iou > thr.</li>
<li>"p_det_weight_psum" does the same with the bboxes. In this case, the new p_det is a <strong>sum</strong> over the detection probabilities of the bboxes with iou > thr.<br>
Because of the sum of the p_dets, "p_det_weight_psum" needs a final normalization step. The idea behind psum is to try to replicate "the consensus of radiologists" using the partial models: when adding the probabilities of detection, we weigh more heavily those boxes on which more models/radiologists agree (a toy sketch of both rules follows this list).</li>
</ul>
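<p>A toy sketch of the two confidence-fusion rules described above (this only illustrates the idea and is not the code from the actual repository; it assumes the overlapping boxes of one cluster have already been grouped by IoU > thr):</p>
<pre><code>import numpy as np

def fuse_cluster(boxes, scores, mode="p_det_weight_psum"):
    """boxes: (n, 4) overlapping cluster, scores: (n,) detection confidences."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    # coordinates are blended, weighted by the detection confidences
    fused_box = (boxes * scores[:, None]).sum(axis=0) / scores.sum()
    if mode == "p_det_weight_pmean":   # WBF-like: average of the confidences
        fused_score = scores.mean()
    else:                              # p_sum: sum of the confidences,
        fused_score = scores.sum()     # normalized over all clusters afterwards
    return fused_box, fused_score
</code></pre>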
<p>In our final submission, the Not Validated stage had a high weighting, which led to some overfitting: our final ensemble got 0.354/0.314 Public/Private. If we had reduced the weighting of this Not Validated stage, the LB score would have been 0.330/0.321 Public/Private, as we mentioned earlier in the post.</p>
<h3>Some Interesting Findings</h3>
<ul>
<li>After spending more time understanding the competition metric, we realized that there is No Penalty For Adding More Bbox, as <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> explains clearly here: <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229637" target="_blank">https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229637</a>. Having more confident boxes is good but, at the same time, having many low-confidence bboxes improves the Recall and can improve the mAP as well. We tested this approach using two models: a Model A with a good mAP, and a Model B with a good Recall for all the classes. The following figure shows on the left the Precision vs Recall curves of Model A. It can be seen that many class tails end with a maximum recall = 0.7. We then added to Model A all the bboxes predicted by Model B which do not overlap with any bbox predicted by Model A; the result is in the plot on the right (a small sketch of this step follows the figure below). It can be seen that the main shape of the PvsR curves is still the same, but the tails now reach over recall=0.8. We didn't use this approach in our final model because the final ensemble already had a very good recall for all the classes.</li>
</ul>
<p><img src="https://i.postimg.cc/BnL1z77F/photo-2021-03-31-18-37-48.jpg" alt="enter image description here"></p>
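<p>A minimal sketch of the box-appending experiment mentioned above, assuming per-image box arrays in x1,y1,x2,y2 format (the helper names and the optional down-weighting factor are illustrative, not the exact code the team used):</p>
<pre><code>import numpy as np

def iou_matrix(a, b):
    """Pairwise IoU between boxes a (n,4) and b (m,4)."""
    tl = np.maximum(a[:, None, :2], b[None, :, :2])
    br = np.minimum(a[:, None, 2:], b[None, :, 2:])
    inter = np.clip(br - tl, 0, None).prod(axis=2)
    area_a = (a[:, 2:] - a[:, :2]).prod(axis=1)
    area_b = (b[:, 2:] - b[:, :2]).prod(axis=1)
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def add_high_recall_boxes(boxes_a, scores_a, boxes_b, scores_b, iou_thr=0.3, damp=1.0):
    """Append Model B boxes that do not overlap any Model A box.

    damp < 1 would push the appended boxes to the tail of the confidence ranking."""
    if len(boxes_a) == 0:
        return boxes_b, scores_b * damp
    keep = iou_matrix(boxes_b, boxes_a).max(axis=1) < iou_thr
    boxes = np.concatenate([boxes_a, boxes_b[keep]])
    scores = np.concatenate([scores_a, scores_b[keep] * damp])
    return boxes, scores
</code></pre>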
<ul>
<li>We spent the last days of the competition analyzing the predictions made by the ensemble, both by submitting and by visually inspecting the predicted bboxes. We found that our improved submissions were indeed improving for almost all classes, including the easy and the rare ones.</li>
</ul>
<p><img src="https://i.ibb.co/k0NRCXJ/photo-2021-04-01-00-30-35.jpg" alt="enter image description here"></p>
<h3>Things didn't work</h3>
<ul>
<li>Multi-Label Classifier: We trained a multilabel classifier using as output the 14 classes. We used sigmoid as activation because we needed the probability of each disease and many images have more than one disease. The idea was to remove from the ensemble all the bboxes not predicted by this classifier model, This approach led to a reduction in performance. We also tried to switch the pooling layer and to use PCAM pooling (it is an attention pooling layer used in the <a href="https://github.com/jfhealthcare/Chexpert" target="_blank">Top1 Solution of CheXpert</a>), but it didn't work either. </li>
<li>Bbox Filter: We trained an EfficientNetB6 which classifies whether the detected bboxes belong to a selected class or not just by seeing the crop selected by the ensemble. The model couldn't understand the difference between each disease, perhaps it needed more information on the whole image.</li>
<li>Usage of External Dataset: We tried to use the NIH dataset. We first trained a backbone as a Multi-Label Classifier using NIH classes, and then we transfer its weights to the detector's backbone. Finally, we trained only the detection heads using this competition's dataset. We couldn't detect any appreciable improvement in the particular detector score.</li>
<li>ClassWise Model: we trained different models, one for each detection class. We used clsX + cls14 as each model's dataset. In this case, each model had its own set of anchors prepared for detecting the class of interest. Some classes improved their CV performance but others didn't. Joining all the models we got a very good model; unfortunately, the validation split used for this model was different from the others, so it was unreliable to add it to the Validated Stage. We decided to include it in the Partially Validated Stage instead, but it still didn't give any improvement. We had a final ensemble using this model, which wasn't selected in the end, but it had the same score as our winning submission.</li>
</ul>
<h3>Models we choose.</h3>
<ul>
<li>Best at local validation (around 0.47+ mAP on CV), Public: 0.300, private: 0.287</li>
<li>Best at LB. Public: 0.354, private: 0.314</li>
<li>Our best submission in private was 0.321 which was 0.330 in Public</li>
</ul>
<h3>Hardware we used</h3>
<ul>
<li>4x Quadro GV100</li>
<li>4x Geforce GTX 1080</li>
<li>4x Titan X</li>
<li>Ryzen 9 3950x</li>
<li>Nvidia RTX 3080</li>
<li>Quadro RTX 6000<br>
EDIT 1: The solution is reproducible on a single GPU.</li>
</ul>
<p>Do you need the final prediction file, or something else? I am confused.</p>
VinBigData Chest X-ray Abnormalities Detection | 21th place solution (36th public) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot :)</p>
<pre><code>import pandas as pd
from tqdm import tqdm
from ensemble_boxes import weighted_boxes_fusion

# sub_yolo, sub1, sub2 are the per-model submission dataframes;
# getitem() is a helper (defined elsewhere) that parses the PredictionString
# of an image_id into (boxes, scores, labels)
output = pd.DataFrame(columns=["image_id", "PredictionString"])
for image_id in tqdm(image_ids):
    boxes_list = []
    scores_list = []
    labels_list = []
    # For each model, get the PredictionString of image_id
    boxes, scores, labels = getitem(sub_yolo, image_id)
    boxes_list.append(boxes)
    scores_list.append(scores)
    labels_list.append(labels)
    boxes, scores, labels = getitem(sub1, image_id)
    boxes_list.append(boxes)
    scores_list.append(scores)
    labels_list.append(labels)
    boxes, scores, labels = getitem(sub2, image_id)
    boxes_list.append(boxes)
    scores_list.append(scores)
    labels_list.append(labels)
    # weights = [1, 4]     # 0.279
    # weights = [1, 3.5]   # 0.285
    # weights = [1, 3]     # 0.286
    # weights = [1, 2.5]   # 0.270
    # weights = [1, 2]     # 0.272
    # weights = [1, 1]     # 0.241
    # weights = [1, 0]     # 0.21
    # weights = [0, 1]     # 0.247
    # weights = [2, 1, 3]  # 0.294
    # weights = [4, 1, 3]  # 0.289
    # weights = [3, 1]     # 0.272
    weights = [3, 1, 2]    # 0.291
    boxes, scores, labels = weighted_boxes_fusion(
        boxes_list, scores_list, labels_list, weights, 0.4, 0.0001)
</code></pre>
<p>This is my code for the WBF ensemble. In the end I chose the weights with the best public score.</p>
<p>Thank you , and Congrats on your Silver Medal too :).</p>
<p>Thank you , and Congrats on your Medal too :).</p>
<p>Thanks a lot 👍, And Congrats on your Medal too 😃.<br>
Your kernel helped me a lot.</p>
<p>Thank you :)</p>
<p>This was my 7th competition, and the goal was to get a Silver Medal.<br>
But after finishing another competition, I joined this competition late.<br>
So I first studied the other competitors' public solutions.<br>
Detectron2 by <a href="https://www.kaggle.com/corochann/vinbigdata-detectron2-train" target="_blank">@corochann</a> and yolo5 by <a href="https://www.kaggle.com/awsaf49/vinbigdata-cxr-ad-yolov5-14-class-train" target="_blank">@Awsaf</a> attracted my eye, and I also read <a href="https://www.kaggle.com/dennywangdev" target="_blank">@Denny Wang</a>'s <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/219925#1220224" target="_blank">achievement</a>.<br>
But the CV score [mAP@0.5:0.95] didn't break 0.16, and the public LB couldn't reach 0.18.<br>
Luckily, I implemented an ensemble strategy with <a href="https://www.kaggle.com/zfturbo" target="_blank">@ZFTurbo</a>'s WBF, and the public LB score reached 0.272.<br>
I tried to implement 5 folds and TTA, but gave up because of my other work.<br>
Last night I ensembled the public results with WBF, and the result surprised me: 0.294 (0.282).<br>
This result was enough for me to get a Silver Medal.<br>
I think there is still a long way between the GMs/Masters and me.<br>
I hope to become a Master this year, so I must <strong>study</strong> hard.<br>
Thanks to the Competition Host, and congratulations to all winners!<br>
Special thanks to <a href="https://www.kaggle.com/corochann" target="_blank">@corochann</a> <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> <a href="https://www.kaggle.com/dennywangdev" target="_blank">@dennywangdev</a> <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> </p>
<p>These are my submission results.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Public/ Private</th>
<th>WBF weight</th>
<th>Public/Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>Yolo5</td>
<td>0.243/0.235</td>
<td>[3, 1]</td>
<td>0.272/0.256</td>
</tr>
<tr>
<td>Detectron2</td>
<td>0.221/0.232</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Yolo5</td>
<td>0.243/0.235</td>
<td>[2, 1, 3]</td>
<td><strong>0.294/0.282</strong></td>
</tr>
<tr>
<td>Detectron2</td>
<td>0.221/0.232</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Public</td>
<td>0.246/0.226</td>
<td></td>
<td></td>
</tr>
</tbody>
</table> |
VinBigData Chest X-ray Abnormalities Detection | 2nd place solution (quick overview) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I still use this code:<br>
<a href="https://github.com/ZFTurbo/Keras-RetinaNet-for-Open-Images-Challenge-2018" target="_blank">https://github.com/ZFTurbo/Keras-RetinaNet-for-Open-Images-Challenge-2018</a></p>
<p>It's outdated a little bit )</p>
<p>Yes, just use images with bboxes marked by the given radiologist, discarding all other radiologists for this image. I don't remember exactly, but probably I also used all empty images.</p>
<p>Class 12: 0.037 -> 0.013 (Note: as I remember it contains not only public solution, but ensembled with our models as well)</p>
<p>Thanks! As I remember, public submissions catch the rare class 12 [Pneumothorax] very well. I think there are only a few (2-5?) boxes of the Pneumothorax class on public. Looks like it can be the main source of the shake-up. )</p>
<p>You can see it in my table (it was even better than our validation):<br>
<code>Class 12: 0.037 (mAP LB: 0.555 Valid: 0.388)</code></p>
<h2>Part 1: Initial models</h2>
<p>At first I started to try different OD models for this dataset. I tried: EffDet (B5), CenterNet (HourGlass), Yolo_v5, Retinanet (RsNet50, ResNet101, ResNet152), Faster RCNN</p>
<p>Main initial tricks:</p>
<ol>
<li>To train models I used <a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">WBF</a> on train.csv with IoU: 0.4. I usually used this file with most of the models for training and for validation (a small sketch of this step is shown after the list).</li>
<li>Using validation I found out that it's critical to use lower IoU values in the NMS which filters boxes at the output of the models. So I used NMS IoU: 0.25 for Yolo_v5 and NMS IoU: 0.3 for RetinaNet. It looks like it was better for high-confidence boxes to consume more boxes around them.</li>
<li>I didn’t want to change anchors for RetinaNet so I created version of train.csv with boxes which is close to square, because default Retina doesn’t like elongated boxes.</li>
</ol>
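<p>A small sketch of the first trick: fusing the per-radiologist ground-truth boxes of each image with WBF via the ensemble-boxes package. Note that the package expects coordinates normalized to [0, 1], and train.csv itself does not contain the image width/height, so those are assumed to come from a separate metadata file:</p>
<pre><code>import pandas as pd
from ensemble_boxes import weighted_boxes_fusion

train = pd.read_csv("train.csv")       # image_id, class_id, rad_id, x_min, y_min, x_max, y_max
meta = pd.read_csv("image_sizes.csv")  # assumed metadata file with image_id, width, height
train = train.merge(meta, on="image_id")

fused = []
for image_id, g in train[train.class_id != 14].groupby("image_id"):
    w, h = g.width.iloc[0], g.height.iloc[0]
    boxes = (g[["x_min", "y_min", "x_max", "y_max"]].values / [w, h, w, h]).tolist()
    scores = [1.0] * len(boxes)        # every radiologist box gets confidence 1
    labels = g.class_id.tolist()
    b, s, l = weighted_boxes_fusion([boxes], [scores], [labels], iou_thr=0.4)
    fused.append((image_id, b * [w, h, w, h], l))
</code></pre>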
<p>Best models were the following:</p>
<ul>
<li>Yolo_v5: ~0.251 Public LB (Std + Mirror images 5KFold, 640px)</li>
<li>RetinaNet (ResNet101): ~0.246 Public LB (Std + Mirror images 5KFold, 1024px)</li>
<li>RetinaNet (ResNet152): ~0.222 Public LB (Std + Mirror images 5KFold, 800px)</li>
<li>CenterNet (HourGlass): ~0.196 Public LB(Std + Mirror images 5KFold, 512px)</li>
</ul>
<p>EffDet: same code as in the Wheat competition, but it works pretty poorly on this dataset. With Faster-RCNN I think I had some problem with the code, because it had a very bad score.</p>
<p>Using WBF ensemble of these models + 2 public models (Detectron2 and Yolo_v5):</p>
<ul>
<li>0.221: <a href="https://www.kaggle.com/corochann/vinbigdata-detectron2-prediction" target="_blank">https://www.kaggle.com/corochann/vinbigdata-detectron2-prediction</a></li>
<li>0.204: <a href="https://www.kaggle.com/awsaf49/vinbigdata-2-class-filter?scriptVersionId=51562634" target="_blank">https://www.kaggle.com/awsaf49/vinbigdata-2-class-filter?scriptVersionId=51562634</a></li>
</ul>
<p>I was able to achieve 0.300 on the Public LB (0.289 Private LB). Then I merged with my teammates. They had more mmdetection models as well as Yolo_v5, but together we were able to get only 0.301.</p>
<h2>Part 2: Investigation</h2>
<p>While we were stuck, we decided to get an individual score for each class and compare it with our OOF validation. Over 3 days of submissions we got the following table:</p>
<pre><code>Class 0: 0.014 (mAP LB: 0.210 Valid: 0.900)
Class 1: 0.009 (mAP LB: 0.135 Valid: 0.335)
Class 2: 0.017 (mAP LB: 0.255 Valid: 0.278)
Class 3: 0.046 (mAP LB: 0.690 Valid: 0.915)
Class 4: 0.019 (mAP LB: 0.285 Valid: 0.438)
Class 5: 0.013 (mAP LB: 0.195 Valid: 0.376)
Class 6: 0.010 (mAP LB: 0.150 Valid: 0.465)
Class 7: 0.003 (mAP LB: 0.045 Valid: 0.400)
Class 8: 0.011 (mAP LB: 0.165 Valid: 0.372)
Class 9: 0.004 (mAP LB: 0.060 Valid: 0.162)
Class 10: 0.027 (mAP LB: 0.405 Valid: 0.548)
Class 11: 0.012 (mAP LB: 0.180 Valid: 0.343)
Class 12: 0.037 (mAP LB: 0.555 Valid: 0.388)
Class 13: 0.009 (mAP LB: 0.135 Valid: 0.432)
Class 14: 0.063 (mAP LB: 0.945 Valid: 0.772)
Sum: 0.294
</code></pre>
<p>As can be seen, the LB mAP is very different from the local mAP. We started with an investigation of the Class 0 difference (it was the most suspicious): we have a near-perfect prediction on validation while the LB score is so small. The first thing we tried was to calculate validation mAP based on Radiologist ID.</p>
<pre><code>Class ID: 0 Validation file: ensemble_iou_0.4._mAP_0.454_extracted_classes_0.csv
Radiologist: R8 mAP: 0.991058
Radiologist: R9 mAP: 0.954593
Radiologist: R10 mAP: 0.971534
Radiologist: R11 mAP: 0.379283
Radiologist: R12 mAP: 0.019819
Radiologist: R13 mAP: 0.511284
Radiologist: R14 mAP: 0.527909
Radiologist: R15 mAP: 0.279870
Radiologist: R16 mAP: 0.506167
Radiologist: R17 mAP: 0.447092
</code></pre>
<p>Interestingly, we have a perfect match for R8, R9 and R10, but very poor results for the other radiologists. The bad thing is that there are too few entries by the other radiologists in train:</p>
<pre><code>R11 - Class: 0 Entries: 30 AP: 0.379283
R12 - Class: 0 Entries: 12 AP: 0.019819
R13 - Class: 0 Entries: 32 AP: 0.511284
R14 - Class: 0 Entries: 74 AP: 0.527909
R15 - Class: 0 Entries: 25 AP: 0.279870
R16 - Class: 0 Entries: 23 AP: 0.506167
R17 - Class: 0 Entries: 10 AP: 0.447092
</code></pre>
<p>One more thing: R8, R9 and R10 never mark up images together with any other radiologist. What if the test set was marked up by some other radiologists (maybe even by R13, R14, etc.), so that by making boxes too similar to these 3 we overfit to train? And what if, with better models, we moved even further from the test markup?</p>
<p>I checked by eye the class 0 images in train marked up by R8, R9 and R10 and by the rare radiologists to see the difference. I was lucky to notice that the other radiologists often use much larger boxes compared to R8, R9 and R10. So I made a small trick by hand:<br>
<code>x1 -= 100 y2 += 100 for all boxes with class 0</code><br>
And checked local validation, which improved a lot for other radiologists, while R8, R9 and R10 became worse:</p>
<pre><code>Radiologist: R10 Class: 0 Entries: 2349 AP: 0.559458
Radiologist: R11 Class: 0 Entries: 30 AP: 0.559355
Radiologist: R12 Class: 0 Entries: 12 AP: 0.351739
Radiologist: R13 Class: 0 Entries: 32 AP: 0.483730
Radiologist: R14 Class: 0 Entries: 74 AP: 0.623019
Radiologist: R15 Class: 0 Entries: 25 AP: 0.341545
Radiologist: R16 Class: 0 Entries: 23 AP: 0.506167
Radiologist: R17 Class: 0 Entries: 10 AP: 0.452887
</code></pre>
<p>I applied the same trick to our 0.301 submission and got 0.309 on the LB. This showed us the right direction. We finetuned our models using only the rare radiologists, and it improved the LB score for all of the models:</p>
<p>RetinaNet (ResNet101): 0.246 -> 0.264<br>
RetinaNet (ResNet152): 0.222 -> 0.237<br>
Yolo_v5: 0.251 -> 0.271<br>
An ensemble of all models, including the original ones, moved us to Public LB > 0.330</p>
<h2>Part 3 (Single classes):</h2>
<p>At this point it became hard to add new models to the ensemble without proper validation, so we decided to switch to single-class improvements. It was much easier to find proper stopping epochs for this case. The training procedure also became more complicated. We added NIH dataset boxes for classes where it was possible. We prepared the <a href="https://www.kaggle.com/zfturbo/nih-chest-xray-dataset-bbox-for-vinbigdata" target="_blank">NIH dataset in the format of the VinBigData Chest X-ray competition</a> and added it as a separate radiologist, R99. We also prepared a <a href="https://www.kaggle.com/zfturbo/siimacr-pneumothorax-for-vinbigdata-chest-xray" target="_blank">dataset for Pneumothorax, based on the SIIM competition</a>, but it didn't help.</p>
<p>We used only one RetinaNet-R101 model for finetuning. We started from the best previous weights obtained for the multiclass case, so finetuning was pretty fast. We changed sampling in the following way (a rough sketch is given after the list):</p>
<ul>
<li>We first select a random radiologist, then select a random class (target class or empty image), then select a random image which was marked by the selected radiologist with the selected class.</li>
<li>Validation was also slightly changed. We excluded all non-empty images which were marked by R8, R9 and R10 radiologists.</li>
</ul>
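<p>Roughly, that sampling scheme could look like the sketch below (illustrative only; it assumes a dict mapping (radiologist, class) pairs to lists of image ids):</p>
<pre><code>import random

def sample_image(images_by_rad_class, radiologists, classes_plus_empty):
    """Pick a radiologist, then a class (or 'empty'), then an image for that pair."""
    while True:
        rad = random.choice(radiologists)
        cls = random.choice(classes_plus_empty)
        candidates = images_by_rad_class.get((rad, cls), [])
        if candidates:  # skip combinations this radiologist never labelled
            return random.choice(candidates)
</code></pre>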
<p>In parallel we fine-tuned some models on very high resolution (2048px). Some classes from these models catch small boxes better than previous models. It improved our local validation.<br>
These approaches together allowed us to improve even more: >0.36 on public.</p>
<h2>Part 4. Final steps</h2>
<p>Actually we decided to create 2 submissions: best on validation and best on LB. We made the first submission pretty early, so at the late stage of the competition we mostly worked on LB overfitting. That was a little bit risky, but after the private LB was revealed we found out it was the right thing to do. )</p>
<h3>Models we choose.</h3>
<ul>
<li>Best at local validation (around 0.51 mAP locally), Public: 0.304, private: 0.304</li>
<li>Best at LB. Public: 0.365, private: 0.307</li>
<li>Our best model at private has 0.319 score, but we had no chance to choose it because it had too low score on public - only 0.292. And it wasn't best at local validation.</li>
</ul>
<h2>Code</h2>
<p>You can find code for out solution on github:<br>
<a href="https://github.com/ZFTurbo/2nd-place-solution-for-VinBigData-Chest-X-ray-Abnormalities-Detection" target="_blank">https://github.com/ZFTurbo/2nd-place-solution-for-VinBigData-Chest-X-ray-Abnormalities-Detection</a></p>
<blockquote>
<p>How do you use wbf on train.csv, do you consider each Radiologist as a "model" ?</p>
</blockquote>
<p>I just apply WBF to all boxes for a given image with the same confidence = 1. If boxes intersect with a large IoU (>0.4) they are fused into a single box, otherwise they stay "as is".</p>
<blockquote>
<p>What is "Std + Mirror images", is this an data augmentation technique ?</p>
</blockquote>
<p>It's standard test-time augmentation. I used the original image and the mirrored image, then ensembled the obtained boxes for these 2 images using WBF.</p>
<blockquote>
<p>How do you "finetuned our models using only rare radiologists" ? </p>
</blockquote>
<p>I removed images which were marked by R8, R9 and R10 radiologists from training and from validation. Finetuning started from best weights of these models trained on full data.</p>
<blockquote>
<p>Do you train "no fingding" images with your OD models ? Since i note that you have class 14 mAP on Local Valid?</p>
</blockquote>
<p>From the beginning, for class 14 I used predictions from a binary classifier (EffNetB5). But I don't remember exactly what we used in the final submission. I think we changed it to some other method involving the object detection models, because it was better on the LB. Need to ask Sergey.</p>
VinBigData Chest X-ray Abnormalities Detection | 3rd Place Solution Summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We would like to thank the competition host(s) and Kaggle for organizing the competition and congratulate all the winners, and anyone who benefited in some way from the competition. Special thanks to my teammates <a href="https://www.kaggle.com/erniechiew" target="_blank">@erniechiew</a>, <a href="https://www.kaggle.com/css919" target="_blank">@css919</a>, <a href="https://www.kaggle.com/zehuigong" target="_blank">@zehuigong</a> and <a href="https://www.kaggle.com/stephkua" target="_blank">@stephkua</a> </p>
<h1>Solution Components:</h1>
<ol>
<li>Detection models</li>
<li>Specialized detector for aortic enlargement</li>
<li>Multi-label classifier-based post-processing</li>
</ol>
<h2>1.Detection models</h2>
<h3>Pre-processing of the bounding box annotations</h3>
<p>I believe that most participants used WBF to merge the original boxes, as in this notebook (<a href="https://www.kaggle.com/sreevishnudamodaran/vinbigdata-fusing-bboxes-coco-dataset" target="_blank">https://www.kaggle.com/sreevishnudamodaran/vinbigdata-fusing-bboxes-coco-dataset</a>). I apply the same idea of averaging the coordinates of boxes whose IoUs are higher than a threshold, e.g. 0.2, and then use these pre-processed boxes as my annotations in this competition.</p>
<h3>Our YOLO-V4 Detector</h3>
<ul>
<li>Optimizer: SGD, BS = 32, LR = 0.001, CosineAnnealing</li>
<li>Image size 1280 for P6, 1536 for P7</li>
<li>Augmentations:<ul>
<li>RandomFlipping (p=0.5)</li>
<li>RandomScaling (0.2-1.8)</li>
<li>Mosaic</li>
<li>Mixup (p=0.2)</li>
<li>Random translation (0.5)</li></ul></li>
</ul>
<h3>1.1 Training on the boxes of a single radiologist (LB 0.274)</h3>
<p>We noticed that the annotation processing method differs between the training and test sets (see this discussion: <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/211035" target="_blank">https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/211035</a>). So we analyzed how many images each radiologist annotated, and found that rad 8, 9, and 10 annotate the majority of bounding boxes. So we trained three detectors for rad 8, 9, and 10 separately. The scores of the separate detectors are shown as follows: (CV/LB/Private)<br>
Rad 8: 0.498/0.252/0.233 <br>
Rad9: 0.376/0.213/0.234<br>
Rad10: 0.384/0.228/0.226</p>
<p>And then I thought: why not try to merge them all? After I merged them, I got 0.274 on the LB, and a private score of 0.248.</p>
<h3>1.2 To the LB 0.280.</h3>
<p>Then I tried to train the detectors on all radiologists. For this part, the YOLO-V4 P6 and P7 versions were employed. (LB/Private)<br>
P6: 0.278/0.261<br>
P7: 0.264/0.263<br>
After I got these models, I merged them all with the step-one submission, and we achieved 0.280 on the public leaderboard and 0.272 on the private.</p>
<p>Things that did not work:</p>
<ol>
<li>Albu augmentations, e.g. CLAHE, RandomBrightness</li>
<li>Class-aware sampling</li>
<li>Training with a larger image size of 1536 for P6</li>
<li>Focal loss + BCE</li>
<li>Pretraining on the RSNA dataset and finetuning on the training set of this competition</li>
<li>Pseudo labeling on the test set</li>
<li>Training a specialized detector on the ILD class only</li>
</ol>
<p>Things that we did not pursue in detail:</p>
<ol>
<li>Pseudo labeling the RSNA or other datasets (denoted PseData) and training on PseData + TrainSet. This improved the CV from 0.498 to 0.55+ but did not improve the LB; however, it did improve the private score, 0.250 vs. 0.255.</li>
<li>Cropping the boxes from abnormal images and pasting them onto normal images, with different sampling ratios for different classes.</li>
</ol>
<h2>2. Specialized Detector for Aortic Enlargement</h2>
<p>As highlighted in the discussion forums during the competition, there is a big discrepancy between CV and LB scores. To understand this discrepancy better, we decided to compare LB and local CV scores for each individual class. The AP scores of two classes -- aortic enlargement (class 0) and cardiomegaly (class 3) stood out to us. Below are the individual class AP scores from one of our earlier detectors:</p>
<p><strong>- Aortic enlargement: 0.712 CV / 0.24 LB</strong><br>
<strong>- Cardiomegaly: 0.772 CV / 0.81 LB</strong></p>
<p>Intuitively, we felt that LB performance on these classes should follow the same trends since they are i) “easy” (based on their high local CV scores), ii) relatively popular assuming the test set follows a similar disease distribution as the training set. The observation that aortic enlargement performs drastically worse on LB warranted a further investigation.</p>
<p>We found that CV scores for aortic enlargement on images annotated by non-R8/R9/R10 radiologists were unusually low (similar to our LB score for this class). Upon further analysis, we found that:</p>
<ol>
<li>Aortic enlargement bboxes annotated by non-R8/R9/R10 rad_ids tend to be larger than those annotated by R8, R9, R10.</li>
<li>Our model has a non-negligible tendency to mixup Aortic enlargement and calcification for images annotated by non-R8/R9/R10 rad_ids.</li>
</ol>
<p>And so we decided to fine-tune one of our detectors on non-R8/R9/R10 rad_ids only. We also trained this detector only on aortic enlargement and calcification classes in an attempt to make the model focus on better distinguishing these two classes.</p>
<p>We then replaced our final predictions for aortic enlargement from our main detector with predictions from this specialized detector. <strong>This yielded an increase in overall mAP of approximately 0.020 on public LB, and 0.010 on private LB from our original score.</strong></p>
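<p>At submission level, swapping in the specialized detector's aortic-enlargement boxes can be sketched as below; the parsing helper only assumes the standard "class conf x1 y1 x2 y2" PredictionString format, and the file names are placeholders:</p>
<pre><code>import pandas as pd

def filter_class(pred_string, class_id=0, keep=True):
    """Keep (keep=True) or drop (keep=False) the groups of one class in a PredictionString."""
    tokens = str(pred_string).split()
    if len(tokens) % 6 != 0:
        return ""                      # empty / malformed prediction string
    out = []
    for i in range(0, len(tokens), 6):
        group = tokens[i:i + 6]
        if (int(float(group[0])) == class_id) == keep:
            out.extend(group)
    return " ".join(out)

main = pd.read_csv("main_submission.csv").set_index("image_id")
aortic = pd.read_csv("aortic_submission.csv").set_index("image_id")

main["PredictionString"] = [
    (filter_class(main.loc[i, "PredictionString"], 0, keep=False) + " " +
     filter_class(aortic.loc[i, "PredictionString"], 0, keep=True)).strip()
    for i in main.index
]
main.reset_index().to_csv("submission.csv", index=False)
</code></pre>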
<p>After reading the great solutions posted by other teams, it is comforting to know that several teams also leveraged this data peculiarity in similar ways!</p>
<h2>3. Multi-label Classifier-based Post-processing</h2>
<p>A detailed breakdown of this step has been posted in a separate discussion (<a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229636" target="_blank">https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229636</a>).<br>
The code is available at <a href="https://github.com/Scu-sen/VinBigData-Chest-X-ray-Abnormalities-Detection" target="_blank">https://github.com/Scu-sen/VinBigData-Chest-X-ray-Abnormalities-Detection</a>.</p>
<h3>Final Results</h3>
<ul>
<li>Ensemble of 11 detection models with 2-class classifier: Public 0.290 / Private 0.284</li>
<li>+ Specialized aortic detector: Public 0.310 / Private 0.294</li>
<li>+ Multi-label classifier-based post-processing: Public 0.348 / Private 0.305</li>
</ul>
<p>Yes, we only use single model YOLOv4</p> |
VinBigData Chest X-ray Abnormalities Detection | 5th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>Now that we can make unlimited submissions, i'm tempted to solve this mystery</p>
</blockquote>
<p>Now that the competition is finished, I will delete everything including Python and won't look back:)</p>
<p>I would never expect a medical image competition to produce those great art pieces. Maybe Kaggle needs Artist ranking too:)</p>
<p>We only submitted the common classes separately to see if we could postprocess them. Postprocessing rare classes based on the small LB feedback was dangerous. Therefore I don't know its LB score, but maybe my teammates have an idea.</p>
<p>We didn't postprocess box confidences. We also didn't use any classifier model. Class 14 predictions are multiplication of (1 - box_conf[i]).</p>
<p>In this competition, I am happy that I have teamed up with one of the most dangerous gangs of Kaggle Street, namely <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>, <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>, <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>. </p>
<p><strong>Validation and Data Bias</strong><br>
It was very difficult to make reliable Cross-Validation since the test set was generated by different labelling process and Public LB was not trustworthy due to its small size. We first noticed that our LB score is much lower than our CV score. It makes sense to some degree but the gap was really big, therefore we have investigated the data. We have trained an adversarial validation model on train vs test images. It has 0.61 AUC. The model was almost certainly able to distinguish some train set images from the test set but the rest looks like random split. These distinguishable images were all labelled by R8-R9-R10 with findings. So the different data sources were not evenly distributed among radiologists. This finding didn't really help us since we don't have the luxury to remove data with labellings. Therefore we have tried to remove the bias. They all had black corners covering some information. It made sense to add the same black boxes to the corners of the each image but it didn't remove the bias. Then we have tried gradCAM and according to it the bias was everywhere in the image, making it hard to remove.</p>
<p><img src="https://i.imgur.com/mY34STX.png" alt="gradCAM Result"></p>
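<p>For reference, the adversarial-validation check itself is simple to reproduce. The real model was a classifier trained on the images; the toy sketch below just uses heavily downsampled pixels as features to illustrate the idea (an AUC near 0.5 would mean train and test are indistinguishable):</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# train_imgs, test_imgs: (N, 64, 64) arrays of downsampled x-rays (assumed precomputed)
X = np.concatenate([train_imgs, test_imgs]).reshape(len(train_imgs) + len(test_imgs), -1)
y = np.concatenate([np.zeros(len(train_imgs)), np.ones(len(test_imgs))])  # 0 = train, 1 = test

probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("adversarial AUC:", roc_auc_score(y, probs))
</code></pre>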
<p>We have trained another model to detect which radiologists labelled the images. This model was very accurate on R1-7 vs the rest. Since R1-7 didn't have any labels, we have also removed them from our training set hoping that it can solve some of the bias issue. Knowing that bias is everywhere in the image, and not on a specific location, we thought that style transfer may solve the issue and ended up in this art.</p>
<p><img src="https://i.imgur.com/Gqk1RBq.jpg" alt="Chest Art"></p>
<p>This didn't work (obviously :D). </p>
<p><strong>Rare Annotators</strong><br>
Losing hope of removing more bias, we focused on improving our CV as it is, with more focus on R11-17. We trained several effdets with different backbones and standard augmentations. Data is sampled once per annotating radiologist if the image has a finding; otherwise the image is sampled once during each epoch. We also had a YOLO model performing worse than our effdet. Ensembling them all with WBF@0.4 gave us our best score. Some experiments with pretraining on R8-10, finetuning on R11-17, and applying WBF before training while weighting R11-17 more did not work well on the Public LB, therefore we didn't use them. But they seem to work much better on the Private LB, e.g. 0.224 -> 0.275.</p>
<p><strong>Aortic Enlargement</strong><br>
We noticed that something was wrong with this class because our LB score for it was way less than our CV score. Considering that it may be the way boxes are labelled in the test set, we have made a post processing for them which gave us an extra 0.01 on Public, 0.005 on Private.</p>
<pre><code>sub.loc[sub.class_id==0, "y_max"] += 110
sub.loc[sub.class_id==0, "x_min"] += 40
</code></pre> |
VinBigData Chest X-ray Abnormalities Detection | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No I didn't have time to try other kinds of preprocessing</p>
<p>It simply means I did not do any preprocessing on the labels. Note that most of the winners combined labels from three annotators (using wbf for example) before training. I did not do that.</p>
<p>Thanks, Chris. Yes, I realized after reading other winning solutions that ensembling different models is more effective than ensembling same models of different CV fold. BTW, my best model is efficientdet not yolo.</p>
<p><strong>Preprocessing</strong><br>
1) I annotated part of the NIH data and trained a lung localizer for more efficient usage of GPU memory. The drawback is that diseases outside the lung area are guaranteed to be missed.<br>
2) I treated each image-annotation pair as independent data. So both the training and validation data were tripled.</p>
<p><strong>Modeling</strong><br>
Similar to the majority, I used object detection for disease localization, and binary classifiers for predicting "no finding" as well as removing false positive bboxes on healthy images.</p>
<p>For detection, I trained efficientdet-d5 on positive images and validated on the full validation set including healthy images. The scores were 5 fold average CV: 0.36, public LB: 0.191, private LB: 0.219. I used WBF (<a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">https://github.com/ZFTurbo/Weighted-Boxes-Fusion</a>) for ensembling. The image input size was 1152x1152.</p>
<p>For binary classifier training, I added an additional multi-label head which gave slightly better performance. The CV was about 0.994 AUC with a blend of efficientnet-b7 and resnet200d. The image input size was 640x640.</p>
<p>To fuse the detection and classification models, I first multiplied the confidence scores of every bbox by Prob(unhealthy)^0.2, then I added [14, Prob(healthy), 0, 0, 1, 1] to all the predictions. With the classifiers, CV improved to 0.47, public improved to 0.266, private improved to 0.297.</p>
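<p>That fusion step is simple enough to show as a short sketch (illustrative names, assuming per-image detector outputs and the classifier's probability that the image is healthy):</p>
<pre><code>def fuse_predictions(det_boxes, det_scores, det_labels, p_healthy):
    """Down-weight detections by the classifier and append the 'no finding' row."""
    p_unhealthy = 1.0 - p_healthy
    rows = [f"{int(c)} {s * p_unhealthy ** 0.2:.4f} {x1} {y1} {x2} {y2}"
            for c, s, (x1, y1, x2, y2) in zip(det_labels, det_scores, det_boxes)]
    rows.append(f"14 {p_healthy:.4f} 0 0 1 1")  # class 14 = "no finding", dummy 1x1 box
    return " ".join(rows)
</code></pre>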
<p><strong>What didn't work</strong><br>
1) Yolov5 with default parameters performed much worse than efficientdet. Due to the time limit, I gave up yolov5 easily. I should have spent more time on it. <br>
2) I tried different kinds of pseudo-labeling on most of the external data but nothing worked.</p> |
VinBigData Chest X-ray Abnormalities Detection | 76th place solution - EfficientDet | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Updated public and private LB. Local CV is mAP@IoU=0.4 on positive images only.</p>
<p>Thank you!</p>
<p>Congratulations to all the winners and thanks to the organizers of this competition. And also thanks to all of the competitors. I have learned a lot from this competition.</p>
<p>I tried to get better results, but couldn't find a better way to solve the problem.</p>
<h2>Summary</h2>
<p>The ensembled results of the two-stage approach and the one-stage approach. The two-stage approach consists of 2-class classifier and 14-class detector. And the one-stage approach is 15-class detector.</p>
<h2>Tools</h2>
<ul>
<li>Colab Pro, Tesla V100 16GB single GPU</li>
<li>GCS</li>
<li>Pytorch Lightning</li>
<li>Neptune</li>
<li>Kaggle API</li>
</ul>
<h2>Validation</h2>
<p>StratifiedKFold was used and the data set was composed of 5 folds.</p>
<p>The classifier of the two-stage approach was trained using all normal and abnormal images, and only abnormal images were used for the detector.</p>
<p>The one-stage detector was trained using all images of normal and abnormal.</p>
<h2>Fusing BBoxes</h2>
<p>The overlapping bboxes in both the train set and the validation set were fused using NMS. I tested it using torchvision's batched_nms, and ZFTurbo's nms and wbf.</p>
<p>14-class Efficientdet d4 896px 30 epochs without classifier, local cv on positive image only</p>
<table>
<thead>
<tr>
<th></th>
<th>cv(mAP@iou=0.4)</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchvision batched_nms</td>
<td>0.4317</td>
<td>0.155</td>
<td>0.168</td>
</tr>
<tr>
<td>ZFTurbo nms</td>
<td>0.4419</td>
<td>0.164</td>
<td>0.181</td>
</tr>
<tr>
<td>ZFTurbo wbf</td>
<td>0.4158</td>
<td>0.157</td>
<td>0.185</td>
</tr>
</tbody>
</table>
<p>I thought direct comparison of local cv was not possible, and LB was also somewhat difficult to use as a criterion for judgement.</p>
<p>After considering the labeling method of the test set, I thought that the nms method was more similar to the labeling method, so I decided to use nms.</p>
<p><a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/214906#1199596" target="_blank">[Discussion] nms > weighted box fusion?</a></p>
<h3>torchvision's nms vs ZFTurbo's nms</h3>
<p>Torchvision's nms has some problems with the order of the results when the bbox scores are the same.</p>
<p><a href="https://stackoverflow.com/questions/56176439/pytorch-argsort-ordered-with-duplicate-elements-in-the-tensor" target="_blank">[Stackoverflow] Pytorch argsort ordered, with duplicate elements in the tensor</a></p>
<p><a href="https://pytorch.org/vision/stable/ops.html#torchvision.ops.nms" target="_blank">torchvision.ops.nms</a></p>
<blockquote>
<p>If multiple boxes have the exact same score and satisfy the IoU criterion with respect to a reference box, the selected box is not guaranteed to be the same between CPU and GPU. This is similar to the behavior of argsort in PyTorch when repeated values are present.</p>
</blockquote>
<p>It doesn't matter if you convert the labels only once, then save and load them as csv, pickle, etc., but if you convert the labels again in a new environment, the consistency of the bboxes may not be maintained.</p>
<p>Since this could interfere with experiment flexibility and consistency during training, I decided to use ZFTurbo's nms, which uses numpy.argsort.</p>
<p><a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">[GitHub] ZFTurbo/Weighted boxes fusion</a></p>
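<p>For reference, fusing the overlapping annotation boxes of one image with ZFTurbo's nms looks roughly like this (boxes must be normalized to [0, 1]; the toy boxes, labels and IoU threshold below are illustrative):</p>
<pre><code>from ensemble_boxes import nms

boxes = [[0.10, 0.10, 0.30, 0.30], [0.12, 0.11, 0.31, 0.29]]  # toy overlapping annotations
labels = [0, 0]
scores = [1.0, 1.0]   # ground-truth boxes all get confidence 1
fused_boxes, fused_scores, fused_labels = nms([boxes], [scores], [labels], iou_thr=0.4)
</code></pre>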
<h2>Model training</h2>
<p>All models were trained on Colab Pro's V100 16GB single GPU.</p>
<ul>
<li>AdamW</li>
<li>CosineAnnealingLR</li>
<li>epochs = 50</li>
<li>checkpoint selection : max mAP among top-3 min val_loss</li>
</ul>
<h3>Two-stage approach</h3>
<ul>
<li>2-class classifier : EfficientNet, Resnet200d, total 15 models</li>
</ul>
<table>
<thead>
<tr>
<th>model</th>
<th>image size(px)</th>
<th>folds</th>
<th>batch size</th>
<th>init lr</th>
<th>weight decay</th>
<th>val_acc</th>
<th>auc</th>
</tr>
</thead>
<tbody>
<tr>
<td>b5</td>
<td>600</td>
<td>5 of 5</td>
<td>12</td>
<td>7.5e-5</td>
<td>1.0e-4</td>
<td>0.9601</td>
<td>0.9930</td>
</tr>
<tr>
<td>b6</td>
<td>528</td>
<td>5 of 5</td>
<td>12</td>
<td>7.5e-5</td>
<td>1.0e-3</td>
<td>0.9553</td>
<td>0.9927</td>
</tr>
<tr>
<td>resnet200d</td>
<td>600</td>
<td>3 of 5</td>
<td>12</td>
<td>7.5e-5</td>
<td>1.0e-4</td>
<td>0.9541</td>
<td>0.9934</td>
</tr>
<tr>
<td>b5</td>
<td>456</td>
<td>single</td>
<td>16</td>
<td>1.0e-4</td>
<td>1.0e-4</td>
<td>0.9557</td>
<td>0.9927</td>
</tr>
<tr>
<td>b5</td>
<td>1024</td>
<td>single</td>
<td>4</td>
<td>2.5e-5</td>
<td>1.0e-4</td>
<td>0.9577</td>
<td>0.9936</td>
</tr>
</tbody>
</table>
<ul>
<li>14-class detector : EfficientDet with 2-class classifier, total 18 models, local cv on positive image only</li>
</ul>
<table>
<thead>
<tr>
<th>model</th>
<th>image size(px)</th>
<th>folds</th>
<th>batch size</th>
<th>init lr</th>
<th>weight decay</th>
<th>cv(mAP@iou=0.4)</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>d3</td>
<td>1024</td>
<td>single</td>
<td>3</td>
<td>3e-4</td>
<td>1e-3</td>
<td>0.4545</td>
<td>0.209</td>
<td>0.250</td>
</tr>
<tr>
<td>d4</td>
<td>896</td>
<td>5 of 5</td>
<td>4</td>
<td>4e-4</td>
<td>1e-4</td>
<td>0.4541</td>
<td>0.218</td>
<td>0.250</td>
</tr>
<tr>
<td>d4</td>
<td>896</td>
<td>single</td>
<td>4</td>
<td>4e-4</td>
<td>1e-3</td>
<td>0.4606</td>
<td>0.257</td>
<td>0.247</td>
</tr>
<tr>
<td>d4</td>
<td>1024</td>
<td>single</td>
<td>3</td>
<td>3e-4</td>
<td>1e-3</td>
<td>0.4545</td>
<td>0.228</td>
<td>0.249</td>
</tr>
<tr>
<td>d5</td>
<td>768</td>
<td>5 of 5</td>
<td>4</td>
<td>4e-4</td>
<td>1e-3</td>
<td>0.4472</td>
<td>0.225</td>
<td>0.253</td>
</tr>
<tr>
<td>d5</td>
<td>896</td>
<td>4 of 5</td>
<td>3</td>
<td>3e-4</td>
<td>1e-3</td>
<td>0.4522</td>
<td>0.214</td>
<td>0.250</td>
</tr>
<tr>
<td>d5</td>
<td>1024</td>
<td>single</td>
<td>2</td>
<td>2e-4</td>
<td>1e-3</td>
<td>0.4462</td>
<td>0.214</td>
<td>0.232</td>
</tr>
</tbody>
</table>
<h3>One-stage approach</h3>
<ul>
<li>15-class detector, total 2 models, local cv on positive image only</li>
</ul>
<table>
<thead>
<tr>
<th>model</th>
<th>image size(px)</th>
<th>folds</th>
<th>batch size</th>
<th>init lr</th>
<th>weight decay</th>
<th>cv(mAP@iou=0.4)</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>d4</td>
<td>896</td>
<td>2 of 5</td>
<td>4</td>
<td>4e-4</td>
<td>1e-3</td>
<td>0.4546</td>
<td>0.230</td>
<td>0.246</td>
</tr>
</tbody>
</table>
<p>At batch size < 4, the mAP result was poor. The larger the image size, the better the mAP, but no further training was possible.</p>
<p>I tried Freeze BatchNorm, accumulate grad batches, and GroupNorm but I didn't get any better results.</p>
<p>I also tested Downconv: the AP of small objects like Calcification increased, but the AP of ILD and other large objects decreased, so the overall mAP was not improved.</p>
<p><a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/226557" target="_blank">[RANZCR CLiP] 11th Place Solution - Utilizing High resolution, Annotations, and Unlabeled data</a></p>
<h2>Augmentation</h2>
<p>Resize, scale, and crop were configured by referring to the method in the paper of EfficientDet.</p>
<p>CLAHE, equalize, invertimg, huesaturationvalue, randomgamma, shiftscalerotate did not work.</p>
<pre><code>A.Compose(
[
A.Resize(height=self.resize_height, width=self.resize_width),
A.RandomScale(scale_limit=(-, ), p=),
A.PadIfNeeded(
min_height=self.resize_height,
min_width=self.resize_width,
border_mode=cv2.BORDER_CONSTANT,
value=,
p=,
),
A.RandomCrop(height=self.resize_height, width=self.resize_width, p=),
A.RandomBrightnessContrast(p=),
A.ChannelDropout(p=),
A.OneOf(
[
A.MotionBlur(p=),
A.MedianBlur(p=),
A.GaussianBlur(p=),
A.GaussNoise(p=),
],
p=,
),
A.HorizontalFlip(p=),
A.Normalize(),
ToTensorV2(),
],
</code></pre>
<h2>Post processing</h2>
<p>In the case of the two-stage approach, it was difficult to find an appropriate threshold for the classifier. The threshold was chosen so that roughly 60~70% of cases are predicted as normal, and if an image was predicted normal, all detections from the detector were excluded.</p>
<p>In the case of the one-stage approach, similarly, detections were excluded if it was normal based on the threshold.</p>
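<p>In code, both variants of this post-processing amount to something like the sketch below (assuming a per-image classifier probability and the chosen threshold; names are illustrative):</p>
<pre><code>def postprocess(pred_string, p_normal, thr=0.7):
    """If the classifier calls the image normal, drop all detections."""
    if p_normal >= thr:
        return "14 1 0 0 1 1"  # "no finding" placeholder row
    return pred_string
</code></pre>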
<h2>Blending</h2>
<p>The results of the two-stage approach and the one-stage approach were blended using nms, and the final result is as follows.</p>
<h3>two-stage approach, 15 classifier + 18 detector</h3>
<table>
<thead>
<tr>
<th>normal thr</th>
<th>public LB</th>
<th>private LB</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>0.70</td>
<td>0.219</td>
<td><strong>0.246</strong></td>
<td><strong>final submission</strong></td>
</tr>
<tr>
<td>0.65</td>
<td>0.217</td>
<td>0.255</td>
<td></td>
</tr>
<tr>
<td>0.60</td>
<td>0.215</td>
<td>0.256</td>
<td></td>
</tr>
</tbody>
</table>
<h3>one-stage approach, 2 detector</h3>
<table>
<thead>
<tr>
<th>normal thr</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.10</td>
<td>0.198</td>
<td>0.245</td>
</tr>
<tr>
<td>0.00</td>
<td>0.198</td>
<td>0.245</td>
</tr>
</tbody>
</table>
<h3>two-stage + one-stage</h3>
<table>
<thead>
<tr>
<th>thr</th>
<th>public LB</th>
<th>private LB</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>0.70-0.30</td>
<td>0.224</td>
<td><strong>0.253</strong></td>
<td><strong>final submission</strong></td>
</tr>
<tr>
<td>0.65-0.30</td>
<td>0.222</td>
<td>0.259</td>
<td></td>
</tr>
<tr>
<td>0.60-0.30</td>
<td>0.220</td>
<td>0.258</td>
<td></td>
</tr>
</tbody>
</table>
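<p>For reference, the NMS blend of the two result sets described above could be sketched as follows with the ensemble-boxes library; the variable names and the <code>iou_thr</code> value are assumptions, not taken from the post:</p>
<pre><code>from ensemble_boxes import nms  # https://github.com/ZFTurbo/Weighted-Boxes-Fusion

# one entry per source; boxes are assumed to be normalized to [0, 1] per image
boxes_list = [two_stage_boxes, one_stage_boxes]
scores_list = [two_stage_scores, one_stage_scores]
labels_list = [two_stage_labels, one_stage_labels]

blend_boxes, blend_scores, blend_labels = nms(
    boxes_list, scores_list, labels_list, iou_thr=0.5  # iou_thr not stated in the post
)
</code></pre>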
<h2>What did not work</h2>
<p>Freeze BatchNorm</p>
<p>accumulate grad batches</p>
<p>GroupNorm, group per channel = 8</p>
<p>GroupNorm, pretrained backbone with GroupNorm</p>
<p>Downconv</p>
<p>NIH dataset concat classifier</p>
<p>change aspect ratio</p>
<h2>Source Code</h2>
<p>Source code is available at <a href="https://github.com/sunghyunjun/kaggle-vinbigdata-chest-xray-abnormalities-detection" target="_blank">https://github.com/sunghyunjun/kaggle-vinbigdata-chest-xray-abnormalities-detection</a></p>
<p>Submission notebook is <a href="https://www.kaggle.com/sunghyunjun/91th-place-vbd-inference-clf-det-and-det-all" target="_blank">91th place VBD inference CLF DET and DET ALL</a></p> |
VinBigData Chest X-ray Abnormalities Detection | 7th Place - Metric is like AUC | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Good point. I agree it shouldn't be needed. Bayesian statistics says only <code>prediction from "finding / no finding"</code> is needed when we train object detection on just positive images.</p>
<p>I think it helped for three reasons. </p>
<ul>
<li>First, having the extra 14 outputs on our classifier probably made the output for "finding/ no finding" more accurate due to multi-task learning. </li>
<li>Second using 8% "class" and 8% "finding", is like ensembling two different diverse classifiers to achieve the goal of predicting "finding" better. </li>
<li>Third, the <code>mAP</code> metric doesn't want realistic probabilities. After predicting bbox, you can increase <code>mAP</code> by adding more bbox and using a "class" classifier helps find which additional bbox to add.</li>
</ul>
<p>But ultimately the decision was based on local validation. Using our local validation we tried both </p>
<ul>
<li>16% "finding" classifier</li>
<li>8% "class" ensembled with 8% "finding"</li>
</ul>
<p>The latter increased CV and private LB by +0.002</p>
<p>That should work well. That is the same as <code>confidence score= prob(class given finding=True) ∗ prob(finding=True)**0.5</code> because both formulas keep scores in the same order. The formula here is just the square root of yours above.</p>
<p>For us, using <code>confidence score= prob(class given finding=True) ∗ prob(finding=True)**0.2</code> worked nearly as well as our formula posted above which uses our 15 class classifier. </p>
<p>Did you try the formula <br>
<code>confidence score=(prob(class given finding=True)^5)∗prob(finding=True)</code> <br>
or the equivalent<br>
<code>confidence score= prob(class given finding=True) ∗ prob(finding=True)**0.2</code> ?</p>
<p>Congratulations KS and team, well done!</p>
<p>Just to confirm, is your formula the "output score from object detection squared" multiplied by "output score from classifier"?</p>
<p>haha, true <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> <br>
Kaggle Post Process Grandmaster 😄</p>
<p>Thanks. The letters are not classes. The entire picture is a single class. Each letter is one bbox. The numbers are the confidence scores assigned to a bbox. And the bboxes are all bboxes in your entire <code>submission.csv</code> file for the 1 pictured class. (This is a made up example with few bbox).</p>
<p>So given 1 class with its many bboxes in our submitted CSV file, if we can modify all the confidence scores in a smart way, then we can increase that 1 class's <code>mAP</code>.</p>
<p>Yes. It's a geometric mean. The formula <code>p = a * b**0.1 * c **0.1</code> produces the same ranking ordering (i.e. same <code>mAP</code>) as <code>a**0.84 * b**0.08 * c**0.08 = p**(5/6)</code>. This second expression is the geometric mean of numbers <code>a, b, c</code> with weights <code>0.84, 0.08, 0.08</code>. </p>
<p>So we use 84% of the prediction from object detection model, 8% prediction from "finding / no finding" prediction, and 8% from "specific class prediction". These weights were found using local validation.</p>
<p>You will also notice that Guanshuo Xu in his 6th place <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229770" target="_blank">write-up</a> used a geometric mean of <code>a**0.84 * b**0.16</code>. He used 84% object detection and 16% "finding / no finding" prediction. Most teams found this geometric mean to work best.</p>
<p>We tried tuning the weights, but using weights=1 gave best CV score. First we applied <code>NMS 0.4</code> to each fold separately. Next per model, we used <code>WBF iou_thr=0.40, conf_type='max'</code> to combine 10 model folds into 1 set of predictions. Next we used <code>WBF iou_thr=0.40, conf_type='max'</code> again to combine the 3 models into <code>submission.csv</code></p>
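<p>A minimal sketch of that fold/model ensembling with the ensemble-boxes library might look like the following; <code>fold_boxes</code>, <code>fold_scores</code> and <code>fold_labels</code> are hypothetical per-image prediction lists, with boxes normalized to [0, 1]:</p>
<pre><code>from ensemble_boxes import nms, weighted_boxes_fusion

# 1) NMS 0.4 within each fold
per_fold = [nms([b], [s], [l], iou_thr=0.4)
            for b, s, l in zip(fold_boxes, fold_scores, fold_labels)]

# 2) WBF across the folds of one model (equal weights); the same call is then
#    repeated across the 3 models to build submission.csv
boxes, scores, labels = weighted_boxes_fusion(
    [p[0] for p in per_fold], [p[1] for p in per_fold], [p[2] for p in per_fold],
    weights=None, iou_thr=0.40, conf_type='max',
)
</code></pre>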
<p>Thanks Tuan</p>
<p>To clarify further, our formula is <code>score = base * prob(finding)**0.1 * prob(class)**0.1</code> which is more or less <code>score = base * prob(finding)**0.2</code> which is equivalent (same ranking ordering, same <code>mAP</code>) to <code>score = base**0.84 * prob(finding)**0.16</code> if you raise the right side to the exponent <code>5/6</code>. </p>
<p>Great question <a href="https://www.kaggle.com/ivanpan" target="_blank">@ivanpan</a> . Note that this post process is not the same as training object detection with all images.</p>
<p>Bayesian statistics say we should use <code>score = base * prob(finding)</code> where base is the probability from object detection training on only "finding" images. I believe this formula would produce the same bbox confidence scores as training an object detection on all images.</p>
<p>The above formula is equivalent (i.e. same rank order, same <code>mAP</code>) to using <code>score = base**0.5 * prob(finding)**0.5</code>. This is the geometric mean of the two. Now, <code>base</code> is overzealous because it was trained on only positive images, and multiplying by <code>prob(finding)</code> makes the result realistic.</p>
<p>Our final formula is equivalent to <code>score = base**0.84 * prob(finding)**0.16</code>. This is the geometric average where we give more weight to the overzealous bbox confidence scores, which exploits the loopholes of the <code>mAP</code> metric.</p>
<p>Yes, we found the formula using grid search on local validation. </p>
<p>Your question has me wondering how similar or different our post process is to training object detection on all images. As I said above, I think they are different, and I think if I had an object detection model trained on all images, I could post-process the confidence scores with a new formula, not published anywhere, that could increase its mAP CV and LB.</p>
<p>Because the <code>mAP</code> metric rewards both precision and recall. And the best way to increase recall is to make more bbox on the images that we believe have findings (i.e. <code>prob(finding)>0.5</code>) and <code>mAP</code> does not penalize you for trying to find more bbox and being wrong.</p>
<p>No, then <code>mAP</code> will not increase. We increase <code>mAP</code> by rearranging the ranking order into a more accurate order just like the metric AUC.</p>
<p>Thanks. We did not submit it during the competition, but I just submitted it now. Our ensembled classifier output for finding / no finding achieves CV AUC 0.9934, and when submitted by itself achieves LB 0.064.</p>
<p>Thanks. Statistics say we only need to multiply the object detection confidence scores by the classifier probabililty "no finding / finding" scores to correct them. However through experimentation, we found we could increase CV mAP score more by also multiplying by the probability of class too.</p>
<p>So for example if object detection says confidence for bbox of "class 0" is 0.78. Then we multiply 0.78 by probability that image has "class 0" and we multiply by probability that image has a "finding". We add exponents of 0.1 for each with formula</p>
<p>$$ score = base * prob(class)^{0.1} * prob(finding)^{0.1}$$</p>
<p>Our "15 class classifier" has 15 binary outputs and predicts the probability that a specific class is present. It was trained using the number of doctors who predict a class divided by 3 as ground truth.</p>
<p>When i say CV on "class 14", yes i mean the OOF AUC of binary classifier for "no finding / finding". The ensemble of 3 classifiers achieves CV OOF of AUC 0.9934</p>
<p>I like your idea your sum based NMS idea.</p>
<p>We had two validation schemes. The first was 10 stratified folds. We used this to make predictions on the test dataset. And we computed a CV mAP score on this.</p>
<p>Next, to compare different fold ensembling techniques, we had a second validation scheme. In our second validation scheme, we put folds 9 and 10 aside as a holdout validation set. Then we trained an 8-fold model on folds 1-8 without adding folds 9 and 10 to the train data. Lastly we used the 8 folds to predict folds 9 and 10. We compared different ways to ensemble the 8 folds like NMS versus WBF etc. We computed a validation mAP score on this.</p>
<p>Thanks. Congratulations on earning a bronze medal, top 10%</p>
<p>I would like to thank my teammates <a href="https://www.kaggle.com/matthieuplante" target="_blank">@matthieuplante</a> <a href="https://www.kaggle.com/seb6084" target="_blank">@seb6084</a> for inviting me onto their team. I have been wanting to learn more about object detection and my teammates were the perfect teachers. </p>
<p>When i joined them 1 week ago, they already had full pipelines to train VFNet, YoloV5, EfficientDet4 and more! Together we tuned the models, built some classifier models, and ensembled everything using weighted box fusion and Bayesian statistics!</p>
<h1>Trust Your CV - mAP 0.488</h1>
<p>Our final solution was an ensemble of three object detection models:</p>
<ul>
<li>VFNet on 1024x1024, infer 800x800 - CV mAP 0.463, Public LB 0.287, <strong>Private LB 0.289</strong></li>
<li>Yolov5 on 1024x1024, infer 768x768 - CV mAP 0.464, Public LB 0.268, <strong>Private LB 0.290</strong></li>
<li>EfficientDet4 on 1024x1024, infer 1024x1024 - CV 0.460, Public LB 0.237, <strong>Private LB 0.282</strong></li>
</ul>
<p>and three 15 class classifiers, i.e. 15 binary outs, with ensemble "no finding" <strong>CV AUC 0.9934</strong>:</p>
<ul>
<li>EfficientNet B4 on 512x512 - CV on finding/no finding output, AUC 0.990</li>
<li>EfficientNet B4 on 768x768 - CV on finding/no finding, AUC 0.991</li>
<li>DenseNet201 on 768x768 - CV on finding/no finding, AUC 0.991</li>
</ul>
<p>All decisions on how to ensemble folds and models. And how to combine object detection models with classifier models were determined via local validation.</p>
<p>With ensemble, we achieved a validation score of <strong>0.488 mAP, public LB 0.272, and private LB 0.296</strong>. To simulate the test set with our validation set, we randomly only kept one doctor's predictions per class per image otherwise validation would not be like LB scoring.</p>
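<p>A small sketch of how that validation simulation could be done with pandas, assuming the standard train.csv columns <code>image_id</code>, <code>class_id</code> and <code>rad_id</code> (not the author's exact code):</p>
<pre><code>import pandas as pd

def simulate_test_annotations(train_df, seed=42):
    """For each (image, class), randomly keep the boxes of a single radiologist."""
    keep = (
        train_df.groupby(["image_id", "class_id"])["rad_id"]
        .apply(lambda s: s.drop_duplicates().sample(1, random_state=seed).iloc[0])
        .rename("kept_rad")
        .reset_index()
    )
    out = train_df.merge(keep, on=["image_id", "class_id"])
    return out[out["rad_id"] == out["kept_rad"]].drop(columns="kept_rad")
</code></pre>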
<h1>Competition Metric is like AUC</h1>
<p>For a given class, every bbox confidence score you predict needs to be ordered from most likely to least likely. Just like AUC metric. This is not per image, this is per CSV file for each class. If you predict a total of 30,000 bbox for class 0, then all 30,000 bbox confidence scores need to be correctly ordered from most likely to be class 0 to least likely to be class 0. </p>
<h1>Bbox Confidence Score Order is Important</h1>
<p>By simply adjusting the confidence scores of your bbox and not changing their xmin, xmax, ymin, ymax you can increase your CV and LB score! In the following example, the entire plot is 1 class, each dot is 1 bbox and the numbers are confidence scores. If the confidence score for bbox D and G are changed then the mAP for this class increases by +0.12 !! Therefore it is important to calibrate bbox probabilities.</p>
<h2>Order 1</h2>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Feb-2022/map1.png" alt=""></p>
<h2>Order 2</h2>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Feb-2022/map2.png" alt=""></p>
<h1>No Penalty For Adding More Bbox</h1>
<p>If you add new bbox with confidence score less than the lowest confidence score for all of that class in your entire CSV file, then your CV LB can only increase, it cannot decrease! Therefore instead of removing bbox it is better to calibrate probabilities, i.e. lower probability for unlikely bbox.<br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Feb-2022/map3.png" alt=""></p>
<h1>Calibrated Confidence Scores</h1>
<p>The most effective way to train an object detection model is to remove all the images without any bbox. After doing this, your object detection model will predict a confidence score equal to <br>
$$ \text{confidence score} = \text{prob}(class \text{ given } finding = \text{True})$$<br>
Therefore, to get the calibrated probability, i.e. the correct confidence score for classes 0 through 13, you must use the conditional probability rule and multiply by <code>prob(finding) = 1 - prob(no finding)</code>. So we use our classifier and the formula<br>
$$ \text{confidence score} = \text{prob}(class \text{ given }finding = \text{True}) * \text{prob}(finding =\text{True}) $$<br>
The first prob comes from object detection model and the second prob comes from classifier model ensemble. Using CV we discovered you can improve mAP metric with a 15 class classifier and the following formula:<br>
$$ \text{score} = base * \text{prob}(class)^{0.1} * \text{prob}(finding)^{0.1} $$<br>
where <code>base = prob(class given finding = True)</code> that comes from your object detection model. And <code>prob(class)</code> is the probability that specific class is present on image from classifier model. And <code>prob(finding)</code> is the probability that image has a finding from classifier model.</p>
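<p>In code, the calibration above is just a per-box rescaling (a direct reading of the formula; the function and variable names are mine):</p>
<pre><code>import numpy as np

def calibrate_score(base, p_class, p_finding):
    """score = base * prob(class)**0.1 * prob(finding)**0.1, exponents found by grid search on CV."""
    return base * np.power(p_class, 0.1) * np.power(p_finding, 0.1)
</code></pre>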
<h1>Add Class 14, finding/ no finding</h1>
<p>There is nothing fancy about adding predictions for class 14. For each image just add the string <code>14 PR 0 0 1 1</code> where <code>PR = prob(no finding)</code> from your classifier. If your classifier is perfect, then you will score a full <code>mAP = 1.0</code> for class 14. Most classifiers have CV AUC 0.99+, so most teams will maximize this class's mAP.</p>
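<p>A hedged sketch of appending the class-14 prediction to a submission dataframe (the column names <code>image_id</code> and <code>PredictionString</code> are assumptions):</p>
<pre><code>import pandas as pd

def add_class_14(sub, prob_no_finding):
    """Append '14 PR 0 0 1 1' to every image's PredictionString, with PR = prob(no finding)."""
    def _append(row):
        pr = prob_no_finding[row["image_id"]]
        extra = f"14 {pr} 0 0 1 1"
        ps = row["PredictionString"]
        return f"{ps} {extra}".strip() if isinstance(ps, str) and ps else extra
    out = sub.copy()
    out["PredictionString"] = out.apply(_append, axis=1)
    return out
</code></pre>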
VinBigData Chest X-ray Abnormalities Detection | 8th Place: Last hours top and late to be higher | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Let's talk about a two-week adventure in the black-and-white world of lung.</p>
<h2>Classification</h2>
<p>We had 2 classification models:</p>
<ul>
<li>Binary classification for 14 / non-14 class. It was used for the postprocessing / filtering of the object detection predictions</li>
<li>Multilabel classification for 14 disease classes. It was used to calculate the final box probability as: <code>p_box * 0.9 + p_cls * 0.1</code>, where <code>p_box</code> is a probability of the box itself, and <code>p_cls</code> is the probability of the corresponding class for the whole image</li>
</ul>
<p>Both models used ResNet34 as a backbone and standard augmentations: HorizontalFlip, RandomBrightnessContrast, ShiftScaleRotate. The binary classifier achieved 0.992 ROC AUC locally but scored only 0.059 mAP when submitted separately on the LB. So our classifiers were probably quite weak, which is why in the final models we ensembled the binary classifier with publicly available ones.</p>
<h2>Preprocessing</h2>
<p>Preprocessing did not produce any improvements. We tried:</p>
<ul>
<li>Equalization</li>
<li><a href="https://www.kaggle.com/ratthachat/aptos-eye-preprocessing-in-diabetic-retinopathy#2.-Try-Ben-Graham's-preprocessing-method." target="_blank">Graham's-preprocessing</a></li>
<li>combine preprocessing options into different channels<br>
<img src="https://i.imgur.com/BkzsbeP.png" alt=""></li>
</ul>
<h2>Preparing training data:</h2>
<ul>
<li>Remove duplicates</li>
<li>Different radiologist at different images</li>
<li>R11-17 (rare) radiologist</li>
<li>Add 35% No finding</li>
</ul>
<h2>Detector</h2>
<ul>
<li><a href="https://github.com/shinya7y/UniverseNet" target="_blank">Universenet</a><br>
-- universenet50-gfl<br>
-- universenet101-gfl</li>
<li>FasterRCNN R101</li>
<li>YoloV5</li>
</ul>
<p>The universenet50-gfl and universenet101-gfl models gave the greatest contribution; Focal Loss is probably not bad for this task.<br>
Best model: universenet50-gfl on all data (different radiologists) with 35% No finding gives <strong>LB 0.288 / 0.281</strong>. CV 0.492 on all data and 0.54 when validating only on images with findings in the ground truth.</p>
<h2>Extra data for pseudo labeling</h2>
<p>Take the NIH dataset and run the ensemble on it.<br>
Map in the appropriate classes, e.g.:<br>
'Hernia' -> 'Aortic enlargement'.<br>
We kept only the bounding-box classes that matched the NIH labels.<br>
Light class balancing -- take 31,000 images with pseudo labels from NIH.<br>
Fine-tuning some folds on it added 0.001 to the ensemble and boosted classes 9, 11 and 12.</p>
<h2>BBox aggregation:</h2>
<p>We use a kind of <a href="https://github.com/dereyly/signate_tobaco/blob/b458e4fbd312b186ed0cac92321486b7d2fce175/data_utils/boxes_mask.py#L394" target="_blank">voting</a>. It depends on the score, the IoU score, and unique bboxes. The "unique" part simply counts how many models vote for the current bbox (it is better to run NMS on each model's results first, so that a model does not contribute multiple bboxes at the same position). This boosts "No finding" images when we are not sure about the classifier:<br>
voting 0.502 all / 0.564 val only on gt<br>
uniq voting 0.522 all / 0.56 val only on gt</p>
<p>The picture only started to come together towards the end, so only a fraction of the cool stuff got done. Best of the unpublished submissions: private 0.317 (public 0.309).<br>
Best too-late moments:</p>
<ul>
<li>no filtering by class 14 -- +0.08</li>
<li>more models with the rare R11-17 radiologists (fewer models with all data) in the final ensemble -- +0.09<br>
Thanks to the team <a href="https://www.kaggle.com/ybabakhin" target="_blank">@ybabakhin</a> <a href="https://www.kaggle.com/golubev" target="_blank">@golubev</a> for this trip</li>
</ul> |
VinBigData Chest X-ray Abnormalities Detection | [10th place solution] YOLOv5 + VFNet + FasterRCNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The single scores of my VFNet models are about 0.23-0.24 for both public and private leaderboards. I didn't test the ensemble of only VFNets. Aside the augmentations that I mentioned, the lr I used is 0.01 and other settings are default.</p>
<p>Thank you! The best CV/LB of a single yolov5 model I got were 0.41 and 0.24. I didn't test the ensemble of the final 5 yolov5 models that I used because I decided to trust my CV. The ensemble with only yolov5 I tested is around 0.23 (both public and private).</p>
<p>Thank you! Your notebooks are really detailed and helpful.</p>
<p>Thank you!</p>
<p>Thank you very much! I couldn't have done it without your work.</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>Thank you! To ensemble the results, I predicted the test set using each individual model and formatted the predictions to the correct input for weighted boxes fusion. You can see the format in this github: <a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">https://github.com/ZFTurbo/Weighted-Boxes-Fusion</a></p>
<p>The public scores for most of my single yolov5 models (after 2 class filter) are around 0.2-0.24 and the private scores for most of them are around 0.23-0.24.</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>Cảm ơn :))</p>
<p>Thank you! The highest public score I got before I separated the IoU was 0.292, and the public score after I separated the IoU was 0.31. For the private leaderboard, the best ensemble with separated IoU scored only slightly better than the best ensemble without it (0.304 vs 0.295). However, for the best ensemble without separated IoU I used a bigger skip_box_thr, so that might be the factor.</p>
<p>Hello! Congratulations to all the winners. I am very excited to be in top 10 as this was my first competition on Kaggle. Thanks to <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> <a href="https://www.kaggle.com/corochann" target="_blank">@corochann</a> <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a> <a href="https://www.kaggle.com/sreevishnudamodaran" target="_blank">@sreevishnudamodaran</a> <a href="https://www.kaggle.com/quillio" target="_blank">@quillio</a> and <a href="https://www.kaggle.com/gauravsingh1" target="_blank">@gauravsingh1</a> for their amazing public notebooks. Thanks to Vingroup Big Data Institute and Kaggle for organizing this competition.</p>
<p><strong>TLDR</strong></p>
<ul>
<li>I used weighted boxes fusion (WBF) by <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> to preprocess the input.</li>
<li>I used an ensemble of 5 Yolov5 models, 1 FasterRCNN model and 6 VFNet models.</li>
<li>I used WBF to ensemble the outputs.</li>
<li>Source code: <a href="https://github.com/kc3222/Kaggle/tree/main/VinBigDataChestXRayAbnormalitiesDetection" target="_blank">https://github.com/kc3222/Kaggle/tree/main/VinBigDataChestXRayAbnormalitiesDetection</a></li>
</ul>
<p><strong>Models</strong></p>
<ul>
<li>5 Yolov5 models trained on 768x768 images (5 folds) based on:<br>
<a href="https://www.kaggle.com/awsaf49/vinbigdata-cxr-ad-yolov5-14-class-train" target="_blank">https://www.kaggle.com/awsaf49/vinbigdata-cxr-ad-yolov5-14-class-train</a></li>
<li>1 FasterRCNN model trained based on:<br>
<a href="https://www.kaggle.com/corochann/vinbigdata-detectron2-train" target="_blank">https://www.kaggle.com/corochann/vinbigdata-detectron2-train</a></li>
<li>6 VFNetmodels trained (5 folds + 1 random split) based on:<br>
<a href="https://www.kaggle.com/gauravsingh1/mmdet-pytorch-framework-training-fasterrcnn-base" target="_blank">https://www.kaggle.com/gauravsingh1/mmdet-pytorch-framework-training-fasterrcnn-base</a></li>
<li>I tried to train some classification models but none of them provides a better score than the classification by <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> so I decided to stick with their classification.</li>
</ul>
<p><strong>Preprocess</strong></p>
<ul>
<li>I applied WBF with iou=0.5 to the base bounding boxes.</li>
</ul>
<p><strong>Training Methods</strong></p>
<ul>
<li>For yolov5, I trained the base models with hyp.scratch.yaml for 50 epochs and finetune them with hyp.finetune.yaml for another 50 epochs (both of these hyperparameter settings are default in yolov5).</li>
<li>For Faster RCNN, I trained the model based on the mentioned public notebook.</li>
<li>For VFNet, I trained each model for 15 epochs with horizontal flip, random brightness contrast and slight rotation augmentations.</li>
</ul>
<p><strong>Postprocess Strategy</strong></p>
<ul>
<li>The ensemble technique I used is WBF with skip_box_threshold=0.3. The threshold is chosen based on the best public scores.</li>
<li>The weight of each model is the mAP50 score on respective validation set of each model.</li>
<li>Based on the input bounding boxes, 5 classes (Aortic enlargement, Atelectasis, Cardiomegaly, Consolidation and Pneumothorax) have fewer than 1.5 bounding boxes per image on average, while other classes have about 2 bounding boxes per image on average. Therefore, I decided to ensemble bounding boxes of the mentioned classes with iou = 0.4 (to reduce the number of boxes) and bounding boxes of the other classes with iou = 0.5 (a rough sketch of this class-dependent fusion follows this list).</li>
<li>The classification predictions are used on all VFNet models, the Faster RCNN model and two Yolov5 models. Three Yolov5 models are left unfiltered, as some classifications are wrong and Yolov5 models produce a relatively small number of bounding boxes even if the image is normal.</li>
</ul>
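<p>A rough sketch of the class-dependent WBF mentioned in the list above (the ids in <code>LOW_IOU_CLASSES</code> are assumed and this is not the author's exact code):</p>
<pre><code>import numpy as np
from ensemble_boxes import weighted_boxes_fusion

LOW_IOU_CLASSES = {0, 1, 3, 4, 12}  # assumed ids of the five sparse-box classes

def fuse_with_class_dependent_iou(boxes_list, scores_list, labels_list,
                                  weights=None, skip_box_thr=0.3):
    """Run WBF twice: iou_thr=0.4 for the sparse-box classes, iou_thr=0.5 for the rest."""
    fused = []
    for iou_thr, keep_low in ((0.4, True), (0.5, False)):
        b_sub, s_sub, l_sub = [], [], []
        for b, s, l in zip(boxes_list, scores_list, labels_list):
            labels_arr = np.asarray(l).astype(int)
            mask = np.isin(labels_arr, list(LOW_IOU_CLASSES))
            if not keep_low:
                mask = ~mask
            b_sub.append(np.asarray(b)[mask])
            s_sub.append(np.asarray(s)[mask])
            l_sub.append(labels_arr[mask])
        fused.append(weighted_boxes_fusion(b_sub, s_sub, l_sub, weights=weights,
                                           iou_thr=iou_thr, skip_box_thr=skip_box_thr))
    boxes = np.concatenate([f[0] for f in fused])
    scores = np.concatenate([f[1] for f in fused])
    labels = np.concatenate([f[2] for f in fused])
    return boxes, scores, labels
</code></pre>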
<p><strong>Side note</strong></p>
<ul>
<li>All submissions that I used to get my public scores are an ensemble of 5 Yolov5 models, 1 FasterRCNN model and 1 VFNet model. Most of them also get high private scores (0.3-0.304). I didn't choose any of them because I thought that they would generalize worse than an ensemble of more models. I think that finding the balance between the number of models and the number of bounding boxes would result in a better score.</li>
</ul> |
VinBigData Chest X-ray Abnormalities Detection | [11th place solution] YOLOv5 + VFNet + EfficientD6 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!!!</p>
<p><a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion/blob/master/ensemble_boxes/ensemble_boxes_wbf.py#L193" target="_blank">here</a> the author updates weighted_boxes every time it founds a matching box. But I think we should update weighted_boxes only once after iterating over all boxes. I modified that and get better result in this competition, and also better result on the provided benchmark (Open Images Dataset), author's implementation got 0.5982, mine got 0.60xx (I don't remember exactly).</p>
<p>thanks <a href="https://www.kaggle.com/sergeyzlobin" target="_blank">@sergeyzlobin</a> you are right, it's 0.015.</p>
<p>you can refer to <a href="https://arxiv.org/pdf/1703.01780.pdf" target="_blank">this paper</a>. In the paper the author uses one mean teacher to train one student. In this competition, I use 4 teachers to train 1 student, so I call it quadruple mean teachers :)).</p>
<p>Thanks <a href="https://www.kaggle.com/stanleyjzheng" target="_blank">@stanleyjzheng</a> . Actually, I just reimplemented WBF paper. I think the author's implementation is not exactly same as describing in the paper. </p>
<p>Congratulation to all and thanks to Vingroup Big Data Institute for this interesting competition.<br>
So here are some essential points of my solutions.</p>
<p><strong>1. Summary</strong></p>
<ul>
<li>2 classes classifier: efficientnetB6</li>
<li>object detection: Yolov5 (yolov5x, yolov5l, yolov5x + ASFF head, yolov5-p6) + EfficientD6 + VFNet</li>
<li>Preprocessing boxes: WBF(IOU=0.7)</li>
<li>Post-processing: customized WBF (this customization brings 0.015mAP improvement)</li>
<li>Train yolov5 with quadruple mean teachers.</li>
<li>Ensemble</li>
</ul>
<p><strong>2. Classification models</strong></p>
<ul>
<li>For 2 class classifier, I slightly modify <a href="https://www.kaggle.com/solosquad1999/vinbigdata-train-efficientnet-2-classes" target="_blank">this notebook</a>. Thanks to <a href="https://www.kaggle.com/solosquad1999" target="_blank">@solosquad1999</a> </li>
<li>I use efficientnetB6, image size 768.</li>
<li>My 2-class classifier achieves an AUC of 0.992 and an AP of 0.96.</li>
</ul>
<p><strong>3. Detection model</strong></p>
<ul>
<li>Yolov5: <ul>
<li>I use 3 models from <a href="https://github.com/ultralytics/yolov5" target="_blank">yolov5 repo</a> (yolov5x, yolov5l and yolov5-p6) and yolov5_asff from this <a href="https://github.com/ultralytics/yolov5" target="_blank">pull request</a></li>
<li>I add mosaic2 and mosaic3 and use it together with mosaic4 from yolov5 repo.</li>
<li>I train each model 60 epochs at image size 640. I didn't test larger image size. </li>
<li>A single model has CV around 0.42 and LB score ~0.2-0.25</li></ul></li>
<li>VFNet: <ul>
<li>I use VFNet (resnet101) model from mmdetection and add more augmentation (Noise, RandomBrightnessContrast, etc). </li>
<li>My VFNet model works as well as yolov5, CV ~0.42 and LB 0.24.</li></ul></li>
<li>EfficientD6: I train EfficientD6 at image size 512x512; it gives me a better CV (0.43) but a lower LB (0.2). I can't use a larger image size because of limited GPU VRAM.</li>
</ul>
<p><strong>4. Training with mean teachers</strong><br>
<img src="https://i.ibb.co/jT2RH7S/Untitled-Diagram.png" alt=""></p>
<p><code>loss = yolov5_loss + alpha*consistency_losses</code><br>
<code>alpha</code> is calculated using this function</p>
<pre><code>import numpy as np

def sigmoid_rampup(current, rampup_length=5):
    """Exponential rampup from https://arxiv.org/abs/1610.02242"""
    # parameter names fixed so that the body matches the signature
    if rampup_length == 0:
        return 0.1
    else:
        current = np.clip(current, 0.0, rampup_length)
        phase = 1.0 - current / rampup_length
        return 0.1 * float(np.exp(-5.0 * phase * phase))
</code></pre>
<p>Using mean teachers improves CV by ~0.02 mAP; however, it does not add much diversity to the ensemble.</p>
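<p>For context, the mean-teacher weights are updated as an exponential moving average of the student's weights; the sketch below is a generic illustration (the decay value and function names are assumptions, not the author's settings):</p>
<pre><code>import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """Mean-teacher update: teacher = decay * teacher + (1 - decay) * student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.data.mul_(decay).add_(s.data, alpha=1.0 - decay)

# total loss as described above, with alpha from sigmoid_rampup:
# loss = yolov5_loss + alpha * consistency_loss
</code></pre>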
VinBigData Chest X-ray Abnormalities Detection | [34th place solution] Single model Yolov5 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>hmm 🤔……</p>
<p>Thank you, your great kernels help me a lot in this competition 😁</p>
<p>I did not train my model with the original data; I used WBF box fusion with IoU 0.4, 0.5 and 0.6 during the competition. IoU = 0.6 is probably the best threshold I found.</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>Congratulations to all the winners and special thanks to Vingroup Big Data Institute for organizing a very interesting competition, I have learned a lot from this.</p>
<p>At the beginning of the competition, I decided to use Retinanet (from <a href="https://keras.io/examples/vision/retinanet" target="_blank">this post</a>) but after almost two months I could not improve my score. I switched to Yolov5 since I noticed a promising result from <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a>'s <a href="https://www.kaggle.com/awsaf49/vinbigdata-cxr-ad-yolov5-14-class-train" target="_blank">Yolov5 notebook</a>, and the mAP increased significantly (I wish I had realized it earlier :)), so I stuck with it.</p>
<p>My solution is the ensemble of 5 folds Yolov5x (thanks to <a href="https://github.com/glenn-jocher" target="_blank">Glenn Jocher</a>) and uses 2 class filter classifier to remove normal images.</p>
<h2>Preprocessing</h2>
<ul>
<li>Fusion box using WBF with IOU 0.5</li>
<li>Remove (incorrect) annotations by some data visualization (bar plot) based on: position/width/height/w/h ratio/area of bounding boxes</li>
<li>I noticed that the bounding boxes of class <code>Calcification</code> are sometimes overlapping with <code>Aortic enlargement</code> in the train data, but my model predicts smaller boxes, so I decided to fix them by using predictions from the Retinanet model.</li>
<li>5-fold stratified split, using this repo: <a href="https://github.com/trent-b/iterative-stratification" target="_blank">https://github.com/trent-b/iterative-stratification</a> (a usage sketch follows this list)</li>
</ul>
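<p>A usage sketch of the stratified split mentioned above, assuming an annotation dataframe with columns <code>image_id</code> and <code>class_id</code> (not the author's exact code):</p>
<pre><code>import numpy as np
import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

# one 0/1 row per image over the class ids present on it
y = pd.crosstab(df["image_id"], df["class_id"]).clip(upper=1)
images = y.index.to_numpy()

mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_of = {}
for fold, (_, val_idx) in enumerate(mskf.split(images, y.values)):
    fold_of.update({img: fold for img in images[val_idx]})
</code></pre>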
<h2>Detection</h2>
<ul>
<li>5 Folds ensemble of YoloV5x (640x640), using provided weight from the repository for initialization</li>
<li>Config: mostly identical to <a href="https://github.com/ultralytics/yolov5/blob/master/data/hyp.scratch.yaml" target="_blank">hyp.scratch.yaml</a>. But I changed some augmentations as below:<ul>
<li>Scale: 0.6</li>
<li>Translate: 0.2</li>
<li>Degrees and Shear: 0.1 (might be too small)</li>
<li>Mosaic: 0.7</li></ul></li>
<li>Besides, I added random intensity [-10 10] inspired by <a href="https://www.kaggle.com/zfturbo" target="_blank">ZFTurbo</a> <a href="https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/64633" target="_blank">post</a>.</li>
</ul>
<h2>Classification (2 class filter)</h2>
<ul>
<li>I was focusing on improving my detection models and forgot about the importance of classification. Then I tried to train my own classifier model; the score improved by 0.001 on private data compared to <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a>'s <a href="https://www.kaggle.com/awsaf49/vinbigdata-2-class-filter" target="_blank">result</a></li>
</ul>
<table>
<thead>
<tr>
<th>Model</th>
<th>Public score</th>
<th>Private score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ensemble of 4 folds</td>
<td>0.279</td>
<td><strong>0.272</strong></td>
</tr>
<tr>
<td>Awsaf result</td>
<td><strong>0.281</strong></td>
<td>0.271</td>
</tr>
</tbody>
</table>
<ul>
<li><p>I trained EfficientNetB5 (noisy student) in 4 folds and removed the confusing images based on the difference between prediction and ground truth: if (prediction - ground truth label) > 0.95, the image is removed (a wrong prediction made with too high confidence)</p></li>
<li><p>Then I trained EfficientNetB5 again with the cleaned data; the AUC scores of the 4-fold ensemble on validation data are as below</p></li>
</ul>
<table>
<thead>
<tr>
<th>Fold</th>
<th>AUC</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.9935</td>
</tr>
<tr>
<td>1</td>
<td>0.9929</td>
</tr>
<tr>
<td>2</td>
<td>0.9927</td>
</tr>
<tr>
<td>3</td>
<td>0.9930</td>
</tr>
</tbody>
</table>
<h2>Things that not work</h2>
<ul>
<li>Label smoothing for Yolov5: I tried it, but the LB was too bad, so I skipped it</li>
<li>VFnet and EfficientDet: worse performance than Yolov5 (probably my config is not good) and took longer to train</li>
</ul>
<p>When the private scores were released, I noticed that my YoloV5x run (preprocessing IoU 0.6) performed noticeably better on the private score but had a low public LB score, so I had skipped it. It would have been better if I had ensembled my final prediction based on CV instead of LB :D</p>
VinBigData Chest X-ray Abnormalities Detection | [4th place solution] YOLOv5x | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: VinBigData Chest X-ray Abnormalities Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to all participants and organizers!<br>
And congratulations to all the winners!<br>
Here's the review of my solution.</p>
<h1>Summary</h1>
<ul>
<li>model: yolov5x</li>
<li>image size: 640</li>
<li>TTA: 3 scale patterns and horizontal flip</li>
<li>ensemble: (4fold cv * 3 different preprocessed labels ) = 12 models</li>
</ul>
<h1>Preprocessing</h1>
<p>In the preprocessing, I used wbf and merged the boxes with iou=0.4.<br>
This merges the boxes drawn by multiple annotators into a single box.<br>
One thing to keep in mind is how many of the three annotators judged it to be a disease.<br>
There is a difference in how much confidence one can have in a disease box depending on whether one, two, or three people judged it to be a disease.<br>
I thought it was necessary to create a model that could account for this difference.</p>
<p>First, I created three different training labels using the following methods.</p>
<ul>
<li>labels-1: Leave boxes that are considered a disease by one or more annotators.</li>
<li>labels-2: Leave boxes that are considered a disease by two or more annotators.</li>
<li>labels-3: Leave boxes that are considered a disease by all three annotators.</li>
</ul>
<h1>Training</h1>
<p>The above three patterns of labels were trained with yolov5x.<br>
I trained 50 epochs with image size 640, and the only hyperparameter change was to change the mixup from 0 to 0.5.<br>
Here are the four models I created.</p>
<ul>
<li>model-1: trained labels-1 on all images</li>
<li>model-2: trained labels-2 on all images</li>
<li>model-3: trained labels-3 on all images</li>
<li>model-4: trained labels-1 only on images with disease</li>
</ul>
<h1>Inference</h1>
<p>I modified the TTA implemented in yolov5 a bit.<br>
TTA was performed on six image patterns using a combination of three scale patterns [1, 0.83, 0.67] and horizontal flip.</p>
<p>The final output is an ensemble of the model's outputs using wbf with iou=0.5.<br>
Here are the scores for each model in the ensemble.</p>
<ul>
<li>model-1: public 0.264, private 0.278</li>
<li>model-1 + model-2: public 0.271, private 0.295</li>
<li>model-1 + model-2 + model-3: public 0.267, private 0.303</li>
<li>model-1 + model-2 + model-4: public 0.278, private 0.297</li>
<li>model-1 + model-2 + model-3 + model-4: public 0.276, private 0.306</li>
</ul>
<p><strong>Why the ensemble improved the score:</strong><br>
For the ensemble using WBF, a box detected by more models will get a higher confidence score, and a box detected by fewer models will get a lower one.<br>
The boxes that are judged to be a disease by more annotators are trained to be detected by more models (model-1, model-2, model-3). Therefore, the more likely a box is to be diseased, the higher its confidence score will be. As a result, the mAP is improved.</p>
<h1>Class 14, no finding</h1>
<p>I considered the highest confidence score among the boxes detected by model-1 and model-2 as the <em>finding_score</em> of the image.<br>
Then, I appended f'14 {1-<em>finding_score</em>} 0 0 1 1' to the results of all images.<br>
As a result, the class 14 score will be higher for images where the box is not easily detected by the detection models.<br>
Since mAP only considers the order based on the confidence score, it is safe to add f'14 {1-finding_score} 0 0 1 1' to all images.</p>
<h1>Final selection</h1>
<p>I chose the following two outputs for my final selection</p>
<ul>
<li>model-1 + model-2 + model-4: public 0.278, private 0.297</li>
<li>model-1 + model-2 + model-4 + some public notebook's output: public 0.285, private 0.303</li>
</ul>
<p>I couldn't choose the output with the model-3 ensemble.<br>
This is because model-3 had a low mAP due to the extremely small sample size of minor diseases, and the public LB dropped when model-3 was ensembled.<br>
So sad :(</p>
<p>Thank you!!</p>
<p>Thank you!<br>
I didn't modified the wbf source code.</p>
<blockquote>
<p>Because I thought wbf was simply averaging the confidence bbox scores when fused.</p>
</blockquote>
<p>That's right.<br>
However, when the detected and undetected bboxes are merged, the confidence score will be lower.<br>
For example, a bbox score detected by only one of the three models will be reduced to a third when the outputs of the three models are merged.<br>
As a result, the score of bboxes detected by fewer models will be lower, and the score of bboxes detected by many models will be relatively higher.</p>
<p>Thank you!<br>
Congrats on your silver medal too!</p>
<p>Thank you!<br>
I think the fun part of Kaggle is that simple solution can get a high score depending on ideas!</p>
<p>Thank you!!<br>
Yes! Yolov5 is fantastic :)</p> |
WSDM - KKBox's Churn Prediction Challenge | Solution sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: WSDM - KKBox's Churn Prediction Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Paul</p>
<p>If I remember right, yes, re-labeling did a better job than scala label (~0.0015 gain).</p>
<p>Hi, congrats to the winners, and thanks everyone. Here is the brief description of my solution:
<br><br></p>
<h3>1. Re-labeling</h3>
<p>According to <a href="https://www.kaggle.com/c/kkbox-churn-prediction-challenge/discussion/45991">this post</a>, the training sets posted for the contest differed greatly from the labels generated by the Scala code. In my solution, one of my models uses the Scala label, and the other model uses the label generated by <a href="https://github.com/InfiniteWing/Kaggle/blob/master/KKBOX%20churn/code/labeler_v2_final.py">my python script</a> (I did not optimize the code yet).
<br><br></p>
<h3>2. Features</h3>
<p>Here is the list of my features, feel free to ask me if the feature name is hard to understand:</p>
<p>P.S. For 201702 churners (whose membership expired in 201702), I traced the logs between 20170101 and 20170131.</p>
<p>['payment_method_id', 'payment_plan_days', 'plan_list_price', 'actual_amount_paid', 'is_auto_renew', 'last_1_is_churn', 'last_2_is_churn', 'last_3_is_churn', 'last_4_is_churn', 'last_5_is_churn', 'churn_rate', 'churn_count', 'transaction_count', 'discount', 'is_discount', 'amt_per_day', 'num_25_mean', 'num_50_mean', 'num_75_mean', 'num_985_mean', 'num_100_mean', 'num_unq_mean', 'total_secs_mean', 'num_25_sum', 'num_50_sum', 'num_75_sum', 'num_985_sum', 'num_100_sum', 'num_unq_sum', 'total_secs_sum', 'count', 'city', 'bd', 'gender', 'registered_via', 'registration_init_time']
<br><br></p>
<h3>3. Models</h3>
<p>I use xgboost, lightGBM, and CatBoost to predict churner, and then simply average the prediction. Here is the feature importance of lightGBM model.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/260458/8095/feature_importance_0.png" alt="feature importance of lightGBM model" title=""></p>
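<p>The final blend is a plain average of the three model predictions; a minimal sketch (variable names are assumptions):</p>
<pre><code>import numpy as np
import pandas as pd

# pred_xgb, pred_lgb, pred_cat: churn probabilities for the same test rows
final_pred = np.mean([pred_xgb, pred_lgb, pred_cat], axis=0)
pd.DataFrame({"msno": test_msno, "is_churn": final_pred}).to_csv("submission.csv", index=False)
</code></pre>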
<p>Thanks, and see you next competition.</p>
<p>Hi, Aniket, </p>
<p>If one member had membership expire five times in the past, for example 201603, 201604, 201605, 201606, and 201607, "last_1_is_churn" means did this member churn in 201607 or not; "last_2_is_churn" means did this member churn in 201606 or not; and so on.</p> |
WSDM - KKBox's Music Recommendation Challenge | A brief introduction to the 1st place solution (codes released) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: WSDM - KKBox's Music Recommendation Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!</p>
<p>I didn't get the same error. It seems that the code<code>time.mktime(time.strptime(str(x)))</code> is causing the error. If you can't change the platform, you could try other ways to convert the time string to a timestamp.</p>
<p>Hi!</p>
<ol>
<li><p>Duplicated rows in both train.csv & test.csv mean that a user listened to the song at some time, then in the following 30 days, (s)he didn't re-listen to the song, while later the user listened to the song again. As a result, the target of the first listening event in the train.csv might be unreliable. Dropping the duplicated rows in train.csv can prevent the model from memorizing the target only, and force the model to infer the probability of re-listening based on other data of the user/song.</p></li>
<li><p>Since there are 20 distinct values for source_screen_name in train.csv, and 22 distinct values in train.csv + test.csv, we need to map the right feature name before dropping it.</p></li>
</ol>
<p>Update: <a href="https://github.com/lystdo/Codes-for-WSDM-CUP-Music-Rec-1st-place-solution">codes Ver 0.1.2 released</a></p>
<hr>
<p>Hi everyone!</p>
<p>First of all, many thanks to KKBox, WSDM, and Kaggle for this interesting and exciting competition. The final rankings of Top 3 were unveiled at the last minute, and we were competing at the fourth digit after the decimal point.</p>
<p>I also want to congratulate all competitors, especially @mchahhot and @Nima from team Magic Recommenders and @Vasiliy. If you achieve the scores with LightGBM alone, I think I will be able to learn a lot from your feature engineering skills.</p>
<p>I also thank @MrOoijer and @Kamil for their discussions and sharing of insights on the problem. It seems that there are not so many discussions during the competition. But I do think that we can share and learn from each other a lot after the competition is finished.</p>
<p>I will release all the codes for feature engineering and models later (if allowed). But I need some time to check the codes since now it's a mess :) And hope that the final result of this competition can help a little when I apply for chances of postdoc programs. Let's talk about the solution.</p>
<hr>
<p><strong>1. Analysis of the Problem</strong></p>
<p>Based on my understanding, this is a similar problem with Click Value Rate (CVR) predictions. Relistening is like purchasing, and first listening is like clicking. So if you are looking for some reference, I think "CTR prediction" is a more accurate keyword (since CTR/CVR predictions share similar models) compared with "recommendation" since some recommendation methods like latent-factor based or neighborhood-based collaborative filtering is not the best way for this problem.</p>
<p>And since this is more like a CVR prediction, there are some properties that we can make use of for feature engineering or model construction, including:</p>
<p><strong>(1) Missing not at random</strong></p>
<p>Some user-song pairs can be observed and some not. If a user-song pair is in the dataset, we can tell that the user has listened to the song at least once. This contains rich information about user preference and song characteristics.</p>
<p><strong>(2) Time-sensitive</strong></p>
<p>Since the dataset covers a long range of time, the pattern might evolve a lot from the beginning to the end. This is important for the validation strategy and feature engineering.</p>
<p><strong>2. Overall Framework</strong></p>
<p>Just the same as what @MrOoijer predicted, the final results is a 0.6 * LightGBMs and 0.4 * NNs, and the LightGBMs, as well as NNs, are all bagging ensembles of several predictions with a slightly different subset of features. I didn't use any external data.</p>
<p>For LightGBMs, my best single model is 0.74440 on public LB with a learning rate of 0.1 and 0.74460 at 0.05, and bagging ensemble of LightGBMs can reach about 0.74584.</p>
<p>For NNs, my best 30-ensemble on the same feature set can get 0.74215, and bagging ensemble of NNs on different feature set can get 0.74333.</p>
<p>The predictions of LightGBMs and NNs are quite uncorrelated (correlation coefficient is about 0.92). The ensemble of single LightGBM and single 30-ensemble of NN can get 0.7489+. The ensemble of ensembles can reach 0.7498+.</p>
<p>The reason why ensembling GBDTs with NNs can get a very significant boost is, I think, that GBDTs and NNs are better at the different listening flow. GBDT is more or less a greedy method. It always captures the head flow first, and then the trees behind can finally fit the user-song pairs in the tail flow. As a result, GBDT is better at finding the pattern in the head flow. NNs, in contrast, explore the broad combinations of features, so it's easier for NNs to fit the pattern in the tail flow, while the head flow might be influenced by the noise.</p>
<p><strong>3. Validation Strategy</strong></p>
<p>I use the last 20% data as the validation set. What's need to be mentioned is that for validation, I use the features generated by the train.csv only, and for submission, I use the features generated with both train.csv and test.csv. </p>
<p>The problem is, since the data covers a long range of time if we use both train.csv and test.csv to generate features for validation, we are leaking information from the future to the past. This may give you some false positive results when doing feature selection.</p>
<p><strong>4. NN Models</strong></p>
<p>The structure of the neural network is presented in the picture attached. There are some tricks about the model structure.</p>
<p><strong>(1) Field-aware</strong></p>
<p>The inputs are divided into 3 groups, i.e., user field, song field, and context field. High-level features are extracted before we concatenate the features together. This can reduce the number of parameters and prevent some unexpected feature combinations.</p>
<p><strong>(2) Use trainable embeddings with L2 regularization for categorical features</strong></p>
<p>Compared with one-hot encoding, this can also reduce the number of parameters. With the help of shared weights in latter layers and L2 regularization, we can have a better ability of generalization.</p>
<p><strong>(3) Allowing an offset for users between the final representations of a user and the predicted results from metadata</strong></p>
<p>This is achieved by the layer "msno_embeddings". This is a commonly used trick from many academic papers. With an l2 regularization, the model can automatically lean that, if the data is sufficient, the representations of a user rely more on BP from the targets of the user, and if not, rely more on the metadata. This helps about 0.001 on public LB.</p>
<p>I tried to do a similar thing with the songs, but got no improvements, mainly for that the data of songs are too sparse.</p>
<p><strong>(4) Use RMSProp as the optimizer</strong></p>
<p>I find that momentum is not working well in this problem, while naive SGD is too slow.</p>
<p><strong>(5) Add dot layers for embeddings and representations of user-song pairs</strong></p>
<p>It can speed up convergence and give better results. Some papers suggest using element-wise multiplication or even the outer product, but I only got more overfitting.</p>
<p><strong>(6) Use ensemble to check the results</strong></p>
<p>I personally never submit predictions from a single NN. An average ensemble of at least 5 NNs will make the results a lot more reliable and repeatable.</p>
<p><strong>5. LightGBM</strong></p>
<p>For LightGBM, there aren't many interesting things. I only noticed that some features are very important according to feature_importance(), yet do not have a correspondingly large influence on the validation score. This means the model fits some data better and other data worse. The average ensemble of LightGBM predictions with/without these features can give a significant boost.</p>
<p>My final LightGBM model uses about 400 features. </p>
<p>Again I learned how powerful GBDT is. A simple GBDT can beat an ensemble of 30 carefully-designed NNs (of course GBDT is an ensemble model itself). The advantage of NNs is that they are very fast compared with LightGBM: I can generate predictions in several minutes with a single GTX 1080, so I'm able to submit a lot and don't waste any chances for submission :D</p>
<p><strong>6. Feature Engineering</strong></p>
<p>Here are some important features that I found.</p>
<p><strong>(1) Conditional probability features</strong></p>
<p>P(source_type|msno), P(source_screen_name|msno), P(source_system_tab|msno), P(source_type|song_id), P(source_screen_name|song_id), P(source_system_tab|song_id), P(artist_name|msno), P(language|msno), P(first_genre_id|msno) and so on. I tried some smoothing methods but found that the MLE probability gives the best results.</p>
<p>They are useful when the number of distinct values of a categorical feature is large, like msno and song_id.</p>
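<p>A hedged sketch of computing such MLE conditional probabilities with pandas (column names follow the competition data; the helper function is mine):</p>
<pre><code>import pandas as pd

def conditional_prob(df, given, target):
    """P(target | given) estimated by MLE, returned as one column per target value."""
    counts = df.groupby([given, target]).size().rename("n").reset_index()
    counts["p"] = counts["n"] / counts.groupby(given)["n"].transform("sum")
    pivot = counts.pivot(index=given, columns=target, values="p").fillna(0)
    pivot.columns = [f"p_{target}_{c}_given_{given}" for c in pivot.columns]
    return pivot.reset_index()

# e.g. P(source_type | msno); for validation this would be computed on train.csv only
feats = conditional_prob(pd.concat([train, test]), "msno", "source_type")
train = train.merge(feats, on="msno", how="left")
</code></pre>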
<p><strong>(2) Timestamp</strong></p>
<p>Since the data is ordered chronologically, we can use the index as a timestamp.</p>
<p><strong>(3) SVD features with user-song matrix and user-artist matrix</strong></p>
<p>As mentioned before, the data is missing-not-at-random, so constructing a user-song matrix and a user-artist matrix and using SVD to get embeddings for users/songs/artists can help significantly.</p>
<p>I also tried LDA and Gamma-Poisson Factorization but didn't get better results compared with SVD.</p>
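<p>A minimal sketch of the SVD embeddings over the observed user-song pairs (TruncatedSVD is my choice for illustration; 48 components matches what is mentioned later in the thread):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD

# all observed (msno, song_id) pairs from train + test, deduplicated
pairs = pd.concat([train[["msno", "song_id"]], test[["msno", "song_id"]]]).drop_duplicates()
user_idx = pairs["msno"].astype("category").cat.codes.to_numpy()
song_idx = pairs["song_id"].astype("category").cat.codes.to_numpy()
mat = csr_matrix((np.ones(len(pairs)), (user_idx, song_idx)))

svd = TruncatedSVD(n_components=48, random_state=0)
user_factors = svd.fit_transform(mat)   # one 48-dim embedding per user
song_factors = svd.components_.T        # one 48-dim embedding per song
</code></pre>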
<p><strong>(4) Other features</strong></p>
<p>Including some count features, last song features, and next song features.</p>
<p><strong>7. Some Methods that Didn't Work</strong></p>
<p><strong>(1) <a href="http://benanne.github.io/2015/03/17/plankton.html">Pseudo labeling</a></strong></p>
<p>Using the ensemble predictions as pseudo labels for the test set is significantly helpful, but the lost diversity means the results after ensembling do not improve at all. This result disappointed me a lot, for I was hoping that NNs could learn something new from a different perspective than GBDTs.</p>
<p><strong>(2) DART mode</strong></p>
<p>DART mode of LightGBM is really slow, and uncorrelated with gbdt. But its correlation coefficient with NNs is even larger than gbdt with NNs, and the ensemble of NN/gbdt/dart models doesn't give a significant difference compared with NN/gbdt ensembles.</p>
<p><strong>(3) FFM</strong></p>
<p>I tried a little with libffm, and didn't get comparable results with LightGBM and NNs. Besides, libffm is too slow with this dataset.</p>
<p><strong>(4) Give larger weights to more recent records</strong></p>
<p>After adding the timestamp to the feature set, this only gives a very tiny improvement, so I didn't use it anymore.</p>
<p><strong>(5) Deep&Wide model</strong></p>
<p>I tried Deep&Wide models a little and got significantly worse results than Deep only models. But I didn't use any cross features in the wide part, and I didn't spend much time tuning the Deep&Wide model carefully. So there might be some room for improvements.</p>
<p><strong>8. Other Things that Come to My Mind</strong></p>
<p>Here I want to share and discuss whatever comes to my mind.</p>
<p><strong>(1) The version of LightGBM matters</strong></p>
<p>The version of LightGBM on my PC is 2.0.1, and the version on the server is 0.1, but I see repeatable results where version 2.0.1 is worse than version 0.1 by an AUC of about 0.01. I checked some default hyper-parameter changes but didn't fix the problem.</p>
<p>I don't know why this happens or whether it is just a special case. So if you are not doing well, it may be due to the LightGBM version.</p>
<p><strong>(2) Drop the duplicated rows of user-song pairs between train.csv and test.csv</strong></p>
<p>This gives a little but solid improvement.</p>
<p><strong>(3) Remaining problems</strong></p>
<p>How to use the large set of songs? I only keep the songs that appear in train.csv/test.csv.</p>
<p>I didn't try a lot of cross features. I think there is definitely some room for more feature engineering in this framework.</p>
<p>Learn-to-Rank methods might be helpful, but it will cost much more computation.</p>
<p><strong>(4) Re-listening sequences features might be helpful (didn't try)</strong></p>
<p>For online CVR prediction systems, they usually involve a lot of features about user's behavior, like purchasing or re-purchasing sequences. In this problem, the corresponding features are re-listening sequences. But I didn't try such features mainly for that in the testing set, we can't obtain the latest re-listening information. It might be helpful to design some features from this view, but it requires a lot of coding.</p>
<p><strong>(5) Is AUC the most suitable metric?</strong></p>
<p>Usually, recommender systems recommend a fixed number of songs to every user. So here is the problem: is AUC the most suitable metric in this problem? In other words, if the model finds that User A always re-listens to songs while User B seldom re-listens to songs, the AUC might be good, but is this really helpful? There are metrics like GAUC, which first calculate AUC scores over different users, then average those AUC scores. I think this is a better metric, but it may require some extra effort when preparing the proper data.</p>
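<p>As an illustration only (not code from the solution), a per-user GAUC could be computed with scikit-learn roughly like this; the column names are assumptions:</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score

def group_auc(df, user_col='msno', target_col='target', pred_col='pred'):
    # average per-user AUC; users whose targets are all 0 or all 1 are skipped
    aucs = []
    for _, g in df.groupby(user_col):
        if g[target_col].nunique() < 2:
            continue
        aucs.append(roc_auc_score(g[target_col], g[pred_col]))
    return float(np.mean(aucs))
</code></pre>
<p>A weighted variant (weighting each user's AUC by the number of that user's records) is also common in practice.</p>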
<p>Actually, most of the features are generated automatically by a certain rule. For example, I use the first 48 (sometimes 64) components of SVD over user-song matrix, so there are 256 features for components of user/song/song_before/song_after. And features like P(source_type=i|msno), i = 1, 2, ..., n, also contribute a lot to the number of features.</p>
<p>I build a matrix of user-song pairs. For example, if msno = 123 listened to song_id = 2845, then matrix[123, 2845] = 1. Then I apply SVD to the matrix. The user-artist matrix (artist_name) is handled in a similar manner.</p>
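<p>A minimal sketch of this construction with scipy/sklearn, assuming msno and song_id have already been label-encoded to integer ids (the component count follows the 48 mentioned above; everything else is illustrative, not the original code):</p>
<pre><code>import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD

# rows: encoded msno, cols: encoded song_id, value 1 if the pair appears
mat = csr_matrix((np.ones(len(df)), (df['msno'].values, df['song_id'].values)))

svd = TruncatedSVD(n_components=48, random_state=0)
user_emb = svd.fit_transform(mat)   # one 48-dim embedding per user
song_emb = svd.components_.T        # one 48-dim embedding per song
</code></pre>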
<p>Yes. I keep the first two genre_ids and drop the third and beyond if they exist. For example, 13|45|22 becomes first_genre_id = 13, second_genre_id = 45, and the last one, 22, is dropped.</p>
<p>Actually there are many tricks that can speed up the NN models. For example, we can keep the features for members/songs in GPU memory and retrieve them by msno/song_id, which reduces the load on the PCI-E bus. Besides, we can prepare the mini-batches on the CPU in parallel with training the model on the GPU.</p>
<p>Thanks! I didn't try this because I wonder how we can obtain reliable out-of-fold predictions with FFM or GBDT. For example, when generating predictions for user-song pairs in the first 10% of the data in train.csv, we would be using a lot of training data that was generated after them, while when generating predictions for pairs in test.csv, we are using only data generated before them. So the OOF predictions cannot be generated in a consistent manner. I think this would bring more harm than good.</p>
<p>I didn't use the target when generating features, so the probabilities are only based on the X of data.</p>
<p>Haha, actually it's not that horrible. The picture is like a mess while the code is clearer.</p>
<p>Thanks!~</p>
<p>Thanks! For local validation, I generate conditional probabilities with train.csv only, and for prediction, I use both.</p>
<p>Basically an embedding layer takes an integer (or an integer sequence) as input, and gives a fixed-length vector (or a vector sequence) as output. You can try searching "embedding layer" and there is rich information about it, like <a href="https://stats.stackexchange.com/questions/182775/what-is-an-embedding-layer-in-a-neural-network">this one</a>.</p>
<p>As for song_feat and usr_feat, they are some features corresponding to msno/song_id. Later you can check the code and it will be easier to understand the structure.</p>
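<p>For illustration only, a tiny Keras snippet of an embedding layer turning integer song ids into dense vectors (the names and sizes here are made up, not the solution's values):</p>
<pre><code>from keras.layers import Input, Embedding, Flatten
from keras.models import Model

song_id_in = Input(shape=(1,), dtype='int32')
# maps each of ~400k song ids to a learned 10-dimensional vector
song_vec = Flatten()(Embedding(input_dim=400000, output_dim=10)(song_id_in))
model = Model(inputs=song_id_in, outputs=song_vec)
</code></pre>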
<p>I will share the codes later if allowed. I need some time to check the codes since now it's a mess :)</p>
<p>Hi MrOoijer!</p>
<p>It's very interesting that we get different results. Based on my quick local validation with a high learning rate (0.5), I can get 0.735 with the timestamp, while only ~0.726 without, but of course, the hyper-parameters were optimal for the models with the timestamp.</p>
<p>I think the reasons why I got better results with the feature may include:</p>
<ol>
<li><p>LightGBM buckets the feature into 256 bins, so it's less likely to overfit on the timestamp compared with XGBoost.</p></li>
<li><p>I used a very large hyper-parameter (65200) for L2 regularization, so the model is heavily penalized for relying on any single feature too much (a rough parameter sketch is shown after this list).</p></li>
</ol>
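<p>A rough illustration of the two points above as LightGBM parameters; only lambda_l2 = 65200 is taken from the text, the rest are placeholders rather than the solution's actual configuration:</p>
<pre><code>import lightgbm as lgb

params = {
    'objective': 'binary',
    'metric': 'auc',
    'max_bin': 255,        # continuous features are bucketed into a fixed number of bins
    'lambda_l2': 65200,    # very strong L2 regularization, as described above
}
# model = lgb.train(params, lgb.Dataset(X_train, y_train), num_boost_round=1000)
</code></pre>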
<p>About the weights for records: actually I tried it and got some improvements before the timestamp was introduced. After I used the timestamp, giving time-related weights barely helped at all. In order to use as few hyper-parameters as possible, I didn't use it in the final submissions.</p>
<p>This function maps the order of records to the timestamp of actual date. Please check <a href="https://www.kaggle.com/liulongxiao1/pair-the-order-of-train-and-test-to-actual-date">this kernel</a> for more details.</p>
<p>Thanks! Here are my understanding and some experiments on the problem.</p>
<p>About the timestamp, actually it's helping a lot for both LightGBMs and NNs. Since the training data covers a long range of time, the pattern changed a lot from the beginning to the end. Adding timestamp to the models can help them find the pattern of evolution. For tree-based methods, all the records in the testing set will be split in a similar manner as the records at the end of the training set, so it can track the latest pattern. For NNs, I also tried setting the timestamp to a fixed value in all testing records, but I got a slightly worse result (about ~0.0002 on public LB).</p>
<p>About the method of ensembling, since I was searching for the best hyper-parameters with 80%/20% of training data, and use all the training data and the hyper-parameters obtained to train the model and make the submission, I can't try the supervised method directly. I was setting the ensemble weights by hand based on the AUCs and the correlation coefficients. Inspired by your methods, I think I can try searching for the best weights on the validation stage, it may bring some further improvements.</p>
<p>Anyway, Merry Christmas!</p>
<p>My guess is that, if a user listens to a song in a different manner compared with what he/she is used to, then the user might have just run into the song randomly, or clicked the song by accident. So he/she is not likely to re-listen to the song in the future.</p>
<p>That's a mistake, thanks for pointing it out! I've fixed it on Github.</p>
<p>There are only very few songs having more than 2 genre_ids, so I just keep the first two for simplicity.</p>
<p>Thanks!</p>
<p>Null values are set as a class of the categorical feature. You can check the code later.</p>
<p>It's initialized randomly, and trained with BP.</p>
<p>Thanks! Very interesting questions!</p>
<ol>
<li><p>I used different dimensions for embeddings across the ensemble, ranging from 6 to 15. To save some hard work on hyper-parameter tuning, I use the same dimension for all embeddings. There are shared embedding layers, like the layer for first_genre_id and second_genre_id, as well as for the last &amp; next song.</p></li>
<li><p>Actually I only used the last 1 song a user has played. I didn't find a significant improvement after adding the last 1 song to the features, so I didn't add more. Besides, it requires some tricks for LightGBM to incorporate listening histories over a longer range, and I'm not sure how it would behave with simple averaging. I guess the training data underwent a sampling procedure, so it couldn't reflect the true sequence a user is listening to.</p></li>
<li><p>I did think of using a network with two outputs, one for re-listening and one for first-listening, and I believe that it would definitely bring some improvement if the model were tuned carefully (the relative weights of the two losses, the strategy of negative sampling, and on which level the layers are shared). But since I didn't have the time and I'm not sure how much improvement it could bring, I didn't try this idea. Instead, I used the simplest way to incorporate the information of first-listening, i.e., adding SVD features to the input.</p></li>
</ol>
<p>Thanks!</p>
<ol>
<li><p>About the validation strategy: if you use the features generated with both train.csv and test.csv, then you are validating the model with data between 60% ~ 80% and testing the model with data between 80% ~ 100%. That will make some difference. For example, if you add a feature like "how many times a user appears in train.csv &amp; test.csv" - "how many times a user appeared before this record", you'll get some improvement on the validation set, while the LB scores will drop significantly. This is because in the training set a user no longer appearing means he is quitting the service, while in the testing set it may simply be that we can't collect data after the end of test.csv. Generating two sets of features can help reveal this phenomenon at the validation stage.</p></li>
<li><p>I tried SVD and LDA at the very early stage, and SVD gave ~0.713 on the public LB, while LDA gave ~0.702. Of course, if you get to ~0.74 the difference won't be that large, but I think there will definitely be some improvement.</p></li>
<li><p>"song_before" means the song_id the user listened to before the record, and "song_after" means the song_id the user listened to after the record. I doubted whether it's reasonable to use "song_after", but since we can know the source_type and so on, the algorithm is evidently deployed after the first listening event, so it might be OK to use such features.</p></li>
</ol>
<p>I'm happy to have discussions, you can contact me with Kaggle's email system! :)</p>
<p>Thanks!</p>
<p>Thanks! It's great to be young, it's great to be young~</p>
<p>Thanks! Also congratulations to you! Looking forward to learn from your solution!</p>
<p>It's a mistake, thanks for pointing it out! :)</p>
<p>If the user-song pairs were all random, using SVD to get features for users/songs would still technically be OK, but the results would be worthless.</p>
<p>Sure</p>
<p>You can check the results of LabelEncoder</p>
<p>Tensorflow 1.0.1</p>
<p>I tried dropping some features with top importance and checked the validation score. So, it's done by hand.</p>
<p>Thanks!</p>
<p>Since the data is ordered chronologically, we can use the index as a timestamp. This can indicate the time of records. The train/test data covers a long range of time, so the pattern may change over time, thus time is crucial in this task.</p>
<p>For count features, the skew is usually relatively high, like many values fall into [0, 100] while there is a long tail in [100, 10000]. So using log1p can reduce the skew, and afterward, we can further standardize the data.</p>
<p>This is usually effective for NNs and LR, but has little to no effect on GBDT.</p>
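<p>A minimal sketch of that transform (the column names are illustrative):</p>
<pre><code>import numpy as np
from sklearn.preprocessing import StandardScaler

counts = df[['user_count', 'song_count']].values        # hypothetical count features
log_counts = np.log1p(counts)                            # reduce the skew
scaled = StandardScaler().fit_transform(log_counts)      # then zero mean / unit variance
</code></pre>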
<p>Genre_id = 0 means that the genre_id is missing. Since the count for missing values can't be compared with the count for not-missing values, I set the count for 0 as NaN.</p>
<p>pandas: 0.20.1</p>
<p>sklearn: 0.18.1</p>
<p>keras: 2.0.4</p>
<p>lightgbm: 0.1</p>
<p>numpy: 1.12.1</p>
<p>scipy: 0.19.0</p>
<p>On one hand, the timestamp itself can serve as a feature, helping the model capture time-related patterns, such as a song that was popular before but not anymore after a certain time.</p>
<p>On the other hand, using a time-window strategy and generating features within the time windows can describe the temporal interests of users, which helps the model capture strong patterns within a limited time range.</p>
WSDM - KKBox's Music Recommendation Challenge | My solution without time series assumption | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: WSDM - KKBox's Music Recommendation Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone!</p>
<p>Thanks to KKBox, WSDM and Kaggle for this competition. Congratulations to all competitors, especially @lystdo, @Magic Recommenders and @rubtov Vasiliy. </p>
<p>This is the first Kaggle competition that I have really taken part in. As a Master's student, I really lacked experience going into my first Kaggle competition. At the end of the competition, I realized that there is a strong time order in the dataset, but it was too late. So my solution did not use any time-series assumptions about the dataset.</p>
<p>I will release all the codes for feature engineering and model later, if allowed.</p>
<p><strong>Model</strong></p>
<p>I use an ensemble (average) of LightGBM models. The best single model scored 0.726 on the public LB.</p>
<p><strong>Feature Engineering</strong></p>
<p>1) raw features for song, user and context</p>
<p>2) SVD matrix factorization latent space of user-song interaction matrix</p>
<p>3) SVD matrix factorization score of user-song interaction matrix</p>
<p>4) collaborative filtering (CF) score (top 100 neighbors) of user-song interaction</p>
<p>5) similarity of artist, lyricist, composer, genre, language, source_system_tab, source_type, source_screen_name, within each user.</p>
<p>6) source_system_tab, source_type, source_screen_name categorical variable rate of each user.</p>
<p>7) word embeddings of artist, lyricist, composer, genre.</p>
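<p>As a rough, hedged sketch of feature 5) above (using the dot-product similarity described in the replies below): represent each song's attributes as normalized word counts, build a user profile from the songs the user has listened to, and take the dot product with the current song. All names here are illustrative, this is not the original code:</p>
<pre><code>import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import normalize

# bag-of-words over artist names, one row per song
song_attr = normalize(CountVectorizer().fit_transform(songs['artist_name'].fillna('')))

# user profile = normalized mean of the attribute vectors of the songs the user heard
profiles = {}
for msno, g in train.groupby('msno'):
    p = np.asarray(song_attr[g['song_idx'].values].mean(axis=0)).ravel()
    profiles[msno] = p / (np.linalg.norm(p) + 1e-12)

# similarity feature for one (user, song) pair
def similarity(msno, song_idx):
    return float(profiles[msno] @ song_attr[song_idx].toarray().ravel())
</code></pre>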
<p>I use 10-fold cross-validation, where the train/validation sets are split randomly. </p>
<p>I hope you can understand.</p>
<p>Two quick questions: when and how do we submit the code, and how do we submit a paper to the workshop?
Thanks</p>
<p>Thanks! I will release all the codes for feature engineering and model later, if allowed.</p>
<p>the similarity function is dot product.</p>
<p>Hi, Still no official notifications about this?</p>
<p>It is the similarity between the attributes of songs that a user has listened to and the attributes of the song the user is listening to now. The attributes are represented by normalized word counts.</p>
WSDM - KKBox's Music Recommendation Challenge | The 3d place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: WSDM - KKBox's Music Recommendation Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, learning on the whole feature set will take around 150 GB of RAM and 24 hours on 50 threads.</p>
<p>You can skip the triples of categorical features and try to run it on your PC. </p>
<p>I will try to add feature importance to the article. </p>
<p>The matrix factorization feature has the biggest importance score (I use gain in xgboost feature importance), but this feature improves the score by only about 0.001.</p>
<p>Good question. Actually, it is a bad idea, because we would have different distributions of these features in the train/test dataframes. But we can avoid this by using a fixed-size part of the history. It would be interesting to see whether it improves the score :)</p>
<p>Yes, the history will get smaller and smaller.</p>
<p>Mean of target is the most powerful feature. Targets taken from the history will not lead to overfitting.</p>
<p>35% of the train data matches the size of the test data. It is better to use the same time interval.</p>
<p>If you drop the last 5% of the data and repeat the procedure, you get a different dataframe for fitting. A model fitted on this dataframe will differ from the first one, and combining their predictions improves the score.</p>
<p>Regression is just linear regression. linal_user[u] is the coefficient vector (a, b) of the linear regression for category u, so that y(t) = a*t + b.</p>
<p>The dimension of matrix A is N x 2. It is the design matrix on which we fit the linear regression model. The closed-form solution is (A^T * A)^(-1) * A^T * y.</p>
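<p>A small numpy sketch of that closed-form fit for one category's history (the arrays t and y are placeholders for that category's timestamps and targets):</p>
<pre><code>import numpy as np

def fit_trend(t, y):
    """Fit y(t) = a*t + b in closed form: (a, b) = (A^T A)^-1 A^T y."""
    A = np.column_stack([t, np.ones_like(t, dtype=float)])   # N x 2 design matrix
    return np.linalg.inv(A.T @ A) @ A.T @ y

# e.g. linal_user[u] = fit_trend(times_of_user_u, targets_of_user_u)
</code></pre>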
<p>I don't exactly understand the question. time_to_next_heard doesn't use the target. For the features ['msno', 'artist_name'], for example, it is the time until the next occurrence of the same msno+artist_name in all available data.</p>
<p>n = len(test_data)  # about 35% of the train data. It would have been better to say "the size of the test set". Thanks.</p>
<p>shift = int(0.05*len(train_data))  # here I specify how much will be dropped</p>
<p>Yes, soon I update my code</p>
<p>By hand :) </p>
<p>Thanks KKBox, Kaggle and WSDM for this competition! </p>
<p>The main hack of this competition is that we can generate features from the future. The data is ordered chronologically. We can see, for example, whether the user listens to the same artist again in the future.</p>
<p>I use xgboost and catboost, fitted on features among which was matrix_factorization, where I used LightFM. Catboost is better than xgboost by about 0.001. </p>
<p>I use the last 35% of train to generate the table of features, with the rest of the data used as history. For test, the whole train set is the history. For validation an analogous scheme is used, but the size of X_train, X_val was about 20% of the data.</p>
<p>From the history, features that use the target can be generated. Other features can be generated using all the data. I generate features in the context of categorical features (their pairs and triples):</p>
<p>1) mean of target from history by categorical features (pairs and triples)</p>
<p>2) count – the number of observations in all data by categorical features (pairs and triples)</p>
<p>3) regression – linear regression target by categorical features (msno, msno + genre_ids, msno+composer and so on)</p>
<p>4) time_from_prev_heard – time from last heard by categorical features (msno, msno + genre_ids, msno+composer and so on)</p>
<p>5) time_to_next_heard – time to next heard by categorical features (pairs and triples which include msno)</p>
<p>6) last_time_diff – time to the last heard by categorical features</p>
<p>7) part_of_unique_song – the share of the artist's unique songs that the user has heard, out of all the artist's unique songs</p>
<p>8) matrix_factorization – LightFM. I used msno in different contexts (source_type) as the user, and the raw msno as a feature for the user.</p>
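<p>A hedged pandas sketch of feature 1) above — the mean of the target computed on the history part and merged onto the part used for fitting (the names are illustrative, not the author's code):</p>
<pre><code>import pandas as pd

def mean_target_from_history(history, current, keys, name):
    """history: earlier records with a known target; current: rows to featurize."""
    agg = history.groupby(keys)['target'].mean().rename(name).reset_index()
    return current.merge(agg, on=keys, how='left')

# e.g. for a pair of categorical features:
# fit_part = mean_target_from_history(history, fit_part,
#                                     ['msno', 'artist_name'],
#                                     'mean_target_msno_artist')
</code></pre>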
<p>I use the last 35% for fitting. I also drop the last 5% and again use 35%. Such a procedure could be repeated many times, and this greatly improved the score (I did it 6 times on validation, which gave an increase of 0.005, but in the end I did not have time to do everything and did it only 2 times on test).</p>
<p>As a result, I fit xgboost and catboost on two parts of the data, with and without the matrix_factorization feature. And finally I blend all 8 predictions.
Sorry for my English. </p>
<p>The code is available here:</p>
<p><a href="https://github.com/VasiliyRubtsov/wsdm_music_recommendations/blob/master/pipeline.ipynb">https://github.com/VasiliyRubtsov/wsdm_music_recommendations/blob/master/pipeline.ipynb</a></p>
<p>But I haven't checked it yet :)</p>
Web Traffic Time Series Forecasting | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Web Traffic Time Series Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>TL;DR</strong> this is seq2seq model with some additions to utilize year-to-year
and quarter-to-quarter seasonality in data.</p>
<p>Model code: <a href="https://github.com/Arturus/kaggle-web-traffic">https://github.com/Arturus/kaggle-web-traffic</a></p>
<hr>
<p>There are two main information sources for prediction:</p>
<ol>
<li>Local features. If we see a trend, we expect that it will continue
(AutoRegressive model), if we see a traffic spike, it will gradually decay (Moving Average model),
if we see more traffic on holidays, we expect to have more traffic on
holidays in the future (seasonal model).</li>
<li>Global features. If we look to autocorrelation plot, we'll notice strong
year-to-year autocorrelation and some quarter-to-quarter autocorrelation.</li>
</ol>
<p>A good model should use both global and local features, combining them
in an intelligent way.
<img src="https://github.com/Arturus/kaggle-web-traffic/blob/master/images/autocorr.png?raw=true" alt="autocorrelation" title="">
I decided to use RNN seq2seq model for prediction, because:</p>
<ol>
<li>RNN can be thought as a natural extension of well-studied ARIMA models, but much more
flexible and expressive.</li>
<li>RNN is non-parametric, which greatly simplifies learning.
Imagine working with different ARIMA parameters for 145K timeseries.</li>
<li>Any exogenous feature (numerical or categorical, time-dependent or series-dependent)
can be easily injected into the model</li>
<li>seq2seq seems natural for this task: we predict next values, conditioning on joint
probability of previous values, including our past predictions. Use of past predictions
stabilizes the model, it learns to be conservative, because error accumulates on each step,
and extreme prediction at one step can ruin prediction quality for all subsequent steps.</li>
<li>Deep Learning is all the hype nowadays</li>
</ol>
<h2>Feature engineering</h2>
<p>I tried to be minimalistic, because RNN is powerful enough to discover
and learn features on its own.</p>
<p>Model feature list:</p>
<ul>
<li><em>pageviews</em> (spelled as 'hits' in the model code, because of my web-analytics background).
Raw values are transformed by log1p() to get a more-or-less normal intra-series value distribution,
instead of a skewed one.</li>
<li><em>agent</em>, <em>country</em>, <em>site</em> - these features are extracted from page urls and one-hot encoded</li>
<li><em>day of week</em> - to capture weekly seasonality</li>
<li><em>year-to-year autocorrelation</em>, <em>quarter-to-quarter autocorrelation</em> - to capture yearly and quarterly seasonality strength.</li>
<li><em>page popularity</em> - High traffic and low traffic pages have different traffic change patterns,
this feature (median of pageviews) helps to capture traffic scale.
This scale information is lost in a <em>pageviews</em> feature, because each pageviews series
independently normalized to zero mean and unit variance.</li>
<li><em>lagged pageviews</em> - I'll describe this feature later</li>
</ul>
<h2>Feature preprocessing</h2>
<p>All features (including one-hot encoded) are normalized to zero mean and unit variance. Each <em>pageviews</em>
series normalized independently.</p>
<p>Time-independent features (autocorrelations, country, etc) are "stretched" to timeseries length
i.e. repeated for each day by <code>tf.tile()</code> command.</p>
<p>Model trains on random fixed-length samples from original timeseries. For example,
if original timeseries length is 600 days, and we use 200-day samples for training,
we'll have a choice of 400 days to start the sample.</p>
<p>This sampling works as an effective data augmentation mechanism:
the training code randomly chooses a starting point for each timeseries on each
step, generating an endless stream of almost non-repeating data.</p>
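<p>A minimal numpy sketch of this sampling scheme (the shapes and window lengths are illustrative, not the actual training pipeline):</p>
<pre><code>import numpy as np

def sample_windows(series, train_window=200, predict_window=62):
    """series: array [n_pages, n_days]; returns one random (encoder, decoder) slice per page."""
    n_pages, n_days = series.shape
    max_start = n_days - train_window - predict_window
    starts = np.random.randint(0, max_start + 1, size=n_pages)
    enc = np.stack([series[i, s:s + train_window] for i, s in enumerate(starts)])
    dec = np.stack([series[i, s + train_window:s + train_window + predict_window]
                    for i, s in enumerate(starts)])
    return enc, dec
</code></pre>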
<h2>Model core</h2>
<p>Model has two main parts: encoder and decoder.</p>
<p><img src="https://github.com/Arturus/kaggle-web-traffic/blob/master/images/encoder-decoder.png?raw=true" alt="seq2seq" title=""></p>
<p>Encoder is <a href="https://www.tensorflow.org/versions/master/api_docs/python/tf/contrib/cudnn_rnn/CudnnGRU">cuDNN GRU</a>. cuDNN works much faster (5x-10x) than native Tensorflow RNNCells, at the cost
of some inconvenience to use and poor documentation.</p>
<p>Decoder is TF <code>GRUBlockCell</code>, wrapped in <code>tf.while_loop()</code> construct. Code
inside the loop gets prediction from previous step and
appends it to the input features for current step.</p>
<h2>Working with long timeseries</h2>
<p>LSTM/GRU is a great solution for relatively short sequences, up to 100-300 items.
On longer sequences LSTM/GRU still works, but can gradually forget information from the oldest items.
The competition timeseries are up to 700 days long, so I had to find some method
to "strengthen" GRU memory.</p>
<p>My first method was to use some kind of <em><a href="https://distill.pub/2016/augmented-rnns">attention</a></em>.
Attention can bring useful information from a distant past to the current
RNN cell. The simplest yet effective attention method for our problem is a
fixed-weight sliding-window attention.
There are two most important points in a distant past (taking into
account long-term seasonality): 1) year ago, 2) quarter ago.</p>
<p><img src="https://github.com/Arturus/kaggle-web-traffic/blob/master/images/from_past.png?raw=true" alt="from_past" title=""></p>
<p>I can just take
encoder outputs from <code>current_day - 365</code> and <code>current_day - 90</code> timepoints,
pass them through an FC layer to reduce dimensionality and append the result to the input
features for the decoder. This solution, despite being simple, considerably
lowered the prediction error.</p>
<p>Then I averaged important points with their neighbors to reduce noise and
compensate uneven intervals (leap years, different month lengths):
<code>attn_365 = 0.25 * day_364 + 0.5 * day_365 + 0.25 * day_366</code></p>
<p>Then I realized that <code>0.25,0.5,0.25</code> is a 1D convolutional kernel (length=3)
and I can automatically learn bigger kernel to detect important points in a past.</p>
<p>I ended up with a monstrous attention mechanism, it looks into 'fingerprint'
of each timeseries (fingerprint produced by small ConvNet), decides
which points to attend and produces weights for big convolution kernel.
This big kernel, applied to decoder outputs,
produces attention features for each prediction day. This monster is still
alive and can be found in the model code.</p>
<p>Note, I didn't use a classical attention scheme (Bahdanau or Luong attention),
because classical attention has to be recalculated from scratch on every prediction step,
using all historical datapoints. This would take too much time for our long (~2 years) timeseries.
My scheme, one convolution over all datapoints, uses the same attention
weights for all prediction steps (that's a drawback), but is much faster to compute.</p>
<p>Unsatisfied with the complexity of the attention mechanics, I tried to remove attention
completely and just take the important (year, half-year, quarter ago) datapoints
from the past and use them as additional
features for the encoder and decoder. That worked surprisingly well, even slightly
surpassing attention in prediction quality. My best public score was
achieved using only lagged datapoints, without attention.</p>
<p><img src="https://github.com/Arturus/kaggle-web-traffic/blob/master/images/lagged_data.png?raw=true" alt="lagged_data" title=""></p>
<p>An additional important benefit of lagged datapoints: the model can use a much shorter
encoder without fear of losing information from the past, because this
information is now explicitly contained in the features. Even a 60-90 day
encoder still gives acceptable results, in contrast to the 300-400 days
required for previous models. Shorter encoder = faster training and less
loss of information.</p>
<h2>Losses and regularization</h2>
<p><a href="https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error">SMAPE</a>
(the target loss for the competition) can't be used directly, because of unstable
behavior near zero values (the loss is a step function if the true value is zero,
and undefined if the predicted value is also zero).</p>
<p>I used smoothed differentiable SMAPE variant, which is well-behaved at all real numbers:</p>
<pre><code>epsilon = 0.1
summ = tf.maximum(tf.abs(true) + tf.abs(predicted) + epsilon, 0.5 + epsilon)
smape = tf.abs(predicted - true) / summ * 2.0
</code></pre>
<p>Another possible choice is MAE loss on <code>log1p(data)</code>, it's smooth almost everywhere
and close enough to SMAPE for training purposes.</p>
<p>Final predictions were rounded to the closest integer, negative predictions clipped at zero.</p>
<p>I tried to use RNN activation regularizations from the paper
<a href="https://arxiv.org/abs/1511.08400">"Regularizing RNNs by Stabilizing Activations"</a>,
because internal weights in cuDNN GRU can't be directly regularized
(or I did not find the right way to do this).
Stability loss didn't work at all; activation loss gave a very
slight improvement for low (1e-06..1e-05) loss weights.</p>
<h2>Training and validation</h2>
<p>I used COCOB optimizer (see paper <a href="https://arxiv.org/abs/1705.07795">Training Deep Networks without Learning Rates Through Coin Betting</a>) for training, in combination with gradient clipping.
COCOB tries to predict optimal learning rate for every training step, so
I don't have to tune learning rate at all. It also converges considerably
faster than traditional momentum-based optimizers, especially on first
epochs, allowing me to stop unsuccessful experiments early.</p>
<p>There are two ways to split timeseries into training and validation datasets:</p>
<ol>
<li><em>Walk-forward split</em>. This is not actually a split: we train on full dataset
and validate on full dataset, using different timeframes. Timeframe
for validation is shifted forward by one prediction interval relative to
timeframe for training.</li>
<li><em>Side-by-side split</em>. This is traditional split model for mainstream machine
learning. Dataset splits into independent parts, one part used strictly
for training and another part used strictly for validation.</li>
</ol>
<p><img src="https://github.com/Arturus/kaggle-web-traffic/blob/master/images/split.png?raw=true" alt="split" title=""></p>
<p>I tried both ways.</p>
<p>Walk-forward is preferable, because it directly relates to the competition goal:
predict future values using historical values. But this split consumes
datapoints at the end of the timeseries, thus making it hard to train the model to
precisely predict the future.</p>
<p>Let's explain: for example, we have 300 days of historical data
and want to predict next 100 days. If we choose walk-forward split, we'll have to use
first 100 days for real training, next 100 days for training-mode prediction
(run decoder and calculate losses), next 100 days for validation and
next 100 days for actual prediction of future values.
So we actually can use only 1/3 of available datapoints for training
and will have 200 days gap between last training datapoint
and first prediction datapoint. That's too much, because prediction quality
falls exponentially as we move away from the training data (uncertainty grows).
A model trained with a 100-day gap (instead of 200) would have considerably
better quality.</p>
<p>Side-by-side split is more economical, as it doesn't consume datapoints at the
end. That was the good news. Now the bad news: for our data, model performance
on validation dataset is strongly correlated to performance on training dataset,
and almost uncorrelated to the actual model performance in a future. In other words,
side-by-side split is useless for our problem, it just duplicates
model loss observed on training data.</p>
<p>The upshot?</p>
<p>I used validation (with walk-forward split) only for model tuning.
Final model to predict future values was trained in blind mode, without any validation.</p>
<h2>Reducing model variance</h2>
<p>Model has inevitably high variance due to very noisy input data. To be fair,
I was surprised that RNN learns something at all on such noisy inputs.</p>
<p>Same model trained on different seeds can have different performance,
sometimes model even diverges on "unfortunate" seeds. During training, performance also wildly
fluctuates from step to step. I can't just rely on pure luck (be on right
seed and stop on right training step) to win the competition,
so I had to take actions to reduce variance.</p>
<p><img src="https://github.com/Arturus/kaggle-web-traffic/blob/master/images/training.png?raw=true" alt="training" title=""></p>
<ol>
<li>I don't know which training step would be best for predicting the future
(the validation result on current data is very weakly correlated with the
result on future data), so I can't use early stopping. But I know the
approximate region where the model is (possibly) trained well enough
but (possibly) has not started to overfit. I decided to set this optimal region's
bounds to 10500..11500 training steps and save 10 checkpoints, one from each 100th step
in this region.</li>
<li>Similarly, I decided to train 3 models on different seeds and save checkpoints
from each model. So I have 30 checkpoints total.</li>
<li>One widely known method for reducing variance and improving model performance
is SGD averaging (ASGD). Method is very simple and well supported
in <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/train/moving_averages">Tensorflow</a> -
we have to maintain moving averages of network weights during training and use these
averaged weights, instead of the original ones, during inference (a minimal sketch follows this list).</li>
</ol>
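<p>A minimal TF1-style sketch of that weight averaging, assuming an existing <code>train_op</code> and graph variables (not the actual model code):</p>
<pre><code>import tensorflow as tf

ema = tf.train.ExponentialMovingAverage(decay=0.99)

# after building the usual training op, also update the shadow (averaged) weights
with tf.control_dependencies([train_op]):
    train_op_with_ema = ema.apply(tf.trainable_variables())

# at inference time, restore the averaged weights instead of the raw ones
ema_saver = tf.train.Saver(ema.variables_to_restore())
# ema_saver.restore(sess, checkpoint_path)
</code></pre>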
<p>The combination of all three methods (averaging predictions from 30 checkpoints,
using the averaged model weights in each checkpoint) worked well; I got
roughly the same SMAPE error on the leaderboard (for future data)
as for validation on historical data.</p>
<p>Theoretically, one can also consider the first two methods as a kind of ensemble
learning, which is fair, but I used them mainly for variance reduction.</p>
<h2>Hyperparameter tuning</h2>
<p>There are many model parameters (number of layers, layer depths,
activation functions, dropout coefficients, etc) that can be (and should be) tuned to
achieve optimal model performance. Manual tuning is tedious and
time-consuming process, so I decided to automate it and use <a href="https://automl.github.io/SMAC3/stable/">SMAC3</a> package for hyperparameter search.</p>
<p>Some benefits of SMAC3:</p>
<ul>
<li>Support for conditional parameters (e.g. jointly tune number of layers
and dropout for each layer; dropout on second layer will be tuned only if
n_layers > 1)</li>
<li>Explicit handling of model variance. SMAC trains several instances
of each model on different seeds, and compares models only if instances were
trained on same seed. One model wins if it's better than another model on all equal seeds.</li>
</ul>
<p>Contrary to my expectations, hyperparameter search did not find a well-defined global minimum.
All best models had roughly the same performance, but different parameters.
Probably RNN model is too expressive for this task, and best model score
depends more on the data signal-to-noise ratio than on the model architecture.</p>
<p>Anyway, best parameters sets can be found in <code>hparams.py</code> file</p>
<p>I used a GTX 1080Ti (12 Gb memory). You probably have low-memory issues; try to reduce the batch size.</p>
<p>What is E^2? Variance?</p>
<blockquote>
<p>you must "select" a distribuition first (or select the distribuition
that best fit data) and fit the parameters to have a P() function
instead of using "any_ml_model.predict() or predict_proba()" and then
you can estimate intervals from cdf/pdf functions, right?</p>
</blockquote>
<p>Right. You can also use non-parametric models (histograms, kernel density estimators) instead of selecting hardcoded distribution.</p>
<blockquote>
<p>i'm just thinking about how to have something like arima models with
any kind of model</p>
</blockquote>
<p>RNN is ARIMA on steroids. It can do all that ARIMA can do and much, much more. I really don't see a reason to work with ARIMA-like models, except for some amount of interpretability.</p>
<p>BTW, ARIMA 'confidence estimation' is really based on the dumb assumption of a Gaussian distribution (exactly the same as we discussed above). I think it's better to use explicit, transparent and controllable distributions instead of hidden assumptions buried deep in ARIMA or GARCH models.</p>
<p>Let's explain in steps:</p>
<ol>
<li>Predict <code>mu_hat</code>, <code>sigma_hat</code> for each timeseries step</li>
<li>Calculate likelihood <code>P(data|mu_hat, sigma_hat)</code> for each step. This is <code>Normal.pdf(true_pageviews, mu_hat, sigma_hat)</code> in our case</li>
<li>Take log of probability values and average across all steps: <code>mean(log(p_i))</code>. This is average log-likelihood of our data for predicted <code>mu_hat_i</code>, <code>sigma_hat_i</code>. The more accurate our estimations of mu and sigma, the greater the likelihood.</li>
<li>Use this loglikelihood as a loss metric for SGD. (use negative loglikelihood, if optimizer wants to minimize the loss).</li>
<li>Profit</li>
</ol>
<p>No need to use garch, rolling std or anything fancy.</p>
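<p>A minimal sketch of steps 2-4 as a training loss, assuming the network outputs <code>mu_hat</code> and <code>log_sigma_hat</code> for every step (this is an illustration, not part of the competition model):</p>
<pre><code>import math
import tensorflow as tf

def gaussian_nll(y_true, mu_hat, log_sigma_hat):
    """Negative mean log-likelihood of y_true under Normal(mu_hat, exp(log_sigma_hat))."""
    log_prob = (-0.5 * math.log(2.0 * math.pi)
                - log_sigma_hat
                - 0.5 * tf.square(y_true - mu_hat) * tf.exp(-2.0 * log_sigma_hat))
    return -tf.reduce_mean(log_prob)   # minimize this with SGD
</code></pre>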
<p>It's possible; the implementation depends on how you want to model the posterior distribution. In the simplest case, we can assume a Gaussian distribution and predict not only mu (as the current model does) but the pair (mu, sigma). The loss would be the negative log probability of the data given the predicted mu and sigma. In other words, we can treat the NN as an MLE estimator.</p>
<blockquote>
<p>in this model (NN you submitted to kaggle) you don't consider the
y_hat distribuition, you just want to optimize the smape metric right?
the gaussian distribuition isn't important, we are talking about
gaussian an mu/sigma just to create a "forecast inteval", right? in
this case considering that we force the mu_hat to be gaussian
distribuition, we have a well defined sigma_hat that's why we minimize
the error of Y≃N(sigma,mu) instead of minimize the smape loss</p>
</blockquote>
<p>Right.</p>
<blockquote>
<p>if i'm right... considering that optimizing smape (and not the fit of
gaussian distribuition) we couldn't estimate the probability
distribuition of the kaggle submition y_hat values?</p>
</blockquote>
<p>Right. To learn model to predict confidence interval, we should have a loss that depends on quality of confidence interval prediction. SMAPE knows nothing about confidence, so we can't use it.</p>
<p>So please forget SMAPE and think in terms of distributions and likelihoods. If you switch your mind into 'probabilistic mode' and read some probability/likelihood articles (including the link I gave you), you'll understand the whole picture. </p>
<p>A single Gaussian distribution is a primitive case; there are many advanced ways to model the posterior distribution (histogram estimation, kernel density estimation, distribution mixtures), but Gaussian is the easiest to understand. </p>
<p>You can interpret same probability <code>P(data|theta)</code> in two ways:</p>
<ol>
<li>Probability to observe data given parameters <code>theta</code> (so probability is a function of data). This is usual 'probability' definition. </li>
<li>Likelihood of parameters <code>theta</code> given observed data. </li>
</ol>
<p>You can see details at <a href="https://www.quora.com/What-is-the-difference-between-probability-and-likelihood-1/answer/Jason-Eisner">https://www.quora.com/What-is-the-difference-between-probability-and-likelihood-1/answer/Jason-Eisner</a></p>
<p>We use the second interpretation. Given our pageviews, we calculate how well a distribution with <code>(mu, sigma)</code> parameters fits our pageviews. This is exactly the same calculation as the probability to see the expected number of pageviews given the <code>(mu, sigma)</code> distribution parameters. Do you understand this?</p>
<blockquote>
<p>you have a 'prediction' (y_hat≃mu_hat, sigma_hat)</p>
</blockquote>
<p>No. <code>mu_hat</code>, <code>sigma_hat</code> is our prediction. We don't need to calculate <code>y_hat</code> (to be precise, it is already calculated: <code>y_hat</code> is equivalent to <code>mu_hat</code> for Gaussian distribution). We need just to check how well our predicted distribution will fit our data, i.e. check how well this condition holds: <code>y_true≃Normal(mu_hat, sigma_hat)</code>.
If we have ideal fit, we'll have <code>y_true=mu_hat</code>, so peak of predicted Gaussian distribution will be equal to <code>y_true</code> value. If we'll have less than ideal fit, <code>y_true</code> will be off peak. Likelihood is a measure of our 'fitness'.</p>
<blockquote>
<p>but what sigma you use at train(x,y) when y=[y,sigma]?</p>
</blockquote>
<p>Again, we don't need <code>sigma</code> in the training data. We only have to predict it and compare the resulting distribution with <code>y_true</code> (calculate the likelihood)</p>
Web Traffic Time Series Forecasting | 30th Place Approach and Lessons Learned | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Web Traffic Time Series Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I didn't think of trying a third layer - that is a big jump you got with that. I messed up as well by not imputing missing values properly; I was just using zeroes, but I read people had much better success with forward fill and backward fill. That is not that surprising now that I stop to think about it. Based on all of that, I agree that a median model could potentially have got to gold.</p>
<p>Congratulations anyway to your team. I wish I had tried to join up with you guys although I know your team was pretty big already.</p>
<p>Your comments on CV in the discussion were particularly helpful as was respecting the nature of a time-series in order to avoid information leak. </p>
<p>Thanks. Do you think the percentile idea could have added any value to your approach? I know we discussed previously your team also was using the median based approach - your team name was a clue on that one! ;-)</p>
<p>Thanks to those who have shared their approaches and made useful comments in the discussion. I learnt a lot from some of those and they have inspired me to push on with Deep Learning. </p>
<p>As far as what I learned, I found a lot of value in setting up a CV framework that reflected the nature of the competition. For the prediction period I set up 6 consecutive folds of 61 days of test data each. This covered a period of 366 days, which was as close to a year as I could get it. This was an attempt to cover seasonality, at least to a degree. I also paid close attention to the number of days' gap between the train and test periods, as it was possible to model much more accurately where the gap was lower. So for each test period my CV was set up with a gap of a few days (based on what the competition organisers were indicating that gap would be), with the training period ending prior to that gap. So for example, if I had a fold with test data starting on 1st May, I made a 10-day gap so that the train data ended on 20th Apr (of the same year, obviously). </p>
<p>I mainly used the median approach and did not find an elegant way to improve on it. I did, however, find a simple way to improve on it which worked for me in this competition, though it may not generalise very well. I initially had the idea that internet use globally is generally growing. Then I considered that the median actually sits at one point on a range of quantiles or percentiles, so I tried treating that percentile as a parameter. If internet use is growing, then perhaps a percentile slightly above 50 might be a better option. On the first-level predictions this did not lead to a significant improvement, and I decided that the median was the best option.</p>
<p>But when taking a second level - the so-called "median of medians" - I found that increasing the percentile actually led to a worse score. However, just to be thorough, I tried lowering the percentile, and to my initial surprise I found that this led to an improvement. Depending on the CV fold, I was able to improve by around 0.2 to 1.0 using percentiles approximately in the range 27.5 to 37.5, where there were around 8 or 9 first-level medians. Because of the short deadline at the end of the competition, and because I had this idea quite late, I was not able to test it very thoroughly.</p>
<p>At that point I did more EDA across the aggregate dataset - I probably should have done that first! This showed that the seasonal increase from summer to fall/winter was stronger in 2015 than in 2016. It also showed that the seasonal decrease from the beginning to the middle of 2017 was more severe than in 2016. This led me to the conclusion that many of the sites in the dataset (which was selected at a specific point in time) were based around specific events where interest would peak at a certain point and then reduce dramatically. I recall one example site for a big sporting event in 2016. Clearly the interest in that would build up through 2015 to the event itself but then decline dramatically afterwards. Using percentiles at the second level was not the only difference between my two submissions, but I estimate that the improvement in the competition was fairly substantial, at around 0.4 or maybe more. For others who used a median-based approach, your mileage may vary...</p>
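<p>A rough numpy sketch of the percentile-of-medians idea for a single page (the window lengths here are illustrative, not the exact Fibonacci windows of the public kernel):</p>
<pre><code>import numpy as np

def percentile_of_medians(history, windows=(11, 18, 30, 48, 78, 126, 203, 329), q=32.5):
    """history: 1-D array of daily views for one page, most recent value last."""
    first_level = [np.nanmedian(history[-w:]) for w in windows if w <= len(history)]
    # second level: a percentile somewhat below 50 worked better than the plain median here
    return np.nanpercentile(first_level, q)
</code></pre>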
<p>For those who find the top solutions a bit intimidating, perhaps there are some ideas here that are a bit more approachable. I've learned a lot from Kaggle and hope to repay some of that by helping others. Two takeaway points are 1) CV is always important, just find a way to do it that suits your data, and 2) think about the problem from a practical point of view - the median approach assumed static internet use, but was that accurate? As it turned out, it wasn't totally accurate. Investigating this led me to a conclusion that was the opposite of what I expected but did eventually lead to an improvement in my score. This would not have happened if I had not thought about the problem and questioned some basic assumptions. </p>
<p>I am aware that the winning models are way more advanced than what I did, but what I have described was enough for 30th place and a silver medal. Good luck with your next competition!</p>
<p>Sure - it certainly isn't as glamorous! Every competition is different as well - in other competitions using some of the more popular models like xgboost and neural networks is pretty much necessary to getting in the medals. I agree with your earlier point about trying to beat Arthur that there is a huge gap between getting a gold and winning a competition (or getting in the top 3). Getting a gold is hard enough but top 3 is a whole other level. A fully optimised median model may well have been enough for gold but not for the prize money. </p> |
Web Traffic Time Series Forecasting | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Web Traffic Time Series Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I like that many guys have shared their ideas. And the discussion here is very pleasant. Here is the idea. </p>
<p>The submission is a weighted average of several NN models. Each model has 1 LSTM layer and 2 FC layers. The differences are their inputs or objectives.</p>
<p>The input may be raw numbers, transformed by log1p, or normalized by the average. The objective may be either MAE or SMAPE. Although SMAPE is not convex, it still works with SGD.</p>
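<p>A hedged Keras sketch of such a model — one LSTM layer followed by two fully-connected layers, trained with MAE on log1p-transformed views (the layer sizes and window lengths are guesses, not the author's values):</p>
<pre><code>from keras.layers import Input, LSTM, Dense
from keras.models import Model

n_past, n_future = 200, 62                      # assumed history / prediction lengths
inp = Input(shape=(n_past, 1))                  # log1p-transformed daily views
x = LSTM(128)(inp)
x = Dense(128, activation='relu')(x)
out = Dense(n_future)(x)                        # predict all future days at once

model = Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='mae')     # or a smoothed SMAPE objective
</code></pre>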
<p>The solution does not account for yearly seasonality, which I think is the major cause of the gap from the lead.</p>
<p>The models were trained only against the 8/31 data because of Hurricane Irma. The final submissions were made on a laptop. But I don't think this is critical.</p>
<p>Enjoy!</p> |
Web Traffic Time Series Forecasting | 8th place with Kalman filters | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Web Traffic Time Series Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! I appreciate your interest. The full solution code is a bit too messy to be published as is, but a wrote a <a href="https://gist.github.com/oseiskar/5e8937af96efdfe5f8a6994764b07081">Github gist</a> demonstrating the usage of the new <code>simdkalman</code> package for the competition data and defining the matrices in the Kalman filter.</p>
<p>I also did quite well without using any neural networks, XGBoost or other modern methods. Here's an overview of my not-so-elegant but fully "classical" approach.</p>
<p>It's based on the observation that most of the time series are low-traffic, noisy and seemingly very unpredictable (figure 1) while some of them behave quite nicely (figure 2). My main idea was to use Kalman filters to predict well-behaved time series while falling back to a more robust median-of-medians for the bulk of the data.</p>
<p>I was heavily aided by the public kernels: my fall-back method was almost the same as the best public kernel: the Fibonacci median of medians, grouped by weekend vs weekdays, rounded to the nearest integer (this one <a href="https://www.kaggle.com/rshally/web-traffic-cross-valid-round-and-wk-lb-44-5">https://www.kaggle.com/rshally/web-traffic-cross-valid-round-and-wk-lb-44-5</a>).</p>
<p>In more detail, the solution worked as follows</p>
<ol>
<li><p>For spiders, always use the rounded Fibonacci median of medians but <em>without weekly seasonality</em>. This was because the vast majority of the spider data was very noisy and I assumed that bots do not care about weekends. Leaving out weekly seasonality did not have much effect in my final CV benchmarks but I decided to use the simpler method since I thought it would be more robust to possible abnormalities caused by this competition.</p></li>
<li><p>For non-spider data, try three different smoothing parameters <em>s</em>, starting from least smoothing:</p>
<ul><li>run a Kalman smoother <em>K(s)</em> on the log1p-transformed data to get the smoothed mean <em>y</em></li>
<li>compute deltas: <em>dy</em> as <code>np.diff(y)</code></li>
<li>compute the yearly seasonality error as <code>R = 0.5 * |dy1 - dy2| / (|dy1| + |dy2|)</code>, where <em>dy1</em> is the last year of <em>dy</em> and <em>dy2</em> the second last year of <em>dy</em> (i.e., compare the data to itself, shifted by one year, very similar to what Nathaniel Maddux seems to have done in his solution)</li>
<li>if <em>R</em> is less than a threshold value (0.95), the <em>s</em>-smoothed data is seasonal: predict the new data with the Kalman filter <em>K(s)</em> adding yearly seasonality from <code>np.cumsum(dy2)</code>. Finally apply <code>exp(x)-1</code> and round to the nearest integer </li>
<li>otherwise try the next smoothing level <em>s</em></li></ul></li>
<li><p>If the data is not seasonal with any smoothing level, fall back to rounded Fibonacci median of medians by weekend</p></li>
</ol>
<p>I used a 8-state Kalman filter representing a local level (no trend) and weekly seasonality, parametrized by a smoothing parameter <em>s</em> which determined the covariance matrices (process and measurement noise). Adding the yearly seasonality directly to the Kalman filter would have exploded the number of states or required special tricks so I handled that separately as described above.</p>
<p>I wrote my own SIMD-style vectorized implementation of the Kalman filters which allowed running them relatively fast in Python (Numpy). EDIT: These custom Kalman filter codes can be found here <a href="https://github.com/oseiskar/simdkalman">https://github.com/oseiskar/simdkalman</a></p>
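<p>For illustration, a rough sketch of how such a vectorized smoother can be driven from Python with the simdkalman package linked above, here with a plain local-level model (the actual solution used an 8-state model with weekly seasonality; see the linked gist for the real matrices):</p>
<pre><code>import numpy as np
import simdkalman

# local-level model: hidden state = [level], observation = level + noise
kf = simdkalman.KalmanFilter(
    state_transition=np.eye(1),
    process_noise=np.eye(1) * 0.01,
    observation_model=np.eye(1),
    observation_noise=1.0)

# data: log1p pageviews, shape (n_pages, n_days); NaNs mark missing days
smoothed = kf.smooth(data)
predicted = kf.predict(data, 62)          # 62 future days for every page
y_hat = predicted.observations.mean       # shape (n_pages, 62)
</code></pre>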
<p>The total execution time of the final model was about 25 minutes on a 4-core i5, which was nice since that was about the time I had available for dealing with the final data. My two submissions were the same model executed on the 8/30 and 9/10 data. The former scored only 39.2.</p>
<p>In the final cross-validation / back-testing I used 5 different truncations of the data and ensured that the submitted model worked well for <em>all</em> of these cases (i.e., max error instead of mean error). It had quickly become apparent to me that optimizing the public LB was the same as over-fitting for predicting January, which was quite different from predicting October or December.</p>
<p>Figures (red: predicted, dashed: actual)</p>
<p>Good question. I ran a simplified benchmark with a small subset of the data and the SIMD version was about 7x faster. EDIT: published the Kalman filter code on Github: <a href="https://github.com/oseiskar/simdkalman">https://github.com/oseiskar/simdkalman</a></p>
<p><img src="https://oseiskar.github.io/example-notebook.png" alt="example notebook" title=""></p>
<p>Here's an example of how the Kalman-smoothed data might look like</p>
<p><img src="https://oseiskar.github.io/example-result.png" alt="example result" title=""></p> |
Web Traffic Time Series Forecasting | Description of 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Web Traffic Time Series Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. I'll share when the competition is officially over and other top competitors begin to share their solutions.</p>
<p><strong>Summary</strong></p>
<p>Like many other competitors, I log-transformed page views before fitting and predicting, and used mean absolute error (MAE). Unlike other top solutions, my main tool was polynomial autoregression. (Rather than least squares regression, I used median regression, which minimizes MAE.)</p>
<p><strong>The model</strong> </p>
<p>I separated my model into 4 parts:</p>
<p>1) Simple, constant, median-based prediction</p>
<p>It was hard to improve on this for low-volume pages (meaning most pages).</p>
<p>2) Median for each day of the week</p>
<p>This is a list of 7 numbers for each page. The last 35 days were used to calculate medians. The mean of medians was subtracted. If a page had no weekly pattern, this list would ideally be all zeros.</p>
<p>3) Nonlinear polynomial autoregression</p>
<p>(Note that when I say page views, I mean log1p page views.) Prediction of future page views from past page views was performed as follows. The past values (denoted by X) were medians of page view histories of various lengths: 2, 4, 8, 16, 32, 64, 128 days in the past. The future values (denoted by Y) were polynomial median regression coefficients of future page views, over the entire 62 day prediction period. The polynomial order was 1, in other words constant + trend. Y was predicted from X using the 150 nearest neighbors in X.</p>
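<p>A hedged sketch of this part: take medians over trailing windows of several lengths as X, fit a first-order trend over the 62 future days as Y, and predict Y for a new page from its nearest neighbors in X. Note this replaces the author's median regression with plain least squares and a mean over neighbors, so it is only an approximation of the method:</p>
<pre><code>import numpy as np
from sklearn.neighbors import KNeighborsRegressor

WINDOWS = (2, 4, 8, 16, 32, 64, 128)

def make_x(past):                        # past: log1p views, most recent value last
    return [np.median(past[-w:]) for w in WINDOWS]

def make_y(future):                      # future: the next 62 days of log1p views
    t = np.arange(len(future))
    return np.polyfit(t, future, 1)      # (trend, constant)

X = np.array([make_x(p) for p in past_series])
Y = np.array([make_y(f) for f in future_series])

knn = KNeighborsRegressor(n_neighbors=150).fit(X, Y)
coef = knn.predict(np.array([make_x(new_page_past)]))[0]
prediction = coef[0] * np.arange(62) + coef[1]     # constant + trend over the 62 days
</code></pre>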
<p>4) Pages with a strong yearly pattern</p>
<p>This part of the model was particularly useful for pages regarding sporting events, holidays, historical events, etc. The "strength" of a page's yearly pattern was determined by comparing last year's page views with the year before that. Normally one would use the correlation coefficient for this, but it was a little bit better, and faster, for me to use norm(lastYear - yearBefore)/(norm(lastYear) + norm(yearBefore)), after subtracting the mean from each list. </p>
<p>Prediction for pages with a strong yearly pattern could be a bit confusing. The X values describe the future, 365 days ago. They were polynomial regression coefficients of page views starting 365 days ago and ending 62 days later. The future Y values were polynomial regression coefficients of future page views, over the entire 62 day prediction period. For X and Y, the order of polynomial expansion was 2. Y was predicted from X using median regression.</p>
<p><strong>Combining the model parts</strong></p>
<p>Model parts were not selected in binary fashion, but were mixed as follows. 3 vectors of length 62 were mixed to generate a prediction. Each of the 3 vectors corresponds to the prediction from a part of the model (parts 1, 3 and 4 above).</p>
<p>To get mixing ratios, I used the linear crossover function defined as </p>
<pre><code>f(x; x0, x1) = (x-x0)/(x1-x0) for x0 <= x <= x1, 0 for x < x0, 1 for x > x1.
</code></pre>
<p>This function was used to activate the median model when page views were low and to activate the yearly pattern model when the yearly pattern was strong. Variable x indicates whether a model part should be used. Parameter x0 is the value at which the function begins to turn on and the model part begins to be used. Parameter x1 is the value at which the model part is fully on (though the other model part could also be on, and then the two would be mixed). This function was applied separately to the median page views and the yearly pattern strength to get mixing portions for those two models. Any left-over portion was assigned to the nonlinear polynomial regression model. The mixing portions were normalized to sum to 1, and model predictions were mixed using these ratios. (I would have to say more to fully define the mixing method, but it wouldn't be very informative.) </p>
<p>Of course, model mixing could be done more elegantly with a simple neural net. I tried this, and the neural net learned a 2D activation function that was similar to what I made. But it was slow to train and didn't give better scores. (Which is not the fault of neural networks, I just need to learn more about them.)</p>
<p>The weekly pattern was then added to the prediction, the predictions were exponentiated to invert the initial log transform, and finally the predicted page views were rounded to the nearest whole number.</p>
<p><strong>Training</strong></p>
<p>The training dataset was generated by picking, for each page, a random day to divide future and past. This day was chosen to give at least 480 past days and 62 future days. (Note that I accounted everywhere for the gap days between the training dataset and the test dataset.)</p>
<p><strong>Cross validation</strong></p>
<p>I used 10-fold cross validation. My method would have been too slow for this, so I compiled various functions to machine code. The functions I compiled include SMAPE, my fast median regression function, and my function for calculating features. </p>
<p><strong>Tuning</strong></p>
<p>I tuned using 2 passes of line searching. Tuning resulted in 0.5 points of SMAPE improvement. (Whenever I add a hyperparameter to an algorithm, I do a little informal tuning motivated by curiosity. Consequently, the algorithm is close to optimal before I start final tuning. I would guess most people do this.)</p>
<p><strong>Ensembling</strong></p>
<p>I used a 10x ensemble. The main source of variation for ensembling was the random stop day chosen for each page in the training set.</p>
<p><strong>What I would do differently next time</strong></p>
<p>I started with only 2.5 weeks to go until the final submission deadline, because it's much more fun that way. Seriously, though, I should have started sooner, because I would have had more time to improve short-term predictions. One thing I could have tried would have been to fit a more complex model for each day of the week, rather than using the simple median of each day.</p>
<p>And of course, neural nets! I need to learn more about those. </p>
<p>In part 3 of the model, Y is predicted from X using nearest neighbors. The linear part only fits a polynomial expansion for Y.</p>
<p>In the crossover function, x is an indicator for whether a model part should be used (low median page views activates the median model, high autocorrelation activates the yearly pattern model). Parameter x0 is the value at which the function begins to turn on and the model part begins to be used. Parameter x1 is the value at which the model part is fully on (though the other model part could also be on, and then the two would be mixed).</p>
<p>Of course, model mixing could be done more elegantly with a simple neural net. I tried this, and the NN learned a 2D activation function that was similar to what I made. But it was slow and didn't give better scores.</p>
<p>Thanks for your questions. I've added the info to the description.</p>
<p><a href="http://nathanmaddux.com/pages/kagwebtraffic/KaggleWebTraffic2017.zip">Here it is.</a> Note that it's a Mathematica notebook. If you don't have Mathematica, you'll need to download the CDF player to open the code files. Sorry, I got busy with other things, including the <a href="https://www.kaggle.com/nathanrm/full-solution-cylindrical-coordinate-method">Passenger Screening Algorithm Challenge.</a></p>
<p>The other option would be to make 62-ish(63?) models, one for each future day. I thought this was over-complicated, might overfit and not generalize, and would not filter out noise. So I tried a polynomial regression of the future values, and with cross-validation, found that performance was great for a first order model (constant + trend). Higher order polynomial models (x^2, x^3, etc) did not perform better.</p>
<p>It works as follows. Each page has 62 future values and 1 to 2 years of past values. The 62 future values are expanded in a first order basis (constant+trend) using median regression, resulting in 2 values. The past values are reduced to the 7 features described in section 3. I do this reduction for all N pages, giving an N x 7 matrix of past features and an N x 2 matrix of future features. Future features are predicted from past features using nearest neighbors. When predicting page views, I matrix multiply the predicted future features by the first order basis, giving the 62 day predictions.</p>
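<p>A rough Python sketch of that pipeline (scikit-learn nearest neighbours in place of the original Mathematica code, and ordinary least squares standing in for median regression):</p>
<pre><code>import numpy as np
from sklearn.neighbors import NearestNeighbors

WINDOWS = [2, 4, 8, 16, 32, 64, 128]
H = 62                                                 # prediction period
BASIS = np.column_stack([np.ones(H), np.arange(H)])    # constant + trend

def past_features(series):
    # medians of the trailing 2, 4, ..., 128 days of log1p page views
    return np.array([np.median(series[-w:]) for w in WINDOWS])

def future_coeffs(future_62_days):
    # first-order fit of the 62 future values
    coef, *_ = np.linalg.lstsq(BASIS, future_62_days, rcond=None)
    return coef                                        # shape (2,)

def knn_forecast(X, Y, x_new, k=150):
    # X: (n_pages, 7) past features, Y: (n_pages, 2) future coefficients
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors(x_new.reshape(1, -1))
    coef = Y[idx[0]].mean(axis=0)                      # aggregate the neighbours
    return BASIS @ coef                                # 62-day log1p forecast
</code></pre>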
<p>I moved it to <a href="https://github.com/naterm60/KaggleWebTrafficForecasting">a Github repository.</a></p> |
Web Traffic Time Series Forecasting | Simple model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Web Traffic Time Series Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1 hour for NN training on a 1080 Ti ;)</p>
<p>I’m done with the WTF winning package submission. As part of it I had to create a simple model: using just the median visits of the last 8 weeks, the site, the access method, the week of the day and the day in the week I get a score of 36.78 that would lead to 9th rank :) The simple NN model is quite robust there. I uploaded it to the git repo: <a href="https://github.com/jfpuget/Kaggle/tree/master/WebTrafficPrediction">https://github.com/jfpuget/Kaggle/tree/master/WebTrafficPrediction</a></p>
<p>Thanks. There is little traffic now here as the competition has ended.</p> |
West Nile Virus Prediction | 2nd place model (team Dmitry & Leustagos) is released | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: West Nile Virus Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>https://github.com/diefimov/west_nile_virus_2015</p>
<p>The file README.docx contains the description of our approach.</p> |
West Nile Virus Prediction | Model (and overfit) sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: West Nile Virus Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>[quote=Solstice;82452]</p>
<p>When lagging, missing values get introduced and certain classifiers won't tolerate that - so what to do in those instances?</p>
<p>[/quote]</p>
<p>That's a very good question. If the data on a time window were incomplete (because it started before May) my approach was to set the average to zero and add a separate 'missing value indicator variable' (1 if missing and 0 otherwise) for that variable, which allows the (linear) model to estimate a separate parameter for the missing records. With tree-based models, I usually just impute an out-of-range value.</p>
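<p>In pandas that could look like the following sketch (the column names and the fill value are arbitrary):</p>
<pre><code>import pandas as pd

def add_missing_indicators(df, cols, tree_fill=-999.0):
    df = df.copy()
    for col in cols:
        # 0/1 indicator lets a linear model fit a separate parameter
        # for the records whose time window started before May
        df[col + "_missing"] = df[col].isna().astype(int)
        df[col + "_lin"] = df[col].fillna(0.0)          # linear models: zero
        df[col + "_tree"] = df[col].fillna(tree_fill)   # trees: out-of-range value
    return df
</code></pre>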
<p>I would also be interested to hear how others handle this!</p>
<p>[quote=I can do the splits NO PROB;82256]</p>
<p>I would be interested to know if your ensemble performs better if you use probit-models</p>
<p>[/quote]</p>
<p>I used sklearn and it does not support probit as far as I know... I made a small wrapper for the probit from statsmodels, but that scores about 0.02 worse on CV - maybe you have suggestions to improve it? I'm not very familiar with probit. The newton solver fails on a singular matrix. My wrapper:</p>
<pre><code>import statsmodels.api as sm

class probitwrapper():
    result = None
    def __init__(self):
        return
    def fit(self, X, y):
        self.model = sm.Probit(y, X)
        self.result = self.model.fit(method='bfgs', gtol=1e-9, maxiter=100)
    def predict_proba(self, X):
        assert self.result is not None
        return [[1 - e, e] for e in self.model.predict(self.result.params, exog=X)]
</code></pre>
<p><strong>EDIT</strong>: oops, I forgot to add an intercept... if I replace X by [[1.0]+xrow for xrow in X] it scores about the same as logistic regression (C=1e9) in sklearn</p>
<p>My average probabilities before and after applying the weights:<br>2008: 0.018 → 0.009 (w=0.5)<br>2010: 0.019 → 0.008 (w=0.4)<br>2012: 0.032 → 0.042 (w=1.3)<br>2014: 0.015 → 0.015 (w=1.0)</p>
<p>Congratulations to the winners! I'm looking forward to see your solutions.</p>
<p>My model is an ensemble of 12 logistic regression models that each use a different set of weather variables, because having all variables at once led to train-set overfitting. The sets differed by the period (30, 60 or 90 days before) that was used to calculate average weather features, and by the subset of columns (one based on literature with temperature, precipitation and wind speed; the others on forward selection using CV). Apart from that, each model also had species, location (nine geographical blocks based on lat/lon) and a trap-specific maximum number of duplicate rows per year (i.e. the leakage feature).</p>
<p>This ensemble scores 0.811 (0.823 public) and could be improved to 0.833 (0.842 public) by fitting linear weights for each year using the public test set. I would prefer the latter to be disallowed, but felt it was necessary to keep up with other teams doing it.</p>
Zillow Prize: Zillow’s Home Value Prediction (Zestimate) | 17th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Zillow Prize: Zillow’s Home Value Prediction (Zestimate)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I found this to be a really enjoyable first competition, which I tackled with my <a href="https://www.kaggle.com/dashiellg">brother, Dash</a>. Dash, in addition to being great to work with, introduced me to Machine Learning, Kaggle, and taught me the basics of Python.</p>
<p>While we tried out several different approaches, there were only a few particularly important components to our final solution:</p>
<ul>
<li><p>Using information about neighbouring properties, including the ~3M observations not associated with a sale.</p></li>
<li><p>Imposing constraints on the fitting in order to achieve more robust results.</p></li>
<li><p>Stacking</p></li>
</ul>
<p>If you are interested in a bit more detail, please see the attached file for a PowerPoint presentation (given at the Vancouver <a href="https://www.meetup.com/LearnDataScience">Learn Data Science</a> Meetup). I would also be happy to answer any questions.</p>
<p>Thank-you to Kaggle for being a forum for this kind of competition, and to Zillow for sponsoring!</p>
<p>Hi SunDevil,</p>
<p>Thanks for asking, I'm glad there is some curiosity about this method. I think it has fairly broad applicability to noisy problems. The presentation doesn't really stand on it's own (without being spoken to), so I hope the following will help:</p>
<p>Slide 32 is a simplified representation of the effective coefficients (weights) given to the observations around the red lines by a GBDT with a few trees. To illustrate how the decision trees have nodes that fit the local structure of each variable, the slide shows two examples; each with a different pattern of weightings.</p>
<p>Slide 33 shows what we are trying to achieve with our new approach - a weighting to surrounding observations that is independent of your location and of the local structure.</p>
<p>Slides 34 and 35 show the process by which we achieve the objective shown in slide 33. We turn our observations into differences, ratios, etc... Then when we pass them into the GBDT (or other method), it does not know where each observation is in the spectrum of values, and is forced to create a weighting of local observations that is common across all values in the spectrum of observations.</p>
<p>Let me know if you want any of the formulas used in 35.</p>
<p>I know this is probably a new concept, and it is not simple, so let me know if this isn't clear and I will try again.</p>
<p>Best regards,</p>
<p>Morgan</p>
<p>Hi Sid,</p>
<p>Great questions!</p>
<p><em>Does the CSF also inform which features to choose for the 500 nearest houses features you added, or ?</em></p>
<p>The CSF creates forecasts directly.</p>
<p>I'm going to tackle your remaining questions out of order...</p>
<p>*For the CSF process in the slides how do you end up with many N*N matrices?*</p>
<p>We created pairwise differences (ratios, etc) for each variable. </p>
<p>So if we started with three houses, built in 1965, 1999, and 2015, we would get the following matrix:
[0,34,50];[34,0,16];[50,16,0]. We thereby turned our 3x1 vector into a 3x3 matrix. We repeat this process for all the variables we want to include.</p>
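<p>In NumPy the pairwise expansion for one variable is a one-liner; this reproduces the year-built example above:</p>
<pre><code>import numpy as np

year_built = np.array([1965, 1999, 2015])
diff = np.abs(year_built[:, None] - year_built[None, :])
# array([[ 0, 34, 50],
#        [34,  0, 16],
#        [50, 16,  0]])
</code></pre>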
<p><em>In the CSF process the “new Y ~ new X” fit uses the additional features from the similarity to the other houses?</em></p>
<p>Yes, in fact it <strong>only</strong> uses the similarity measures.</p>
<p><em>How are the YSimilarity, X1Similarity and X2Similarity exactly combined for the similarity matrix?</em>
To clarify the above, "new Y" = YSimilarity; "new X" = X1Similarity, X2Similarity, etc.</p>
<p>We used a GBDT to generate forecasts of YSimilarity. These forecasts can be interpreted as correlation forecasts, or as a "similarity matrix."</p>
<p><em>I did not exactly get how the GBTrees fit into the CSF process or are they just for feature importance for later?</em></p>
<p>We use them to create similarity/correlation forecasts.</p>
<p><em>In GBTrees You used “alternative weights” for neighboring observations. Is the alternative weight function based on just kNN?</em></p>
<p>We extract the "alternative weights" (aka multivariate coefficients) from the correlation matrix. As I think you have already figured out, this is pretty similar (but likely superior) to a kNN where you are defining your neighbours using a method that weights features according to their importance.</p>
<p>Hope that helps!</p>
<p>Morgan</p> |
Zillow Prize: Zillow’s Home Value Prediction (Zestimate) | Share some insights about LightGBM vs. CatBoost in Zillow Prize-1 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Zillow Prize: Zillow’s Home Value Prediction (Zestimate)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My final ranking benefited from the only_catboost kernel: <a href="https://www.kaggle.com/abdelwahedassklou/only-cat-boost-lb-0-0641-939">https://www.kaggle.com/abdelwahedassklou/only-cat-boost-lb-0-0641-939</a>. It looks obvious that with little feature engineering, plain CatBoost performs much better than plain LightGBM, Here is what I found out.</p>
<p>With plain LightGBM, using sale-month as a categorical feature, even in-sample prediction errors (the difference between the predicted pricing error and the real pricing error) are biased: if we group the errors by month, we find that within each group the errors have a non-zero median. But the errors over the total population are unbiased, which means LightGBM is not doing a very good job of learning local structures. With plain CatBoost, there is no bias within each month group. I think this is the major reason CatBoost performs better. </p>
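<p>The per-month bias check is easy to reproduce; a sketch, assuming a dataframe with the sale month, the true logerror and the model prediction (column names are illustrative):</p>
<pre><code># residual = predicted pricing error minus real pricing error
df["residual"] = df["pred_logerror"] - df["logerror"]

print(df["residual"].median())                        # close to zero overall for LightGBM
print(df.groupby("sale_month")["residual"].median())  # non-zero medians within each month
</code></pre>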
<p>To look into why, from the CatBoost tech websites I found this reference: 'Fighting biases with dynamic boosting' (arXiv: 1706.09516). It says that if we directly train with gradient boosting, the errors are actually biased (here bias is defined in a different way from what I mentioned above); the idea to solve the problem is to not repeatedly use the same samples along iterations. I did not have time to look into the CatBoost source code, but I assume that since it appeared in the references, the idea has been applied. Also, it is consistent with the observation that CatBoost trains much slower. And with Zillow Prize-1, this bias problem is amplified by the fact that the data is very noisy, so a small contribution from bias adjustment could have made a huge difference in ranking.</p>
<p>This is all I find out, looking forward to anyone else contributing ideas like how this error-bias problem actually leads to the training-bias within months, and how this bias-fighting idea is actually implemented within CatBoost.</p>
<p>Now, if anyone interested in my solution, I would like to share some general ideas.</p>
<p>Before knowing about CatBoost, I solved the LightGBM bias problem by adding one more layer of modeling on the target months' (10, 11, 12) errors, which could be viewed as model stacking: not a different model family on the same data, but the same model family on different data. By doing this, I achieve manual bias adjustment within each group for sure, and hopefully capture some more month-specific local structure. And of course, with the smaller sample size, a different param set and a smaller hand-picked feature set were used for this second-layer training.</p>
<p>Then, when I was settled with my model, I looked around and saw this only_catboost kernel less than 24 hours before the deadline. I did not have much time or many submission opportunities to play with it, so I used much of it as it was, even with fewer ensembles. But the original kernel has several critical flaws that need to be fixed:</p>
<ul>
<li>1, there is no effective prediction for 2017, which is easy to notice.</li>
<li>2, there is no need to mark 2017 tax-features NA for training, just need to make sure to use 2016 features for 2016 errors and 2017 for 2017.</li>
<li>3, there should be no sale-day feature: leaving aside whether there is any pattern in that dimension, we do not have that information when making predictions.</li>
<li>4, for month and quarter features, the original kernel used them as year-month, and year-quarter, i.e. same month from 2016 and 2017 are used as different groups. This is critically bad, it would make 2016 prediction (public LB) good, but 2017 prediction very bad, as 2017 values of these features would not appear in training data at all. </li>
</ul>
<p>Either one above, used alone, would have made it into the top 100: 0.0749786 for CatBoost and 0.0748932 for 2-step LightGBM (I believe that with some more careful tweaks CatBoost could do better), so I think the key in this competition was handling the training bias. </p>
<p>Besides all above, I used little feature engineering. Just 2 things,</p>
<ul>
<li>1, census_and_block info is not numerical, it is actually a concatenation of 3 pieces of number-like categorical information. The first four digits contain the same info as FIPS, the next 6 the census, and the last 4 the block.</li>
<li>2, including census and block, there are several high-cardinality categorical variables. For CatBoost I directly used them as categorical; for LightGBM, I replaced each of them with the group-by mean of a popular numerical variable, chosen according to feature importance for each categorical variable (see the sketch after this list). I did not have a chance to try this on CatBoost.</li>
</ul>
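<p>A minimal pandas sketch of that group-by-mean replacement (the column names are placeholders, not necessarily the ones used):</p>
<pre><code>def group_mean_encode(train, test, cat_col, num_col):
    # replace a high-cardinality categorical with the mean of a numerical
    # column within each category; means are computed on train only
    means = train.groupby(cat_col)[num_col].mean()
    train[cat_col + "_enc"] = train[cat_col].map(means)
    test[cat_col + "_enc"] = test[cat_col].map(means).fillna(means.mean())
    return train, test
</code></pre>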
<p>I also tried brute-force feature engineering, but found it not useful. For each numerical vs. categorical feature pair, I computed within-group features: group mean, group-neutral, group norm and the absolute values of the above, then selected by feature importance. It turned out to overfit the CV set and not perform well on the test data. </p>
<p>This is my first Kaggle competition, I really learned a lot, and I hope this sharing is beneficial for someone else. </p>
<p>Looking forward to the top teams sharing about their insights and tricks, especially on feature engineering, which I did not do very well.</p> |
chaii - Hindi and Tamil Question Answering | 10th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Data</h2>
<p>In my final submission I only used models which were trained on <a href="https://github.com/facebookresearch/MLQA" target="_blank">MLQA</a>, <a href="https://github.com/deepmind/xquad" target="_blank">xquad</a> and the competition data.<br>
I experimented a little bit with data augmentation but I was not able to improve my score.</p>
<h2>Models</h2>
<p>My final submission was an ensemble of finetuned</p>
<ul>
<li><strong>XLM-Roberta</strong> (pretrained on squad 2) and</li>
<li><strong>MuRIL</strong></li>
</ul>
<h2>Training</h2>
<p>I split the external data + the competition data into <strong>5 folds</strong> and trained both models on the five folds.</p>
<p>During training I used <strong>early stopping</strong>: after each epoch I checked the Jaccard Similarity on the oof split and afterwards used the weights of the epoch where JS was highest (usually this was after the second epoch).</p>
<p>I used <strong>Kaggle TPUs</strong> for training.</p>
<h2>Ensembling</h2>
<p>I first ensembled by taking the mean over the 5 folds for both xlm-rob and muril. I then determined for both models the top n predicted strings based on the sum of the start and end logits.</p>
<p>In order to get the final prediction I then used the following algorithm:</p>
<p>For each model:</p>
<ol>
<li>Store the top n postprocessed predicted strings in a list.</li>
<li>Sort the list (from low to high) based on the score of the predicted string (where the score is the sum of the start and end char logit).</li>
<li>Assign a new score to each predicted string, by simply taking its index in the list.</li>
</ol>
<p>For each predicted string:</p>
<ol>
<li>Sum up the new scores from each model.</li>
<li>Pick the string with the highest sum as the final prediction.</li>
</ol>
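<p>A compact sketch of this rank-based vote, assuming each model contributes a list of <code>(predicted_string, start_logit + end_logit)</code> candidates:</p>
<pre><code>from collections import defaultdict

def rank_vote(candidates_per_model):
    totals = defaultdict(float)
    for candidates in candidates_per_model:
        # sort low -> high so that better candidates get a larger index (rank)
        ranked = sorted(candidates, key=lambda p: p[1])
        for rank, (text, _) in enumerate(ranked):
            totals[text] += rank
    # final prediction: the string with the highest summed rank
    return max(totals, key=totals.get)
</code></pre>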
<p>I also tried mapping the token logits back to the characters but I was able to achieve better results using the algorithm above.</p>
<h2>Implementation</h2>
<p>I used <strong>tensorflow</strong> and uploaded my code <a href="https://github.com/drehero/kaggle-solutions/tree/main/chaii-hindi-and-tamil-question-answering-10th-place" target="_blank">here</a>.</p>
<h2>Other</h2>
<p>I also trained <strong>RemBert</strong>, but unfortunately I was not able to ensemble all three models because I always ran into "Notebook Exceeded Allowed Compute" errors during submission.</p>
<h2>Acknowledgements</h2>
<p>Thanks to the community. You shared interesting ideas in discussions and notebooks and I was able to learn a lot from you. The following notebooks where especially helpful to me:</p>
<ul>
<li><a href="https://www.kaggle.com/rhtsingh/chaii-qa-5-fold-xlmroberta-torch-fit" target="_blank">https://www.kaggle.com/rhtsingh/chaii-qa-5-fold-xlmroberta-torch-fit</a></li>
<li><a href="https://www.kaggle.com/rhtsingh/external-data-mlqa-xquad-preprocessing" target="_blank">https://www.kaggle.com/rhtsingh/external-data-mlqa-xquad-preprocessing</a></li>
<li><a href="https://www.kaggle.com/thedrcat/chaii-eda-baseline" target="_blank">https://www.kaggle.com/thedrcat/chaii-eda-baseline</a></li>
<li><a href="https://www.kaggle.com/kishalmandal/5-folds-infer-combined-model-0-792" target="_blank">https://www.kaggle.com/kishalmandal/5-folds-infer-combined-model-0-792</a></li>
</ul> |
chaii - Hindi and Tamil Question Answering | 12th Place Solution - top of the silver... | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank Kaggle and the competition hosts. This competition was really interesting because the training data is really small and we were encouraged to use external data.</p>
<p>My solution is really simple. There are only two key points - Character Level Ensemble & Postprocessing.</p>
<h2>Machine Resources</h2>
<p>Colab pro+ & kaggle notebook</p>
<h2>Experiment tracking</h2>
<p>W&B(<a href="https://wandb.ai/site" target="_blank">https://wandb.ai/site</a>)</p>
<h2>Model</h2>
<p>I trained three models.</p>
<ul>
<li>XLM-RoBERTa</li>
</ul>
<pre><code>model: deepset/xlm-roberta-large-squad2
data: competition dataset + MLQA + XQUAD
fold: 5-fold
max_length: 384
stride: 128
epochs: 2
lr: 2.0e-5
</code></pre>
<ul>
<li>MuRIL</li>
</ul>
<pre><code>model: google/muril-large-cased
data: competition dataset + MLQA + XQUAD
fold: 5-fold
max_length: 384
stride: 128
epochs: 2
lr: 2.0e-5
</code></pre>
<ul>
<li>RoBERTa (WECHSEL)<br>
Thank you for sharing! <a href="https://www.kaggle.com/bminixhofer" target="_blank">@bminixhofer</a></li>
</ul>
<pre><code>hindi model: roberta-large-wechsel-hindi
tamil model: roberta-large-wechsel-tamil
hindi data: competition dataset + MLQA + XQUAD
tamil data: competition dataset + translated SQuAD2.0
fold: 5-fold
max_length: 400
stride: 135
epochs: 3
lr: 2.0e-5
</code></pre>
<h4>result</h4>
<table>
<thead>
<tr>
<th>model</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>XLM-RoBERTa</td>
<td>0.7060</td>
<td>0.778</td>
<td>0.738</td>
</tr>
<tr>
<td>MuRIL</td>
<td>0.7038</td>
<td>0.760</td>
<td>0.743</td>
</tr>
<tr>
<td>RoBERTa</td>
<td>0.7481 & 0.5907</td>
<td>0.763</td>
<td>0.735</td>
</tr>
</tbody>
</table>
<p>MuRIL is strong💪</p>
<h2>Character Level Ensemble</h2>
<p>It's impossible to simply average the above three models' outputs because their tokenizers are different. So I expanded the token probabilities to character probabilities using the offsets.<br>
At first, I tried ensembling XLM-RoBERTa & MuRIL. It scored 0.782 in public LB (+0.004 boost), but 0.754 in private LB (+0.011 boost)!!</p>
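<p>A rough sketch of the token-to-character expansion using the tokenizer's <code>offset_mapping</code> (the exact aggregation in the original code may differ):</p>
<pre><code>import numpy as np

def to_char_logits(context, token_logits, offsets):
    # token_logits: per-token start (or end) scores for one feature;
    # offsets: (char_start, char_end) per token, (0, 0) for special tokens
    char_logits = np.full(len(context), -1e9, dtype=np.float32)
    for logit, (s, e) in zip(token_logits, offsets):
        if e > s:
            char_logits[s:e] = np.maximum(char_logits[s:e], logit)
    return char_logits

# character arrays have the same length for every model, so the per-model
# start/end scores can simply be averaged before decoding the answer span
</code></pre>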
<h2>Postprocessing</h2>
<p>On the last weekend, I checked how the models were getting the answers wrong.<br>
I found that some predictions consist of Devanagari numerals instead of Arabic numerals.<br>
<strong>ex) answer_text: 1945, prediction: १९४५</strong><br>
My postprocessing function is below.</p>
<pre><code>def postprocess_v3(pred, context, language):
    if language == "hindi":
        bad_starts = [".", ",", "(", ")", "-", "–", ";", "।"]
        bad_endings = ["...", ",", "(", ")", "-", "–", ";", "।"]
    elif language == "tamil":
        bad_starts = [".", ",", "(", ")", "-", "–", ";"]
        bad_endings = ["...", ",", "(", ")", "-", "–", ";", "."]
    tamil_ad = "கி.பி"
    tamil_bc = "கி.மு"
    tamil_km = "கி.மீ"
    hindi_ad = "ई"
    hindi_bc = "ई.पू"
    pred = pred.lstrip().rstrip()
    if language == "hindi":
        numerical_table = str.maketrans({
            '०': '0',
            '१': '1',
            '२': '2',
            '३': '3',
            '४': '4',
            '५': '5',
            '६': '6',
            '७': '7',
            '८': '8',
            '९': '9'
        })
        translated_pred = pred.translate(numerical_table)
        if translated_pred in context and len(pred.split()) == 1:
            pred = translated_pred
    if pred == "":
        return pred
    while any([pred.startswith(y) for y in bad_starts]):
        pred = pred[1:]
    while any([pred.endswith(y) for y in bad_endings]):
        if pred.endswith("..."):
            pred = pred[:-3]
        else:
            pred = pred[:-1]
    if pred.endswith("..."):
        pred = pred[:-3]
    if any([pred.endswith(tamil_ad), pred.endswith(tamil_bc), pred.endswith(tamil_km), pred.endswith(hindi_ad), pred.endswith(hindi_bc)]) and pred+"." in context:
        pred = pred+"."
    if pred.find("(") != -1:
        _pred = pred[:pred.find("(")]
        if _pred != "":
            pred = _pred
    return pred
</code></pre>
<p>Applying postprocessing, it scored 0.791 in public LB (+0.009 boost) and 0.761 in private LB (+0.007 boost).</p>
<p>I selected the three-model ensemble; it scored 0.792 in public LB but 0.760 in private LB…</p>
chaii - Hindi and Tamil Question Answering | 13th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> !</p>
<p>Thanks <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>. Yes, that is what I had started off with but I found that increasing the voting pool increased the LB score by +0.004 and further checking for similar answers improved the score by +.001</p>
<p>Yes different archs used different tokenizations. That coupled with different doc stride and max-len means that start/end logits have different shapes. So I just ensembled based on multi stage voting of count of predictions- Kernel is here: <a href="https://www.kaggle.com/trushk/qa-ensemble-voter-top-n-votes-similarity#Change-vote-count-if-votes-are-similar" target="_blank">https://www.kaggle.com/trushk/qa-ensemble-voter-top-n-votes-similarity#Change-vote-count-if-votes-are-similar</a></p>
<p>I would like to thank Kaggle and Chaii (Challenge in AI for India) for setting up this interesting Multilingual Question Answering Challenge. </p>
<p>It turns out we have a couple of solutions that would have put us in a Gold position, but that is how these competitions go sometimes! </p>
<p><strong>Acknowledgements</strong></p>
<p>A big shout out to <a href="https://www.kaggle.com/kishalmandal" target="_blank">@kishalmandal</a>, <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> and <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> for their wonderful kernels and models that we could leverage. </p>
<p>Also thanks to my wonderful teammates <a href="https://www.kaggle.com/khyeh0719" target="_blank">@khyeh0719</a> and <a href="https://www.kaggle.com/maxjeblick" target="_blank">@maxjeblick</a> for their contributions to our solution. This was a team effort! </p>
<p><strong>External Datasets</strong></p>
<p>The models in the final pipeline used a mix of the baseline dataset and the MLQA and Squad datasets that were disclosed early on in the competition in the datasets page here: <a href="https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering/discussion/264581" target="_blank">https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering/discussion/264581</a> </p>
<p><strong>Models</strong></p>
<p>All models below were trained with doc_stride=135, max_length=400</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>LB Score</th>
<th>Comments</th>
<th># Folds Used</th>
</tr>
</thead>
<tbody>
<tr>
<td>deepset/xlm-roberta-large-squad2</td>
<td>0.792</td>
<td>5 fold Public Model</td>
<td>5</td>
</tr>
<tr>
<td>deepset/xlm-roberta-large-squad2</td>
<td>0.781</td>
<td>10 fold Tez model Trained Locally</td>
<td>3</td>
</tr>
<tr>
<td>google/rembert</td>
<td>0.777</td>
<td>5 Fold Trained Locally</td>
<td>4</td>
</tr>
<tr>
<td>google/muril-base-cased</td>
<td>0.772</td>
<td>5 Fold Trained Locally</td>
<td>4</td>
</tr>
</tbody>
</table>
<p>Apart from the models above we tried various different architectures including but not limited to <strong>electra, muril-base, multilingual BERT, span BERT, tamillion, indic-bert</strong> but none of them were as good as the architectures in the table for this task. </p>
<p>There was very little CV -> LB correlation in the competition for us. High CV scores did poorly on LB while decent but not too poor CV results gave higher LB scores. I suspect that this is due to the noise in the dataset and also that private dataset is cleaner with multiple annotators labeling the data. </p>
<p><strong>Speeding up calculations</strong><br>
- <code>model.half()</code> - FP16 for slight improvement in speed and no impact to LB score<br>
- Reusing features for same archs<br>
- Torchscript/jit - Experiments resulted in out of memory<br>
- Tried joblib and multiprocessing to do parallel calculations - All experiments worked on train set but out of memory on test. Maybe because test set had longer contexts? </p>
<p><strong>Ensemble Strategies</strong><br>
One of the main challenges was the 5 hr kernel run limit. With different architectures and possibly different max_len/doc_stride it became pretty evident that we needed a good way to ensemble models. Averaging logits only works for same archs and if same doc_stride/max len is used.<br>
I read up on best practices for how to do this, including the top solutions in the TensorFlow Question Answering Challenge here: <a href="https://www.kaggle.com/c/tensorflow2-question-answering/discussion/128832" target="_blank">https://www.kaggle.com/c/tensorflow2-question-answering/discussion/128832</a>. I came across majority/hard voting in the solution descriptions and decided to implement a very simple voting algorithm that ended up giving a significant boost to the LB score. </p>
<p><strong>Voting</strong><br>
The voter took the cleaned predictions (after basic post processing) as inputs. We had 4 architectures in the final pipeline, and the first stage checked for a majority vote among the 4 candidate predictions based on the top answer from each architecture. If there was no majority, the voting pool was expanded to the top N predictions among all architectures. We found that taking the top 2 votes from each arch gave good results. To further improve the 2nd stage of the voting, if there was still a deadlock, the votes were compared for similarity using the Levenshtein Ratio, and if votes were within a certain threshold, one of those answers was selected as the final answer, since the metric is Jaccard and having part of the answer correct would still be useful. </p>
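<p>A simplified sketch of that two-stage vote (using <code>Levenshtein.ratio</code> for the similarity check; the threshold and tie-breaking here are placeholders, see the linked kernels for the real logic):</p>
<pre><code>from collections import Counter
import Levenshtein

def vote(top1_preds, topn_preds, sim_threshold=0.8):
    # stage 1: strict majority among each architecture's best answer
    text, n = Counter(top1_preds).most_common(1)[0]
    if n > len(top1_preds) / 2:
        return text
    # stage 2: expand the pool to the top-N answers from every architecture,
    # merging votes for nearly identical strings (partial answers still help Jaccard)
    merged = Counter()
    for cand, c in Counter(topn_preds).items():
        for seen in merged:
            if Levenshtein.ratio(cand, seen) >= sim_threshold:
                merged[seen] += c
                break
        else:
            merged[cand] = c
    return merged.most_common(1)[0][0]
</code></pre>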
<p><strong>Score progression</strong> <br>
- Public LB kernel -> <strong>LB 0.792</strong> <br>
- Voting ensemble with 3 different architectures -> <strong>LB 0.798</strong> (+0.006)<br>
- Improving models in voting -> <strong>LB 0.801</strong> (+0.003)<br>
- Adding another architecture to voting and selecting best models -> <strong>LB 0.809</strong> (+0.008)<br>
- Fine tuning voting logic (see above) -> <strong>LB 0.811</strong></p>
<p>Code can be found here: <a href="https://www.kaggle.com/trushk/qa-ensemble-voter-top-n-votes-similarity" target="_blank">https://www.kaggle.com/trushk/qa-ensemble-voter-top-n-votes-similarity</a></p>
<p>Description of voting logic: <a href="https://www.kaggle.com/trushk/ensemble-using-voting-13th-place-solution" target="_blank">https://www.kaggle.com/trushk/ensemble-using-voting-13th-place-solution</a></p> |
chaii - Hindi and Tamil Question Answering | 14th place submission | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'm a little frustrated as I had a 0.768 solution which I didn't submit, which is the difference between gold and silver. But still I'm happy to get my second competition medal!</p>
<h2>Fine-tuning on SQuAD</h2>
<p>I think some of the other solutions have already mentioned this, but MuRIL and RemBERT both significantly benefitted from being fine-tuned on SQuAD before being fine-tuned on the competition data. It seemed like a pretty obvious thing to try since the XLMR checkpoint everyone was basing their models on had also been fine-tuned on the same data. I used this approach for all of my MuRIL and RemBERT models.</p>
<h2>Torchscript</h2>
<p>I exported all of my models to torchscript which gave me a slight inference speed up. I was able to ensemble 20 folds, but it wasn't quite enough to fit 25 folds in the time limit. I also tried exporting to ONNX, but this didn't give me any speed up at all. It was my first time using either torchscript or ONNX so I'm sure there are a bunch of additional optimisations which could've been made.</p>
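<p>Exporting a Hugging Face QA model to TorchScript is roughly the following (a sketch with a placeholder checkpoint path, not the exact submission code):</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

ckpt = "path/to/finetuned_fold0"          # placeholder
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(ckpt, torchscript=True).eval()

enc = tokenizer("question", "context", return_tensors="pt", max_length=384,
                padding="max_length", truncation="only_second")
example = (enc["input_ids"], enc["attention_mask"])

traced = torch.jit.trace(model, example)
torch.jit.save(traced, "fold0_traced.pt")

# at inference time the traced module returns (start_logits, end_logits)
loaded = torch.jit.load("fold0_traced.pt").eval()
with torch.no_grad():
    start_logits, end_logits = loaded(*example)
</code></pre>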
<h2>0.768 which I didn't submit</h2>
<p>The one I didn't submit was 10 fold MuRIL-large-cased and 10 fold RemBERT. These models actually had my best CV as well, but the public LB was pretty mediocre so I stupidly ignored them. When I calculated their OOF scores I noticed that although they had very similar CV scores (0.722 for MuRIL and 0.719 for RemBERT) they performed very differently on each language:</p>
<table>
<thead>
<tr>
<th></th>
<th>MuRIL</th>
<th>RemBERT</th>
</tr>
</thead>
<tbody>
<tr>
<td>OOF All</td>
<td>0.722</td>
<td>0.719</td>
</tr>
<tr>
<td>OOF Hindi</td>
<td>0.782</td>
<td>0.753</td>
</tr>
<tr>
<td>OOF Tamil</td>
<td>0.601</td>
<td>0.652</td>
</tr>
</tbody>
</table>
<p>MuRIL was better at Hindi questions, and RemBERT was much better at Tamil, so for inference I just divided the predictions by language, giving the Hindi questions to MuRIL and the Tamil questions to RemBERT. The final predictions were the mean of 10 folds of each model.</p>
<p>I also had another submission with these 2 models where I generated the predictions of each fold separately and then had the folds vote on the start and end position of the answer. That one also wasn't submitted, but would've been 0.765.</p>
<h2>The actual 14th place submission</h2>
<p>I switched between 5-folds and 10-fold strategies a couple of times during this competition. I ended up using 5-folds as I was concerned that I wouldn't be able to get enough diversity into my ensemble using 10-fold models. 10 folds was also really slowing down the amount of time it took to complete a training run. </p>
<p>My final 0.758 solution was 5-fold MuRIL x3 and 5-fold XLMR. The different MuRIL checkpoints had all been trained on slightly different data:<br>
MuRIL 1: fine-tune on SQuAD -> fine-tune on competition data + MLQA Hindi + XQUAD Hindi<br>
MuRIL 2: fine-tune on SQuAD -> fine-tune on competition data + MLQA (Hindi and English) + XQUAD Hindi<br>
MuRIL 3: fine-tune on SQuAD -> fine-tune on competition data + Google translated Tamil SQuAD sample + MLQA Hindi + XQUAD Hindi</p>
<p>The final ensemble contained these models with their folds averaged in a few different ways. The averaged predictions were then combined through voting on the start and end positions.</p>
<p>Thanks for the enjoyable competition!</p> |
chaii - Hindi and Tamil Question Answering | 16th place solution: Data augmentation and fixes + Evaluation across different models. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Also for people who know their Tamil pop culture: <a href="https://tinyurl.com/otherbanana" target="_blank">https://tinyurl.com/otherbanana</a> Couldn't help, now that the competition is over! :)</p>
<p>Thanks to Kaggle and Google India for hosting this competition and Congratulations to all the winners. That there are many medal winners who don't know either or both languages is very good news for low-resource languages in general. </p>
<h2>Data Augmentation and Data fixes</h2>
<p>Like many other competitors observed, adding TydiQA Telugu and Bengali datasets to the mix certainly boosted the model performances both in CV and LB. This shouldn't be surprising at all since most of us are using multilingual models and adding any Indic QA dataset will have a lot more real-world context overlap. </p>
<p>One other thing I usually do for all single-answer datasets is to fine-tune on Squad2 or other multiple-answer datasets first, instead of directly fine-tuning on an MLM checkpoint. This gives the models a better intuition on <em>not ignoring</em> missed labels. XLMR already comes with a model we all fine-tuned further on. I did these for Muril and Rembert, both of which gave a consistent 0.8 - 0.81 across the LB. Private LB scores were also in the higher range for Muril and Rembert fine-tuned on top of Squad2. (0.76 - 0.77) </p>
<p>Chaii dataset is a bit noisy on its own. I didn't fix any incorrect labels. But I <a href="https://www.kaggle.com/jdoesv/chaii-realigned-answers-single" target="_blank">fixed the answer-start values</a> by picking the first matching answer candidate from <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> 's fine-tuned version (that he published on the first week of this competition). </p>
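<p>A much simpler variant of that realignment just falls back to the first occurrence of the answer in the context (the write-up used the fine-tuned model's candidates to choose among multiple occurrences, which this sketch does not do):</p>
<pre><code>def realign_answer_start(context, answer_text, old_start):
    # keep the stored offset if it already points at the answer text
    if context[old_start:old_start + len(answer_text)] == answer_text:
        return old_start
    found = context.find(answer_text)
    return found if found != -1 else old_start
</code></pre>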
<h2>Models</h2>
<p>I trained XLMR, Muril-Large, and Rembert over their Squad2 checkpoints, with Chaii dataset, MLQA Hindi, XQuaD Hindi, TydiQA Telugu and Bengali datasets. Didn't do anything wrt the model perse. Just playing around with hyperparameters, etc. But because this competition provides an entire passage as a context, this is as much as a passage selection task as it is a question-answering task. So I maintained a 1:2 ratio of relevant vs irrelevant contexts, by sampling on Chaii's train dataset and using the remaining external data as-is (they were smaller passages) for all models' training.</p>
<p>Rembert is a real humdinger of a model. It had the most NLU capabilities among the three model-types (at least the way I trained them). Case in point: </p>
<p>There was a question about <code>the largest mountain</code> in Chaii Dataset and the human annotator marked <code>Mount Everest</code> as the answer. But Rembert (Squad2 checkpoint. Not the Chaii checkpoint) predicted <code>Mauna Loa</code> because the context read <code>The tallest mountain doesn't necessarily mean the largest. For instance, Mauna Loa has a base area of ...</code>. </p>
<p>I picked the most interesting example. But it consistently out-performed both Muril-Large and XLMR when it came to picking the right context. It performed well on extracting outputs from Wikipedia's trivia boxes too. It didn't mark the exact word boundaries for predicted answers. But it almost always had the right answer. </p>
<h2>Ensembling</h2>
<p>This is where I messed up with selecting the submissions. My best internal submission came from a 3-fold Rembert (0.778). But since I assumed that Rembert wasn't right in generating answer boundaries, I fed its passage outputs to Muril-Large to generate word-boundaries. This reduced the scores to 0.757 :P It was a bit of a gamble between relevant context selection vs answer selection, and I picked an approach that weaves both. This didn't work out so well in retrospect :)</p>
<p>Will cleanup the notebooks and publish them later today. But for now, here are the Squad2 checkpoints that I fine-tuned on Rembert and Muril-Large.</p>
<ul>
<li><a href="https://huggingface.co/Sindhu/rembert-squad2" target="_blank">Rembert Squad2 Finetuned checkpoint</a></li>
<li><a href="https://huggingface.co/Sindhu/muril-large-squad2" target="_blank">Muril-Large Squad2 Finetuened checkpoint</a> </li>
</ul>
<h2>Some other interesting approaches</h2>
<p>The low-resource nature of Hindi and Tamil in the NLP scene, combined with their morphological richness, restricted the usage of better pre-training approaches like MPNet and Electra, or even SOTA QA models like LUKE. Hindi is a fusional language, so a word can have multiple morphemes depending on gender, tense, etc. (like <code>go</code> vs <code>went</code> in English). The current fix (that seems to be working well) to compensate for this is to train on a large amount of data. </p>
<p>Agglutinative languages like Tamil are a different kind of reality altogether. One thumb rule I learnt during this competition to evaluate models for agglutinative languages is to look for models' metrics on Turkish datasets (Turkish is an agglutinative beast). Where it isn't available, metrics on Japanese datasets will work just as fine. Character-based transformers like <a href="https://huggingface.co/google/canine-c" target="_blank">Canine</a> and <a href="https://arxiv.org/pdf/2106.12672.pdf" target="_blank">Charformers</a> sound like promising approaches for this type of language. They aren't mature/stable yet. But this is NLP. They might very well be the norm this time next year. </p>
<p>This competition served as an exponential learning curve for me. So much so that, during the start of the competition, I look for metrics in published NLP papers. By the end of the competition, it was more about ablation studies. It's also the first time I was a bit active in forums, GitHub, etc. So the confidence boost helped a lot too. I might be a bit more active on Kaggle going forward, and invest in a decent GPU source too :) </p>
<p><strong>Edit:</strong> <a href="https://www.kaggle.com/jdoesv/clean-rembert-feat-selector-xlmr-muril-ans-gen" target="_blank">Final submission inference notebook</a>. Even though this approach didn't work in this particular competition, the usage of a QA model for passage selection might still benefit someone in a better-labelled dataset. All inputs are the three types of the models. Of the two utility scripts, <code>hf-qa-utils</code> is mostly boilerplate code from the HuggingFace example notebooks to tokenize and map back datasets. <code>passage-qa-utils</code> does the sub-context selection from a Rembert QA model. </p>
<p><a href="https://www.kaggle.com/jdoesv/custom-ensemble-outputs?scriptVersionId=77964669" target="_blank">A second approach</a> of using 3 individual model outputs and passing them through an LGBM Ranker also faired quite well in private LB, that I forgot to mention in the original post.</p>
<p>Thank you for your kind words <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> !😄</p> |
chaii - Hindi and Tamil Question Answering | 17th place solution - BigBird 🐥 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Me too! I only got a silver 😄</p>
<p>Thanks for voting me MVP! I humbly accept the award 😄</p>
<p>I tried hard this competition to learn as much as I could and share as much as possible while keeping a few tricks up my sleeve. Your early notebooks laid the foundation that many of us used throughout the competition. Thank you for your hard work, and congratulations on the 1st place finish 🎉</p>
<p>I'll change my emoji to a kangaroo now 😊</p>
<p>I also tried without doing a warm start and it was terrible 😄</p>
<p>And I didn't use Tensorflow to train the BigBird model - I used Jax/Flax. <a href="https://github.com/huggingface/transformers/tree/master/examples/flax" target="_blank">Hugging Face has a couple of examples</a> that make it dead simple to get going. I'm sure there are more optimizations and details that I could have done, but I didn't have the time nor experience to learn them in time for this competition.</p>
<p>Also, this would have been extremely expensive without using the TRC. I'm curious to see if Kaggle takes a stance on whether competitors can use TRC in the future</p>
<p>Thanks for the discussions and posts! </p>
<p>Thanks Chris! It means a lot to get praise from you 😊</p>
<p>I wanted to run a 4k seq length model… but it was too big to train! </p>
<p>Sometimes the shakeups work in your favor, sometimes they don't. They more experience you have, the less likely it will be to drop. Keep practicing!</p>
<p>Thanks for all of the discussion and notebooks! See you in the next one!</p>
<h1>👍</h1>
<p>First of all, a huge thank you to the competition hosts and Kaggle. I spent an absurd amount of time on this competition, and I’ve definitely learned a lot about QA, multilingual models, and Kaggle in general.</p>
<p>This was basically the first time I put myself out into the public section of Kaggle, and I’m so happy for the following achievements since the start of this competition:</p>
<ul>
<li>3x Kaggle Expert (Datasets, Notebooks, and Discussions), soon to be 4x 😊</li>
<li>2x Gold notebooks</li>
<li>20+ gold/silver comments</li>
<li>Going from 600+ on the LB a few days ago to ending at 17th today (the turtle after my username was initially set to represent my progress. Slow and steady!)</li>
<li>A new job 🤗</li>
</ul>
<h2>My solution</h2>
<h4>XLM-R</h4>
<ul>
<li>6 folds, 384 max length, 128 stride, private/public LB 0.747/0.77, CV avg 0.718</li>
</ul>
<h4>MuRIL Large</h4>
<ul>
<li>6 folds, 384 max length, 128 stride, private/public LB 0.739/0.777, CV avg (forgot)</li>
</ul>
<h4>RemBERT</h4>
<ul>
<li>5 folds, 384 max length, 128 stride, private/public LB 0.750/0.786, CV avg 0.748</li>
</ul>
<h4>MuRIL Large BigBird</h4>
<ul>
<li>6 folds, 1024 max length, 512 stride, private/public LB 0.756/0.796, CV avg 0.732,</li>
</ul>
<p>The last model is probably the only novel thing I did, and also the reason I placed this high. I noticed that the contexts for these passages were pretty long, and from a bit of research I saw that BigBird and LongFormer were the two architectures designed for long contexts without having the compute time blow up. I also noticed that the BigBird authors indicated that they did a “warm start” from a BERT checkpoint when training their model that reduced the number of training steps, so I figured I’d do the same from XLM-R or MuRIL (<a href="https://arxiv.org/pdf/2007.14062.pdf" target="_blank">see page 7 under section called Pretraining and MLM</a>). Since MuRIL is basically identical to BERT, all of the weights (except one module) loaded into a BigBird model without trouble. The thing that caught me up for a bit was the fact that the position embeddings (which are learned) were going from (512, 768) to (4096, 768) (max_seq_length, hidden_size). I followed the Longformer authors and ended up tiling the original position embeddings 8 times to make it fit (see page 6 of the Longformer paper, under the section Position Embeddings).</p>
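<p>The position-embedding part of that warm start is just a tile of the old (512, hidden) matrix up to the new length; a sketch (the real run was done in Flax, and loading the remaining transformer weights is a separate name-matching step):</p>
<pre><code>import torch

def tile_position_embeddings(old_pos: torch.Tensor, new_len: int) -> torch.Tensor:
    # old_pos: (512, hidden) learned position embeddings from the BERT-style checkpoint
    reps = new_len // old_pos.shape[0]     # e.g. 4096 // 512 = 8
    assert reps * old_pos.shape[0] == new_len
    return old_pos.repeat(reps, 1)         # (new_len, hidden)
</code></pre>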
<p>I trained for 150k steps on TPU v3-8 using Flax with a batch size of 1 using Masked Language Modeling as the training task. I used the Tamil, Hindi, Telugu, Urdu, and English sections of mc4 and stuck it straight into the model without cleaning. Doing whole-word masking and data cleaning would probably help significantly, but I didn’t have the time. I did the same for the large size, but I could only fit a 1024 sequence length and I only trained for 90k steps at batch size 1. The base model got up to 75% accuracy and the large model got 71% on MLM in the eval set.</p>
<p>From there, my MuRIL Large, XLM-R, and Muril Large BigBird models were trained first on SQuAD2 and then on XQuAD, MLQA, TyDiQA (telugu), and chaii datasets, validating only on a stratified chaii fold. Since I still had compute time, I did all of this easily and quickly on TPU v3-8 using Flax. <br>
RemBERT was a pain to train on Kaggle GPUs, so I used Tensorflow and Kaggle TPUs. This made it easy, and I convinced myself that having some models in Tensorflow provided model diversity. I trained it on SQuAD2 and then TyDiQA, MLQA, XQuAD.</p>
<p><a href="https://www.kaggle.com/nbroad/test-1k-6f" target="_blank">Here is the final notebook that got my highest score</a>. It is messy and wasn't meant to be my final submission 😅</p>
<h3>Ensembling</h3>
<p>I didn't actually get a submission out with an ensemble. My highest score without ensembling different models was 6 folds of MuRIL BigBird Large. I would never be able to get 23 folds to finish in time, but I was planning on splitting the passages based on length at around 10k characters. Under 10k go to XLMR, MuRIL Large, RemBERT, and MuRIL BigBird Large. Above 10k would just be MuRIL BigBird Large. Some of those contexts are so long especially when broken up with 384/128.</p>
<p>For ensembling, I tried voting but ran out of time before getting something to work. I ultimately converted the token logits to character logits by averaging the overlap between multiple sections and concatenating them together. Once they are character logits, they can be averaged across all models without worrying about lining up models with different tokenizers. I got this working with a few hours left, and I made one small mistake in my submission, which made the notebooks exceed the submission time. It did seem like a promising way of combining models. <a href="https://www.kaggle.com/nbroad/final-sub-char-avg" target="_blank">Here is the notebook I didn't finish in time that has character-level averaging.</a> It's pretty messy because I was cramming stuff in earlier today to try to get it done.</p>
<p>Much thanks to TRC for the compute time on v3-8 TPUs. All of my models are now available on my <a href="https://huggingface.co/nbroad" target="_blank">Hugging Face profile</a>.</p>
<h3>What didn’t work</h3>
<ul>
<li>Attention heads on the end</li>
<li>Conv. layers on the end</li>
<li>Concatenating start logits to end logits</li>
<li>Translating at test time and doing cross-lingual QA (English questions with Hindi/Tamil contexts)</li>
<li>Monolingual models</li>
<li>“QA Pretraining” on tons of other languages before finetuning: Arabic, Persian, Bengali, German, Thai, Korean. Only English, German, Hindi, Telugu, and Tamil helped. I think they were the highest quality, and most of the others were poor translations.</li>
<li><a href="https://arxiv.org/abs/2101.00438" target="_blank">Splinter</a>/<a href="https://arxiv.org/pdf/2109.01951.pdf" target="_blank">FewShotQA</a> <ul>
<li>I thought this would be perfect for this scenario because it excels at small datasets, but I could never get the model to converge</li>
<li><a href="https://github.com/oriram/splinter" target="_blank">They even had starter code</a> but it was in Tensorflow 1 and I stubbornly tried to turn it into PyTorch.</li></ul></li>
<li>NaturalQuestions/TriviaQA<ul>
<li>NQ is just a pain to work with because it’s absurdly large and the formatting is weird. I don’t think I ever got the training data to actually look right which is why I gave up on using it.</li>
<li>Ran out of time before trying TriviaQA.</li></ul></li>
<li>Training a question-generating model to label more data<ul>
<li>mt5 is huge and hard to train. I could only get a base model trained and it didn’t seem that good. <a href="https://arxiv.org/pdf/1906.05416.pdf" target="_blank">This article indicates</a> that using millions of pseudolabels works, but I didn’t think I’d have the capacity to get this going.</li></ul></li>
</ul> |
chaii - Hindi and Tamil Question Answering | 18th place solution: label variation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you kaggle and host for holding such an exciting competition. We have learned a lot.</p>
<p>Here is what I tried, it was really tough…</p>
<h3>key points of final submission</h3>
<ul>
<li><strong>GroupKFold using first 20 characters of contexts as a key</strong><ul>
<li>I noticed that there are some duplicated contexts in the training data. Some of them are not completely identical, but almost the same contexts with minor changes. So I used GroupKFold with the first 20 characters of each context as the group key (a minimal sketch appears after the post-processing snippet below).</li></ul></li>
<li><strong>Weighted ensemble of 7 all data train models (trained 2 epochs, weights are tuned by optuna)</strong><ol>
<li>XLMRoBERTa</li>
<li>XLMRoBERTa using all the "answer_text" in the context (many positive labels)</li>
<li>XLMRoBERTa, filtering out the positive labels of samples which have 11 or more occurrences of "answer_text" in the context. I tried this to de-noise the labels, because the labels of samples with more occurrences of "answer_text" tend to be wrong. I also filtered out text batches which contain "answer_text" but do not hold the start/end position, because if that answer_text were the right one, the case would be noise, too.</li>
<li>XLMRoBERTa with negative under sampling. pos:neg = 1 : 1</li>
<li>XLMRoBERTa with negative under sampling. pos:neg = 1 : 5</li>
<li>XLMRoBERTa with train pseudo labeling. I tried this because pseudo labels can make the labels more accurate: the first occurrence of "answer_text" is not necessarily the right one. In addition, there are cases where the model cannot answer the question because of the text batching: if the information about the question and the "answer_text" are not in the same text batch, the model cannot answer the question correctly. Pseudo labeling should filter these cases, too.</li>
<li>XLMRoBERTa with train pseudo labeling, and trained only using the text batches which have "answer_text".</li></ol></li>
<li><strong><a href="https://github.com/guchio3/kaggle-chaii-2021/blob/c697d485c2192bab0b60abfbc521741728b8618d/src/model/chaii_model.py#L192-L203" target="_blank">cumax function</a></strong></li>
<li><strong>number text postprocessing</strong><ul>
<li>if the predicted text contains only Devanagari numbers, I changed them to Arabic ones.</li></ul></li>
</ul>
<pre><code>def devanagari_number_to_arabic_number(
answer_text: str, context: str, only_whole_numeric: bool
) -> str:
if only_whole_numeric and not answer_text.strip().isnumeric():
return answer_text
devanagari_to_arabic = {
"०": "0",
"१": "1",
"२": "2",
"३": "3",
"४": "4",
"५": "5",
"६": "6",
"७": "7",
"८": "8",
"९": "9",
}
new_answer_text = answer_text
for devanagari_num, arabic_num in devanagari_to_arabic.items():
new_answer_text = new_answer_text.replace(devanagari_num, arabic_num)
if context.find(new_answer_text) > 0:
return new_answer_text
return answer_text
</code></pre>
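<p>For the GroupKFold split mentioned in the first bullet above, a minimal sketch looks like this (the column names are assumptions):</p>
<pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

train = pd.read_csv("train.csv")
# near-duplicate contexts share the same first 20 characters,
# so they always land in the same fold
train["context_key"] = train["context"].str[:20]

gkf = GroupKFold(n_splits=5)
for fold, (_, valid_idx) in enumerate(gkf.split(train, groups=train["context_key"])):
    train.loc[valid_idx, "fold"] = fold
</code></pre>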
<h3>tried, but not used</h3>
<ul>
<li><strong><a href="https://github.com/guchio3/kaggle-chaii-2021/blob/2d974b88c2689b1d09047947d2c8d19838320d9e/src/prediction/prediction_result.py#L50-L209" target="_blank">char level ensembling</a></strong><ul>
<li>I've implemented char level ensembling logic to ensemble diverse (ex. muril) models, but I didn't use it because it decreased public LB score so much. This worked for private LB, but I couldn't adopt it anyway… </li></ul></li>
<li><strong>2 stage modeling</strong><ul>
<li>I thought that the models don't know which batched texts are good, so I tried 2-stage modeling. In the 1st stage, I calculate the probability that each batched text holds the "answer_text". In the 2nd stage, I tried 2 things: ① multiply the predicted probability into the logits of each batched text; ② filter the top-N batched texts and calculate logits using the model trained only on batched texts which have "answer_text". The local AUC of the 1st stage was 0.964, but both approaches did not work so well.</li></ul></li>
</ul>
<p>One interesting finding is that if the question is the same or similar, the answer_text tends to be the same. I tried to fill the predicted text of the submission file using question similarity (Jaccard and Levenshtein distance) against train.csv. It only contributed about 0.004, but there do exist questions and answer_texts in the test data that are the same as in the train data. <a href="https://www.kaggle.com/guchio3/fill-sub-by-train/notebook" target="_blank">notebook</a></p>
<p>I hope I will achieve a solo gold in the next one ;)</p>
<p><strong><a href="https://github.com/guchio3/kaggle-chaii-2021" target="_blank">code</a></strong><br>
<strong><a href="https://wandb.ai/guchio3/kaggle-chaii-2021/table?workspace=user-guchio3" target="_blank">experiment results</a></strong></p>
<p>Ah, actually my pseudo label is not for test dataset, but for validation dataset. (so it just re-labels the training data to decrease label (=answer_start) noise.)<br>
It didn't change the validation score a lot, but I think it should contribute to variation for the ensemble.</p>
<p>Hi, thank you for your comment!</p>
<p>Yes, I basically used original dataset alone for validation, but I used original one with external datasets (xquad & mlqa) for validation only when I created pseudo labels.</p>
<p>Thank you~ :D</p>
<p>Thank you!</p> |
chaii - Hindi and Tamil Question Answering | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Krish! I didn't really know which technique would work best, except from some intuition by failing in many competitions previously and later on reading about the winning solutions… The more competitions/experiments you run, the better you'll hone your intuition :) Not sure if my music recos will help 😂let me know if you try! Rig = GPU workstation. </p>
<p>Thanks <a href="https://www.kaggle.com/watzisname" target="_blank">@watzisname</a>! I still plan to do some ablation studies, can't really tell the exact impact of various methods, especially that just differentiating seeds already had a big impact on CV/LB. How did you do random cropping? Conceptually, I don't see how it might hurt model's performance…</p>
<p>Thanks <a href="https://www.kaggle.com/jdoesv" target="_blank">@jdoesv</a> it was great competing with you on this challenge, and thanks for bringing MURIL to the forums!! </p>
<p>Thanks Yassine! My main inspiration for augmentations is fastai course/book, not aware of any research/papers, but might look for it now that I have a bit more free time :) </p>
<p>Thanks Michał!! Have you tried listening to Kult while doing a Kaggle comp? If not, maybe that's the problem 😂😂😂</p>
<p>When I started on Kaggle, I didn't understand many top solution descriptions to be honest… If you're getting started, I recommend the fastai course.</p>
<p>Thank you! On Kaggle, it's my 2nd competition where I put so much work, after HPA. Outside Kaggle, it all depends on how big the return on invested effort is…</p>
<p>Thanks to Kaggle and the hosts for this competition! Thanks to everyone in the community I engaged with in the forums and discussions, I learned a lot from you.</p>
<h2>CV Setup</h2>
<p>When I'm writing this summary, I only know my public LB rank, so this paragraph may turn bitter. I decided to fully trust public LB in this competition and didn't even track local CV. The train dataset was small and confirmed to be noisy, while the public LB was bigger and contained 3-way annotations, which meant higher quality. I expected a higher correlation between public and private LB than between my local CV and private LB.</p>
<h2>Architecture</h2>
<p>All my models are standard QA models (AutoModelForQuestionAnswering) and the postprocessing is directly reapplied from the HF example notebook by Sylvain Gugger that I also used in my public EDA. I used XLM-Roberta Large, MURIL Large and Rembert, and blended the predictions from several models per backbone. In the next stage, I used majority voting at word level across the 3 backbones to decide which words will make it to the final prediction.</p>
<p>For diversity, I submitted another ensemble without blending single models, only doing majority voting. It had worse score on public LB, but same on private - both my chosen subs are .787 private. My best private sub is .796. I also have a single model/fold (MURIL) that scored .802 on public LB and .783 on private.</p>
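<p>The write-up doesn't include code for the word-level vote, but a minimal sketch of one way it could work is below, assuming each backbone contributes one predicted answer string per question:</p>
<pre><code>from collections import Counter

def word_level_vote(predictions, min_votes=2):
    # Count how many backbones predicted each word, then keep the words of a
    # reference prediction that enough backbones agree on.
    votes = Counter()
    for pred in predictions:
        votes.update(set(pred.split()))
    kept = [w for w in predictions[0].split() if votes[w] >= min_votes]
    return " ".join(kept) if kept else predictions[0]

print(word_level_vote(["new delhi 1947", "new delhi", "1947 delhi"]))
# -> "new delhi 1947"
</code></pre>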
<h2>Data Recipes</h2>
<p>I started the competition, like most of other participants, with XLM-Roberta Large pretrained by Deepset on English SQUAD and finetuned on chaii data. Later, I started to pretrain my own backbone models (I mean QA pretraining, not MLM). I focused on TyDi dataset, because the host mentioned that chaii data was collected in a similar way. I went through several experiments with different pretraining and finetuning protocols and the key insight was that the performance of pretrained backbone often exceeded its finetuned version. Based on this observation, I decided to drop the 2-stage approach and train my models in 1-stage. However, I wanted to control the order in which I provide different datasets to the model. I call this <strong>data recipes</strong> because it feels like cooking. </p>
<p>Here is an example of a data recipe that I used in my final experiments: </p>
<ol>
<li>Take the English, Bengali and Telugu subset of TyDi dataset and 2/3 of chaii train dataset. Tokenize it with sequence length 256 and remove 90% of negative examples (negative sampling). Add tokenized English SQUAD and Hindi parts of MLQA and XQUAD. Shuffle all together and put aside. </li>
<li>Repeat step 1, but replace English SQUAD with a subset of Natural Questions. Use sequence length 384 for tokenization (progressive resizing, see below). </li>
<li>Concatenate the data collected in steps 1 and 2. </li>
<li>Train. </li>
</ol>
<p>Some more details:</p>
<ul>
<li>Because I moved the data combination/shuffling logic into the dataset, I used sequential sampler and 1 epoch for training. HuggingFace <code>datasets</code> library made this super easy. </li>
<li>I used negative sampling, usually 0.1, for TyDi, chaii and NQ datasets because they came with longer contexts.</li>
<li>I initially experimented with some auto-translated datasets like Tamil SQUAD but they didn't help my performance, so at the end I only used original language datasets and those translated manually (Hindi MLQA/XQUAD). </li>
<li>I tried various languages from TyDi and found that English/Bengali/Telugu worked best.</li>
<li>I wanted to add some more data, so I took Google Natural Questions that contained a short answer and trained a simple Roberta-base classifier to distinguish NQ data from TyDi English dataset. I selected a subset of NQ that was most similar to English TyDi. </li>
<li>Since I was not tracking CV, I initially used all chaii data. Later on, I moved to a 3 fold setup - for diversity and for early stopping. If an interim checkpoint had a much better CV than the final checkpoint, I would usually test it on public LB. </li>
</ul>
<h2>Image-Like Augmentations</h2>
<p>This part is heavily inspired by <strong>fastai</strong> - during one of the Deep Learning courses <a href="https://www.kaggle.com/jhoward" target="_blank">@jhoward</a> talked about his work transfering the pretraining method from Computer Vision to NLP (ULMFiT) and mentioned that there are many other opportunities by following this approach. I focused on the augmentation techniques. </p>
<h3>Random Cropping</h3>
<p>This is a standard augmentation in CV, but I haven't seen it applied to NLP, although it fits perfectly here: we need to chunk a very long text into fixed token length fragments. Rather than doing it once and showing the same examples in each epoch, why not make it a dynamic and random crop? Initially, I implemented a function that took a random crop with padding around the answer text for positive examples, and a random crop for negative examples. Once I moved to the data recipes described above, I realized that the same can be achieved more simply by using a different stride and negative sampling every time I add a dataset to my recipe. </p>
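<p>A minimal sketch of that first implementation (character-level for simplicity; <code>crop_len</code> and <code>pad</code> are illustrative values and the answer is assumed to fit inside one crop):</p>
<pre><code>import random

def random_crop_positive(context, answer_start, answer_text, crop_len=1500, pad=100):
    # Random character window that still contains the answer, with up to
    # `pad` characters of breathing room around it when possible.
    if len(context) <= crop_len:
        return context, answer_start
    answer_end = answer_start + len(answer_text)
    lo = max(0, answer_end + pad - crop_len)
    hi = max(lo, min(answer_start - pad, len(context) - crop_len))
    start = random.randint(lo, hi)
    return context[start:start + crop_len], answer_start - start

def random_crop_negative(context, crop_len=1500):
    # Negative examples just get any random window.
    if len(context) <= crop_len:
        return context
    start = random.randint(0, len(context) - crop_len)
    return context[start:start + crop_len]
</code></pre>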
<h3>Progressive Resizing</h3>
<p>This is another trick that Jeremy teaches in the fastai lectures, applied commonly for images. In case of progressive resizing, I use different sequence lengths during training. E.g. start with 256, then move to 384, then move to 448. </p>
<h3>Cutout</h3>
<p>In computer vision, we sometimes black out patches of image as augmentation. In NLP, we can do the same by replacing some tokens with <code>[MASK]</code>. I saw that <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> used this technique in Tweet Sentiment competition. I used 0-10% cutout in my models. </p>
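<p>A minimal sketch of token cutout with the usual HF fast tokenizer; whether the answer span itself was spared is not stated in the write-up, so the <code>protected</code> argument is an assumption:</p>
<pre><code>import random

def token_cutout(input_ids, mask_token_id, special_ids, rate=0.1, protected=()):
    # Replace roughly `rate` of the tokens with [MASK], never touching
    # special tokens or explicitly protected positions.
    out = list(input_ids)
    for i, tok in enumerate(out):
        if tok in special_ids or i in protected:
            continue
        if random.random() < rate:
            out[i] = mask_token_id
    return out

# usage with e.g. tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large"):
# masked_ids = token_cutout(feature["input_ids"], tokenizer.mask_token_id,
#                           set(tokenizer.all_special_ids), rate=0.1)
</code></pre>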
<h2>Fine-tuning</h2>
<p>I didn't experiment too much with different hyperparameters for finetuning, I usually started with the defaults described in each backbone paper and often stuck with that. I did most of my experiments on the XLM-Roberta backbone. MURIL had the best performance on the public LB (I believe due to tokenization), although the training run sometimes collapsed (a restart usually helped). </p>
<p>Here are my default hyperparameters for each backbone (batch size = 8):</p>
<ul>
<li>XLM-Roberta: gradient accumulation: 4, learning rate: 1e-5, wd: 0.01</li>
<li>MURIL: gradient accumulation: 4, learning rate: 3e-5, wd: 0.01</li>
<li>Rembert: gradient accumulation: 16, learning rate: 1e-5, wd: 0.01</li>
</ul>
<h2>Post-processing</h2>
<p>Other than the standard postprocessing (I didn't modify the default functions from the official HuggingFace QA notebook), there was an opportunity to improve predictions by addressing XLM-Roberta and Rembert tokenization issues. These models tend to merge some punctuation marks with subwords, while annotators often split them. I initially used <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a>'s method of simply stripping punctuation, and in the last day made some modifications similar to the method shared by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a>. I add dots after BC/AD, I add parenthesis if included within the stripped prediction etc. I didn't see an impact in my score by adjusting this, but my position shifted from 3 to 2 based on this postprocessing. This didn't work well on private LB though (-0.004). I didn't do any such postprocessing for MURIL because it handled it well with its tokenizer. </p>
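<p>The exact rules aren't listed in the write-up; a minimal sketch of the stripping-plus-repair idea could look like this:</p>
<pre><code>def clean_prediction(pred: str) -> str:
    # Strip stray punctuation from both ends of the predicted span.
    cleaned = pred.strip(" .,;:!?\"'")
    # Re-add the trailing dot after abbreviations such as BC/AD.
    for abbr in ("BC", "AD"):
        if cleaned.endswith(abbr) and pred.strip().endswith(abbr + "."):
            cleaned += "."
            break
    # Re-add a closing parenthesis if the opening one survived the strip.
    if "(" in cleaned and ")" not in cleaned and pred.strip().endswith(")"):
        cleaned += ")"
    return cleaned
</code></pre>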
<h2>Frameworks</h2>
<p>I went all in on HuggingFace in this challenge and can't imagine doing it another way. Other than <code>AutoModelForQuestionAnswering</code>, I used the <code>HF Trainer</code> initially, and later realized they have a QA version of the trainer, which has EM/F1 measures implemented for validation. I used EM/F1 for early stopping rather than observing the validation loss. Last but not least, HF <code>datasets</code> was a huge enabler for me, its caching works like magic.</p>
<h1>Things that Didn't Work</h1>
<h2>Label Error Corrections</h2>
<p>Quite early in the competition, I tried to correct the noisy labels by doing out of fold predictions, looking at the model errors and replacing the label with prediction if - based on Google Translate - I thought it's better than original. However, this degraded my performance on public LB and I dropped this. I missed the opportunity to correct the answer start, I only tried it in the last 2 days after <a href="https://www.kaggle.com/zacchaeus" target="_blank">@zacchaeus</a> published his corrected dataset. The models trained with it appeared to perform better, but they didn't help my ensemble at the end. </p>
<h2>MT5</h2>
<p>I was quite bullish on MT5, especially when I tried a pretrained checkpoint from HF Hub and it scored .713 without any finetuning. Also, it looks like MT5 XXL is SOTA for this type of challenges with limited data. However, the largest model I was able to run in Kaggle kernel was MT5-Large (is it possible to run a bigger one?). I got MT5-Large to train on Google Colab (thanks to AdaFactor) as I was travelling during this time, but I couldn't surpass the .713 score of the pretrained backbone. </p>
<h2>Jaccard Loss</h2>
<p>I implemented my own version of jaccard loss (Lovasz-Hinge) but it didn't improve my performance, even when combined with CrossEntropy. It's very likely that I made some implementation error, if you'd like to take a look and give me feedback I'd be happy to share that implementation. </p>
<h2>XLNet Head</h2>
<p>I also tried the XLNet head where end-logits are predicted by taking into account start-logits, using beam-search on inference. It didn't work too well for me, could be another implementation error. </p>
<h2>Tokenization</h2>
<p>I didn't venture to tamper with the tokenizers, but I tried to split some punctuation marks before running the tokenizer - this didn't help my score either. </p>
<h1>Other</h1>
<h2>Music</h2>
<p>I listened to live concert recordings of 2 Polish rock bands during my coding sessions. Here are some samples if you want to check them out: </p>
<ul>
<li><a href="https://youtu.be/8XszVw_-z60" target="_blank">https://youtu.be/8XszVw_-z60</a></li>
<li><a href="https://youtu.be/0B5gk0QjHUc" target="_blank">https://youtu.be/0B5gk0QjHUc</a></li>
</ul>
<h2>GPU</h2>
<p>I have a rig with 1 RTX 3090 and a Colab Pro account.</p>
<h2>Break</h2>
<p>I started the competition in the first hours and prepared an EDA, baseline and QA tutorial. Then I took a break and then really focused in the last month of the competition. I used all subs I had available during that time except for 2 that I missed when DST change messed up with my submission plans. </p>
<h1>Links</h1>
<p><a href="https://www.kaggle.com/thedrcat/chaii-ensemble-10postuning" target="_blank">Inference Kernel</a><br>
<a href="https://github.com/kldarek/chaii" target="_blank">Github Repo</a></p> |
chaii - Hindi and Tamil Question Answering | 20th Place Solution; XLM-Rs, xtune, hacky postprocessing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!!<br>
(BTW, your posts in discussion helped me a lot! In addition, your higher rankings in pubLB made me keep motivated!)</p>
<p>Thank you! I feel so happy to hear that!</p>
<p>That was a little different from my feeling (= vague memory…) during the competition. It might depend on settings. Anyway, I believe that SAM is still promising in the next tasks!</p>
<p>Thank you for asking!</p>
<p>I found that my logs of SAM exp were missing… I'm also still interested in SAM.<br>
So, I re-run a simple setting using <a href="https://huggingface.co/deepset/xlm-roberta-large-squad2" target="_blank">https://huggingface.co/deepset/xlm-roberta-large-squad2</a> as the pretrained model and compared (1) baseline, (2) SAM from the beginning, (3) SAM only after 200 updates.<br>
(hyperparameters-> max_seq_length=400, doc_stride=160, epochs=5, train_batch_size * gradient_accumulation_steps=32, AdamW, lr=1.5e-05, weight_decay=0.005, warmup_ratio=0.1, all used the same seed) (fp32 was used because my fp16+SAM seemed to be somehow broken.)</p>
<p>Jaccard comparison on the average of 5-fold CV with early stopping.<br>
(1) Baseline: 0.678<br>
(2) SAMfromBegin: 0.686<br>
(3) SAMfrom200it: 0.677<br>
In the results, (2) SAM seems a little better performance, though less conclusive.</p>
<p>Thank you to Kaggle, the hosts, the discussion forums, and all the rivals!<br>
I really enjoyed my first Kaggle thanks to all of you!</p>
<p>I got 20th place at the private LB (0.755) (although I was 9th at the public LB (0.811)!).<br>
My solution was the ensemble of 9 XLM-RoBERTa-large-based models (from 5 different ways of model construction) and some hacky postprocessing for inference.</p>
<p>Because I haven't evaluated them in detail, each model construction is dirty and may contain many meaningless (or even harmful) tricks and differences. Note that I describe them simply as what I did and what I guess.<br>
The starting point is the famous <a href="https://www.kaggle.com/tkm2261/reproduction-of-0-792-notebook" target="_blank">0.792 notebook</a>s.</p>
<p>(I mimicked the format of <a href="https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering/discussion/287923" target="_blank">1st place Darek's solution</a>.)</p>
<h4>CV Setup</h4>
<p>I simply used sklearn's model_selection.StratifiedKFold with 5 folds and y = data['language'].<br>
It is used for early stopping and model (seed) selection.</p>
<h4>Architecture</h4>
<p>Basically, it is a standard extraction QA BERT model (AutoModelForQuestionAnswering), i.e., encoding by a transformer and predicting the logits for (start_index, end_index) via a linear layer.<br>
The score of each phrase is calculated by start_logit + end_logit. (If both indicate CLS token's position, it represents 'no-answer.')</p>
<p>I tried some variants for enhancing the diversity of models.</p>
<ul>
<li>transformer -> linear (baseline)</li>
<li>transformer -> cat([h_last, h_19, h_15, h_11, h_last_of_CLS]) -> linear -> LeakyReLU -> [k=3 depthwise conv -> linear] x2 with residual connection -> linear (variant A)</li>
<li>transformer -> cat([h_last, h_19, h_15, h_11, h_last_of_CLS]) -> linear -> [new BERTLayer] x 2 -> linear (variant B)</li>
</ul>
<p>where h_{*} indicate *-th layer's output at the token and h_last_of_CLS is the final output at CLS token.</p>
<p>I did it almost just for fun! I expected that they possibly could contribute to the ensemble.</p>
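<p>A rough PyTorch sketch of the variant-B idea (hidden states from several layers concatenated with the CLS vector, then a QA head). The two extra BERT layers are replaced by a single projection for brevity, and all sizes and details are assumptions:</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoConfig, AutoModel

class ConcatLayersQAHead(nn.Module):
    # Concatenate several hidden layers plus the CLS vector (broadcast to every
    # token), project, and predict start/end logits.
    def __init__(self, model_name="xlm-roberta-large", layers=(24, 19, 15, 11)):
        super().__init__()
        config = AutoConfig.from_pretrained(model_name, output_hidden_states=True)
        self.backbone = AutoModel.from_pretrained(model_name, config=config)
        self.layers = layers
        hidden = config.hidden_size
        self.proj = nn.Linear(hidden * (len(layers) + 1), hidden)
        self.qa = nn.Linear(hidden, 2)  # start / end logits

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        hs = out.hidden_states                      # embeddings + 24 layer outputs
        cls = hs[-1][:, :1, :].expand(-1, input_ids.size(1), -1)
        cat = torch.cat([hs[i] for i in self.layers] + [cls], dim=-1)
        logits = self.qa(torch.relu(self.proj(cat)))
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
</code></pre>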
<h4>Data Recipes</h4>
<p>Some external datasets are sometimes used and sometimes not.</p>
<p>Finetuning</p>
<ul>
<li><a href="https://www.kaggle.com/rhtsingh/mlqa-hindi-processed" target="_blank">XQuAD-hindi</a>, <a href="https://www.kaggle.com/rhtsingh/mlqa-hindi-processed" target="_blank">MLQA-hindi</a> (thanks, <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a>)</li>
<li><a href="https://rajpurkar.github.io/SQuAD-explorer/" target="_blank">SQuAD-english</a></li>
<li><a href="https://github.com/facebookresearch/MLQA#translate-train-and-translate-test-data" target="_blank">MLQA-translate-hindi</a> with some filtering like "context length <= 2000"</li>
<li><a href="https://github.com/microsoft/unilm/tree/master/xtune" target="_blank">xtune's training datasets for xquad</a></li>
</ul>
<p>Additional pretraining</p>
<ul>
<li><a href="https://indicnlp.ai4bharat.org/corpora/" target="_blank">IndicCorp in Hindi and Tamil</a>, Wikipedia in <a href="https://dumps.wikimedia.org/hiwiki/latest/" target="_blank">Hindi</a> and <a href="https://dumps.wikimedia.org/tawiki/latest/" target="_blank">Tamil</a></li>
</ul>
<h4>Fine-tuning</h4>
<p>In total, I used five settings for the ensemble.<br>
They may have different settings:</p>
<ul>
<li>Model architectures.</li>
<li>Pretrained models. When additional pretraining is performed, the initialization is from XLM-RoBERTa-Large. In addition, two-stage finetuning is sometimes used.</li>
<li>Finetuning datasets. The chaii dataset + some externals.</li>
<li>Hyperparameters. Maybe, less important. epoch={1, 4}, lr=1.5e-5, weight decay={5e-3, 1e-2}, total_batch_size={8, 16}, max_seq_length={384, 400}, doc_stride={128, 135}</li>
</ul>
<p>([ ] shows scores at the public LB)</p>
<p>(1) [0.774, 0.775]</p>
<ul>
<li>Variant A-arch; using CNN</li>
<li>MLM pretraining with 30% corpus</li>
<li>MLQA, XQuAD</li>
</ul>
<p>(2) [0.778, 0.782]</p>
<ul>
<li>Baseline Arch.</li>
<li>MLM pretraining with 30% corpus</li>
<li>MLQA, XQuAD, MLQA-translated, SQuAD</li>
</ul>
<p>(3) [0.769, 0.770]</p>
<ul>
<li>Baseline arch.</li>
<li>MLM pretraining with only Wikipedia x 4 epochs</li>
<li>MLQA, XQuAD, SQuAD</li>
</ul>
<p>(4) [0.763]</p>
<ul>
<li>Variant B-arch; using attention</li>
<li>MLM pretraining with 100% corpus</li>
<li>1st stage: SQuAD + <a href="https://arxiv.org/abs/2106.08226" target="_blank">consistency regularization</a> (repro. of xtune)</li>
<li>2nd stage: MLQA, XQuAD, SQuAD</li>
</ul>
<p>(5) [0.765, 0.774]</p>
<ul>
<li>Baseline arch.</li>
<li>XLM-RoBERTa</li>
<li>1st stage: xtune's xquad setting 1</li>
<li>2nd stage: xtune's xquad setting 2 without augmentation.</li>
<li>3rd stage: MLQA, XQuAD</li>
<li>(BTW, <a href="https://github.com/microsoft/unilm/tree/master/xtune" target="_blank">xtune</a> (+ finetuning) was very helpful for training my best single model (0.790))</li>
</ul>
<h4>Data Augmentations</h4>
<p>I expected that adding external data and regularizations (dropout, weight decay, early stopping, consistency regularization) would work sufficiently, so I didn't use special data augmentation.<br>
Instead, at each epoch, I sometimes simply used different 'stride' hyperparameters (e.g., +Uniform[-10, 10]) for constructing the training dataset of strided chunks.<br>
This produces (a little) different examples at each epoch.</p>
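<p>A minimal sketch of the jittered-stride idea with a HF fast tokenizer (the values are illustrative, based on the hyperparameters listed elsewhere in this write-up; the rest is standard QA preprocessing):</p>
<pre><code>import random

def tokenize_with_jittered_stride(examples, tokenizer, max_length=400,
                                  base_stride=135, jitter=10):
    # Draw a slightly different stride each call (e.g. once per epoch),
    # so the overlapping chunks are cut at different places.
    stride = base_stride + random.randint(-jitter, jitter)
    return tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=max_length,
        stride=stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
</code></pre>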
<h4>Post-processing</h4>
<p>It uses tricks in the famous 0.792 notebooks.<br>
In addition, I added some tricks of my own after analyzing the errors (in CV evaluation) made by my models.<br>
I guessed that some preferences or tendencies could arise in the annotation process.</p>
<h5>Arabic Numerical Expressions</h5>
<p>I observed some cases where the model prediction and the GT indicate the same numerical expression, but the prediction is written in Hindi (Devanagari) digits while the GT is written in Arabic digits (and both expressions are in the text). This is an ambiguity of answers due to orthographical variants. Although I'm not familiar with Hindi, GT answers seem to be written in Arabic digits more frequently. Therefore, I performed a simple replacement (Hindi -> Arabic) using<br>
<code>{'०':"0", '१':"1", '२':"2", '३':"3", '४':"4", '५':"5", '६':"6", '७':"7", '८':"8", '९':"9"}</code> whenever the model predicts a Hindi number but the same Arabic number also exists in the text.</p>
<p>(Maybe the same as the <a href="https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering/discussion/287917" target="_blank">tkm kh</a>'s processing)</p>
<p>I tried the same trick for Tamil, but it was not effective.</p>
<h5>Earlier Phrases</h5>
<p>It seems that a phrase at earlier positions can be an answer a little more frequently than the others.<br>
I added a simple bias to the score: <code>- (2.0 * start_char_index / len(context))</code>.</p>
<h5>Stride Boundaries</h5>
<p>The huggingface implementation of BERT-based QA splits long text into many subtext chunks (with some stride setup), each of which is shorter than the model's max_seq_length.<br>
I guess that predictions at positions near the boundaries of chunks might be unreliable, because some important surrounding context can be cut off by chunking.<br>
Therefore, I added a simple bias for the score by</p>
<pre><code>bias = 0
if [it is NOT the beginning of the whole text] and [start_index_in_chunk is earlier than 10]:
    bias = -3
elif [it is NOT the end of the whole text] and [end_index_in_chunk is later than max_length-10]:
    bias = -3
</code></pre>
<p>Even if a phrase gets a penalty due to the bias, the phrase may occur in another chunk without penalty, because strided chunks have overlapping tokens.</p>
<h5>Things that Didn't Work (for My Ensembles)</h5>
<ul>
<li>I thought mT5s could be the best models. But I failed to get good performance with them; I couldn't even reproduce the mT5 paper's results on SQuAD.</li>
<li>Label smoothing. It could help but seemed almost meaningless. I also tried something similar to Jaccard-based soft labels, but it was less helpful.</li>
<li>MURIL Large. Many winners of chaii seem to use MURIL too. But, my MURIL was somehow not good.</li>
<li>info-xlm-large. It was very good as a single model. But, finally not helpful in my ensemble…</li>
<li>Tagging the questions by dataset-specific special tokens for discrimination, as used in NMT with <a href="https://aclanthology.org/2020.acl-main.532.pdf" target="_blank">backtranslation</a> or for <a href="https://aclanthology.org/2021.findings-acl.264.pdf" target="_blank">multilingual modeling</a>. It seemed to alleviate the performance change due to dataset selection but seems not very helpful when selecting appropriate external datasets only.</li>
<li>I tried to use SQuAD-translated-tamil and XQA-tamil, but they were not helpful in my final submission.</li>
<li>In NaturalQA, <a href="https://arxiv.org/abs/1901.08634" target="_blank">it</a> is reported that CLS-token scores can be used for calibration.</li>
<li><a href="https://arxiv.org/abs/2110.08529" target="_blank">SAM-based optimization</a>.</li>
<li><a href="https://github.com/microsoft/unilm/tree/master/xtune" target="_blank">xtune</a> was very helpful for training a best single model. But, often less helpful in my ensembles.</li>
</ul>
<p>Thanks!</p> |
chaii - Hindi and Tamil Question Answering | 21 place solution: Trust CV & Differential fusion | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You are welcome! Thank you for reading.😄</p>
<p>Thank kaggle stuff holds this great competition. Thank the great community provides so many fantastic notebooks and discussion. Finally thank my teammate <a href="https://www.kaggle.com/guohey" target="_blank">@guohey</a> and <a href="https://www.kaggle.com/thousandtie" target="_blank">@thousandtie</a> for accompanying me on this unforgettable 10-day India trip.</p>
<p>Our solution mainly follows one theme: trust the CV and fuse models that are as different as possible.<br>
We couldn't find a way to ensemble Rembert and XLMR together, so we decided to use only Rembert in one final submission and XLMR in the other.</p>
<ol>
<li><p>Why did we decide on aggressive fusion of different models?<br>
Because we found that adding data couldn't improve the performance of a single model on either CV or LB.</p></li>
<li><p>Why did we decide to trust CV more than LB?<br>
Actually, it was a bit of a gamble. We couldn't get a better LB score than the 792 submission until the last 2 days. Besides, no matter how I blended my own model with 792, even when I gave my model a small weight, it was worse than 792. I calculated the CV and LB of each model in the 792 submission, finding that fold 0 and fold 1 have poor LB and CV, while the last 3 folds have great CV and great LB. However, without using the poor first 2 folds, its LB score drops greatly. Reading the training notebook of the first 2 folds, I found nothing special. <br>
Taking the above experimental results and analysis into consideration, I would rather believe the 792 submission overfit the LB. So instead of trying to beat the public 792 on LB, I chose to ensemble my models with better CV and prayed for a shake-up on the private LB.<br>
Luckily, this gamble paid off.</p></li>
<li><p>How did we make models different?<br>
We used different seeds and different data. <br>
For example, the 12-fold Rembert submission consists of the following models:</p></li>
</ol>
<ul>
<li>2 folds with <strong>only hindi external data</strong> and <strong>seed 2021</strong> (LB 782)</li>
<li>3 folds with <strong>hindi and tamil external data</strong> which is popular in the community and <strong>seed 42</strong> (LB776)</li>
<li>4 folds with the same data as the above one and <strong>seed 1111</strong> (no LB score; we didn't have enough submissions)</li>
<li>2 folds with the data of the above one <strong>plus a sample of 15k of <a href="https://www.kaggle.com/zzy990106" target="_blank">@zzy990106</a>'s external tamil data</strong> and seed 99.<br>
We tried some other combinations, sampling more external data and replacing some folds of the seed 1111 part with it. But this one played best on LB, so we used it for the final Rembert submission. <br>
We took the same principle for xlmr, though we may have used some/all folds of the public 792 models in our xlmr ensemble; this strategy didn't work for xlmr. (No matter how I blended with 792, it was worse on LB.)</li>
</ul>
<ol>
<li><p>How did we trust our CV?<br>
We trained 5 folds with different data and seeds each time. After we got the weights, we calculated their CV. We only used models with CV better than 0.68 in the ensemble.</p></li>
<li><p>What can be improved in the future?<br>
We couldn't find a way to evaluate the CV of our ensemble submission, because there are different seeds, and each seed only uses 20%-80% of the models.<br>
In fact our best submission has a 0.760 private LB, which is really close to gold. But its public LB is worse than our best 795 Rembert submission, so we didn't choose it. Maybe with a better CV strategy, we would have had a chance to get a gold medal.<br>
If you have any ideas on how to calculate the CV of an ensemble output, please discuss with me!</p></li>
</ol>
<p>Thank you for your reading, feel free to comment and upvote!</p>
<p>Congratulations <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> your another silver medal. It seems like we have almost the same experience and strategy in this game🤝. </p>
<p>Thank you <a href="https://www.kaggle.com/lucamassaron" target="_blank">@lucamassaron</a>. I only validated on the training dataset (kept fixed); no external data was used in validation. </p>
<p>You are welcome😁</p> |
chaii - Hindi and Tamil Question Answering | 22nd Place Solution: Model Architecture | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for hosting an interesting and challenging challenging NLP competition. This was my first serious NLP competition and really enjoyed every bit of it and learnt a lot on the way.</p>
<p>Firstly, a couple of things made this very very challenging competition</p>
<ol>
<li>Data annotation issues: There were multiple data annotation issues in the train data and although it seemed easy to get frustrated, the challenge of the competition was to handle this kind of real world data and annotation issues that exist all the time.</li>
<li>Lack of CV/LB correlation: Since there was no good correlation, we didn't really know what to focus on. This led to efforts being distributed all over the place. I trained XLM-Roberta Large, MURIL Large, tried multiple loss functions and different kinds of heads, and only after the results were announced did we know that Muril and the Mean/Max head really worked (more on this below)</li>
</ol>
<h1>Architecture</h1>
<ol>
<li>XLM-Roberta: I used the Trainer models from Hugging Face</li>
<li>Muril-Large: These had a really good CV/LB correlation, but the training was really unstable. I couldn't get it to work reliably across a number of experiments. This is something I will seek to improve after the competition.</li>
<li>XLM-Roberta with Mean/Max Head: This worked like a charm in terms of CV/LB correlation, and unfortunately I realised this only in the last week. Whenever CV increased, the LB score jumped, and I would really like to understand why this particular "head" works so well. Credit to <a href="https://www.kaggle.com/rashmibanthia" target="_blank">@rashmibanthia</a> for this idea <a href="https://www.kaggle.com/rashmibanthia/xlm-roberta-jaccard-validation-mean-max-head/notebook" target="_blank">here</a></li>
</ol>
<p>Muril and Mean/Max really proved once again that architecting and using great models is really important for a good outcome.</p>
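<p>The linked notebook defines the actual head; one common reading of a mean/max head, sketched below under that assumption, pools the last few hidden layers per token with a mean and a max and concatenates them before the start/end projection:</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoConfig, AutoModel

class MeanMaxQAHead(nn.Module):
    # Stack the last few hidden layers, take the per-token mean and max across
    # layers, concatenate, then predict start/end logits.
    def __init__(self, model_name="xlm-roberta-large", n_last=4):
        super().__init__()
        config = AutoConfig.from_pretrained(model_name, output_hidden_states=True)
        self.backbone = AutoModel.from_pretrained(model_name, config=config)
        self.n_last = n_last
        self.qa = nn.Linear(config.hidden_size * 2, 2)

    def forward(self, input_ids, attention_mask):
        hs = self.backbone(input_ids=input_ids,
                           attention_mask=attention_mask).hidden_states
        stack = torch.stack(hs[-self.n_last:], dim=0)        # (n_last, B, L, H)
        pooled = torch.cat([stack.mean(dim=0), stack.max(dim=0).values], dim=-1)
        logits = self.qa(pooled)                              # (B, L, 2)
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
</code></pre>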
<h1>Data</h1>
<ol>
<li>Augmentation: I tried random crops and was excited about this idea, but the CV didn't really improve. I should have tried checking the public LB score of this technique, but I think it wouldn't have improved. The reason this probably didn't work is that the model is too powerful to be fooled by this level of augmentation. </li>
<li>Auto-Translated Data: I used Auto-translated data, but this really hurt performance. This makes sense since the translations are not really of great quality.</li>
<li>TyDi Dataset: I experimented with all the languages in the TyDi dataset and found that Telugu performed the best, followed by English and Bengali. Other languages didn't really do well.</li>
</ol>
<h1>Post-processing</h1>
<ol>
<li>I fine tune the post-processing shared by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> to remove other special characters like ( and ) and also replace k.m with k.m. etc. This gave a boost of 0.002</li>
</ol>
<h1>Things that didn't work</h1>
<ol>
<li>Data Augmentation using auto-translated data: As described above, this didn't work well</li>
<li>Using Median instead of Mean: I had spent sometime in the Google Brain - Ventilator Pressure Prediction and there was an idea by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> that really worked well when the predictions were far removed from mean. I tried the same idea of using Median here, but that dropped score by 0.005 on LB. Upon further evaluation, it did look like the eventual logits were not as far apart as I thought they were.</li>
<li>End_Logits modification: During evaluation of model predictions, I realised that the start_logits were working well, but the model was placing a low emphasis on end_logits. So I tried fine tuning it with <code>np.log100(np.sqrt(end_logits))</code>. This did work well on public LB by about 0.002, but I eventually dropped it since I felt it wouldn't do well on the Private LB (and that did happen).</li>
</ol>
<h1>Things that I wanted to try but didn't get to</h1>
<ol>
<li>Base Tuning on Auto-translated data and then fine-tune on chaii+mlqa+tydi data: I believe this has real potential, but didn't have the time to try </li>
<li>Ensemble Voting: I thought this had potential, but didn't get to it.</li>
</ol>
<p>Finally, thanks to everyone who contributed to make this competition interesting.</p> |
chaii - Hindi and Tamil Question Answering | 24th place solution & attempt – only with XLM-RoBERTa🍵 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to Kaggle and host for the opportunity to have a fun competition.</p>
<p>To be honest, I did not understand how to ensemble models with different tokenizers like XLM-Roberta and Muril. Therefore, I used only XLM-Roberta to create the models, and I think this is the reason why my score did not improve enough. Now I am impressed with other people's solutions and how they ensemble like this.</p>
<ol>
<li>At the beginning of the competition, I first worked on data augmentation. I tried to create Hindi and Tamil versions of SQUAD using Google Translate, as many people had done by then. However, instead of increasing my score, it decreased it significantly. I threw it away immediately.</li>
<li>I tried to make separate Hindi and Tamil models. This also lowered my score significantly. <strong>Taking this result into account, I guessed that adding data from other similar languages would increase the score.</strong></li>
<li><strong>I decided to add TyDi-QA's Bengali and Telugu to the study. I think my score went up reasonably well. (CV and LB were both unstable, so it's hard to show exactly how much they improved.)</strong></li>
<li>I cleaned the context of the training data. First, I removed useless line breaks and spaces, but it had no effect on the score. Next, I automatically removed strings that looked like tags in the Wikipedia source code using regular expressions. This also did not cause any significant change in the score.</li>
<li><strong>I created multiple models and ensembled them by changing the max_length (the unit of context division) in various ways, such as 320, 384, and 448. This was inspired by my previous experience in Melanoma competition where we were able to increase our score by creating models with different image sizes. As a result, my score increased significantly.</strong></li>
<li>When I changed the doc_stride (the length of the overlap when splitting the context) in various ways, I noticed that the score fluctuated greatly. In my experiments with CV, I got relatively high scores in the range of doc_stride=150~190 for max_length=512. In the end, I chose doc_stride=176,180, but the scores were very unstable and it was difficult to decide on one doc_stride. In a way, I thought this competition had a "lottery by doc_stride" aspect to it (which it didn't, actually).</li>
</ol>
<p>I wanted to find out what the teams "tkm kh" and "zzy + Zacchaeus", who dominated the top positions from the beginning of this competition, were aware of, and I wanted to find out too. I learned a lot.<br>
<a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>, congratulations on your win! I can learn a lot from your solution.<br>
<a href="https://www.kaggle.com/tkm2261" target="_blank">@tkm2261</a>, <a href="https://www.kaggle.com/kanbehmw" target="_blank">@kanbehmw</a>, <a href="https://www.kaggle.com/zzy990106" target="_blank">@zzy990106</a>, <a href="https://www.kaggle.com/zacchaeus" target="_blank">@zacchaeus</a>, and all the other participants deserve my respect🍵</p> |
chaii - Hindi and Tamil Question Answering | 28th Place Solution(Contains additional reading material) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for your comment!</p>
<p>Thanks! <br>
Congratulations too!</p>
<p>Congratulations to the winners in the medal place! Also, thanks for everyone contributing to this competition!</p>
<p>Then, I'm happy that I got my second silver medal in this competition, which is my first time participating in a multilingual language competition (maybe I have some chance to get a gold one 😆). My effective solution mostly comes from new translated data, the ensembling method, and diverse models with different training data.</p>
<ul>
<li>translated data :<br>
<a href="https://www.kaggle.com/vanle73/nq-translated-data" target="_blank">https://www.kaggle.com/vanle73/nq-translated-data</a><br>
It contains two versions of the data with different numbers of samples. I translated with the Google API before truncating the spans, keeping the maximum length under 5000.</li>
<li>ensembling method</li>
</ul>
<ol>
<li><p>logits average (like almost everyone do)</p></li>
<li><p>logits sum vote:<br>
This method is designed for different pretrained models with different tokenizers. I choose the top 3 distinct predicted answers for each sample, keeping their span scores, then sum the span scores across the different models, and finally take the top-ranked answer as the final result (a sketch appears right after this list). </p>
<p>The two methods have similar scores on the private/public LB.</p></li>
</ol>
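<p>A minimal sketch of the logits sum vote described above, assuming each model contributes its top-3 candidate answers together with their span scores:</p>
<pre><code>from collections import defaultdict

def score_vote(per_model_candidates):
    # per_model_candidates: one list per model of (answer_text, span_score) pairs.
    # Scores are summed per answer string and the best total wins.
    totals = defaultdict(float)
    for candidates in per_model_candidates:
        for text, score in candidates:
            totals[text] += score
    return max(totals, key=totals.get)

# e.g. three models with different tokenizers:
print(score_vote([[("1947", 9.1), ("15 august 1947", 7.4), ("1950", 2.0)],
                  [("1947", 8.5), ("1950", 3.3), ("august", 1.1)],
                  [("15 august 1947", 6.9), ("1947", 6.0), ("1950", 1.5)]]))
# -> 1947
</code></pre>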
<ul>
<li>diverse models with different training data<br>
At the beginning, I regarded this competition as a weakly supervised problem, so I tried to denoise the data using the HardEM algorithm iteratively. It actually brought an obvious improvement relative to just using the original data, but was still below the baseline. Then I tried negative sampling to filter some samples; it still gave some improvement, but a low score. Then I found this competition is a "data competition" after I added the xquad/mlqa data. <br>
So I just translated the similar data, NQ, which I had worked with two years ago. It actually brought more benefit. My final ensemble also contains all the models mentioned above, for diversity.</li>
</ul>
<p>Besides, I also tried some other methods without getting a better result, because I don't have my own training GPU and couldn't do enough experiments (my teammates do 😨). So I list them here for your reference, hoping they are helpful:</p>
<ul>
<li><p>Pretained model :<br>
<a href="https://indicnlp.ai4bharat.org/indic-bert/" target="_blank">indic-bert</a><br>
<a href="https://huggingface.co/mrm8488/HindiBERTa" target="_blank">hindiBert</a><br>
<a href="https://huggingface.co/flax-community/roberta-hindi" target="_blank">roberta-hindi</a><br>
<a href="https://github.com/microsoft/unilm/tree/master/infoxlm" target="_blank">info-xlm</a><br>
<a href="https://github.com/CZWin32768/XLM-Align" target="_blank">xlm-align</a></p></li>
<li><p>Different model structure:<br>
<a href="https://github.com/yuwfan/FILTER" target="_blank">FILTER</a><br>
VECO: Variable and Flexible Cross-lingual Pre-training for Language Understanding and Generation</p></li>
<li><p>Ensembleing:<br>
<a href="https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1194/reports/default/15848021.pdf" target="_blank">BERT for Question Answering on SQuAD 2.0</a><br>
<a href="https://www.linkedin.com/pulse/stacking-ensemble-squad20-mohamed-el-geish/" target="_blank">Gestalt: a Stacking Ensemble for SQuAD2.0</a><br>
<a href="https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1194/reports/default/15775971.pdf" target="_blank">Applying Ensembling Methods to BERT to Boost Model Performance</a><br>
<a href="https://aclanthology.org/2021.dialdoc-1.9.pdf" target="_blank">Ensemble ALBERT and RoBERTa for Span Prediction in Question Answering</a></p></li>
<li><p>offline translated model:<br>
Hi2en:<br>
<a href="https://huggingface.co/Helsinki-NLP/opus-mt-hi-en" target="_blank">https://huggingface.co/Helsinki-NLP/opus-mt-hi-en</a><br>
En2hi:<br>
<a href="https://huggingface.co/Helsinki-NLP/opus-mt-en-hi" target="_blank">https://huggingface.co/Helsinki-NLP/opus-mt-en-hi</a><br>
Tmail2en:<br>
<a href="https://huggingface.co/Helsinki-NLP/opus-mt-dra-en" target="_blank">https://huggingface.co/Helsinki-NLP/opus-mt-dra-en</a><br>
En2tmail:<br>
<a href="https://huggingface.co/Helsinki-NLP/opus-mt-en-dra" target="_blank">https://huggingface.co/Helsinki-NLP/opus-mt-en-dra</a><br>
mBART-50:<br>
<a href="https://huggingface.co/facebook/mbart-large-50" target="_blank">https://huggingface.co/facebook/mbart-large-50</a></p></li>
<li><p>weakly supervised method:<br>
MML& HardEM:<br>
A Discrete Hard EM Approach for Weakly Supervised Question Answering<br>
<a href="https://doi.org/10.18653/v1/D19-1284" target="_blank">https://doi.org/10.18653/v1/D19-1284</a>. <br>
Mutual Information Maximization Approach:<br>
A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering<br>
<a href="https://arxiv.org/abs/2106.07174" target="_blank">https://arxiv.org/abs/2106.07174</a></p></li>
</ul>
<p>Finally, thank <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> for your notebooks( Extremely agonizing text tokenizing processing🙄 ! ).</p>
<p>Update:</p>
<p>Score vote may be a better ensembling method for the private LB in this competition! My best submissions also come from it, but I hadn't chosen them 😂<br>
<a href="https://postimg.cc/Y4qWR1PC" target="_blank"><img src="https://i.postimg.cc/xdL305yM/Wechat-IMG1495.png" alt="Wechat-IMG1495.png"></a></p> |
chaii - Hindi and Tamil Question Answering | 36th Place - Trust Your CV - Optimize PP | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Adldotori </p>
<p>Logits are <code>log odds</code> (like "there is log(3 to 1) chance this horse will win the race"). And <code>sigmoid(logits)</code> are <code>probabilities</code> (like "there is 75% chance this horse will win"). After we convert to <code>probabilties</code> then if <code>start</code> has for example probability <code>0.8</code> and <code>end</code> has probability <code>0.6</code>. Then the probability that this combination of <code>start</code> and <code>end</code> is the correct pair is <code>0.8 * 0.6 = 0.48</code> by probability product rule. There's a 48% chance. To me, the product of probabilities is more meaningful (and interpretable) <code>score</code> than adding <code>logits</code> (which i'm not sure what adding <code>logits</code> means).</p>
<p>Next i give <code>start</code> a weight of <code>1.2</code> because i think it is more important to have correct <code>start</code> than correct <code>end</code>. And this was verified by evaluating CV scores.</p>
<p>(Note <code>expit</code> above is <code>sigmoid</code>)</p>
<p>What is <code>TYDI QA Dataset</code>?</p>
<p>After reading top solutions it appears that using <code>Muril-Large</code> was one of the magics! What was <code>Muril-Large</code> CV score? (and perhaps using <code>Rembert</code> was helpful too).</p>
<p>On first day, i make 5 submissions using the five weights below and get respective LB scores:</p>
<ul>
<li>2,1,1,1,1 - LB 0.789</li>
<li>1,2,1,1,1 - LB 0.787</li>
<li>1,1,2,1,1 - LB 0.786</li>
<li>1,1,1,2,1 - LB 0.789</li>
<li>1,1,1,1,2 - LB 0.794</li>
</ul>
<p>From above, i see that the strongest LB fold is fold 5 and the weakest LB folds are folds 2,3. So on second day I try to strengthen fold 5:</p>
<ul>
<li>1,1,1,1,3 - LB 0.791</li>
</ul>
<p>Next, I try to weaken folds 2,3 (and decrease fold 5 to keep in correct proportion):</p>
<ul>
<li>1, 0.85, 0.85, 1, 1.7 - LB 0.795</li>
</ul>
<p>Note that this is a "trust your LB" optimization and works in some competitions, but didn't work in this competition.</p>
<p>Thanks Yam</p>
<p>Thanks. Congratulations to you and your team for maintaining your great result on both public and private LB</p>
<p>Ah yes, that makes sense because the description page says that our competition data was prepared the same way as TydiQA dataset. So that external dataset should help our models learn the format of the data (even if it is different languages).</p>
<p>Thanks. Are you asking how I found the optimal LB weights? I explain the procedure in one of the other comments.</p>
<p><a href="https://www.kaggle.com/kishalmandal" target="_blank">@kishalmandal</a> In my early days, i trusted too many LB models and got shake down a lot. Now in my 2nd and 3rd year on Kaggle, i always build a "trust your cv" model. And in most competitions this model outperforms my "trust your LB" model.</p>
<p>Thanks for your wonderful notebooks. This was a difficult competition to understand the data, understand external datasets, understand CV-LB gap, and other stuff. Your public notebooks helped everyone.</p>
<p>Thank you Kaggle and host for a fun and interesting competition.</p>
<p>Thank you <a href="https://www.kaggle.com/kishalmandal" target="_blank">@kishalmandal</a> for the wonderful public notebooks. I joined a few days ago and immediately studied Kishal's top scoring public notebooks.</p>
<h1>Final Submission 1 - Best LB</h1>
<p>My first final submission was my maximum <code>public LB of 0.795</code>. I took Kishal's LB 0.792 public notebook <a href="https://www.kaggle.com/kishalmandal/5-folds-infer-combined-model-0-792" target="_blank">here</a> and changed the weights to </p>
<pre><code>start_logits = (start_logits1 + 0.85*start_logits2 + 0.85*start_logits3
+ start_logits4+ 1.7*start_logits5 )/5.4
end_logits = (end_logits1 + 0.85*end_logits2 + 0.85*end_logits3
+ end_logits4 + 1.7*end_logits5)/5.4
</code></pre>
<p>This achieved public LB 0.795 which was 60th place on public LB. But when i calculated the local CV score, it was only <code>CV 0.677</code>. This was very concerning. (And as it turns out, this submission had private LB 0.718 and had 728th place 😮 so it was overfitting public LB)</p>
<h1>Final Submission 2 - Best CV</h1>
<p>For my second final submission, I downloaded <a href="https://www.kaggle.com/kishalmandal" target="_blank">@kishalmandal</a> train notebooks <a href="https://www.kaggle.com/kishalmandal/chaii-fit-2-epochs-mlqa-xquad-chaii/" target="_blank">here</a> and <a href="https://www.kaggle.com/kishalmandal/chaii-fit-7-epochs-extra-tamil-data/" target="_blank">here</a>. I optimized the hyperparameters, learning rates, and learning schedules and achieved local <code>CV 0.697</code>.</p>
<h1>Modify Boost Post Process +0.004!</h1>
<p>Next, I adjusted the popular public post process. I changed the first line which uses logits to the second line which uses the Bayesian product of probabilities (and gives start position a slight emphasis over end position).</p>
<pre><code>"score": start_logits[start_index] + end_logits[end_index]
from scipy.special import expit #THIS IS SIGMOID FUNCTION
"score": expit(1.2*start_logits[start_index]) \
* expit(end_logits[end_index])
</code></pre>
<p>This boosted CV by +0.003, public LB by +0.003, and private LB by +0.004! </p>
<h1>Best Final Submission</h1>
<p>My best final submission turned out to be my second submission which was my best CV and not my best LB. This model had CV 0.700, Public LB 0.784, and Private LB 0.744. Once again, "trust you CV" was the best strategy!</p>
<h1>UPDATE - Post Comp Reflection</h1>
<p>My solution is only XLM-Roberta. After reading top solutions, it appears that ensemble with <code>Muril-Large</code> (and maybe <code>Rembert</code>) was the secret sauce to achieving a higher LB score! And TydiQA external dataset.</p> |
chaii - Hindi and Tamil Question Answering | 37th Place Solution [Sad MT5 Vibes] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'd like to thank Google research and Kaggle for hosting this competition through this competition I have gained a lot of experience and the field of NLP. this was my first NLP competition on Kaggle and I'm glad to have scored a silver medal which would tentatively make me a Kaggle competitions expert. 😃</p>
<h2>Approach 1 : Question answering</h2>
<p>I trained the following models, namely XLM-Roberta-large, MURIL, REMBERT, and BERT trained on XQUAD. All models here have been trained for 5 folds with extra data: MLQA, Hindi XQuAD, and Tamil-translated SQUAD.</p>
<table>
<thead>
<tr>
<th>Model Name</th>
<th>H & T Split</th>
<th>Public LB Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>XLM Roberta Large</td>
<td>No</td>
<td>0.771</td>
</tr>
<tr>
<td>XLM Roberta Large</td>
<td>Yes</td>
<td>0.754</td>
</tr>
<tr>
<td>MURIL</td>
<td>Yes</td>
<td>0.738</td>
</tr>
<tr>
<td>Rembert</td>
<td>No</td>
<td>0.788</td>
</tr>
<tr>
<td>BERT</td>
<td>No</td>
<td>0.642</td>
</tr>
<tr>
<td>InfoXLM</td>
<td>No</td>
<td>0.714</td>
</tr>
</tbody>
</table>
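<p>For the rows marked "Yes" under Hindi & Tamil Split, a separate checkpoint handles each language at inference time. The routing below is a minimal sketch based on Unicode script detection; it is an illustration of the idea, not code taken from the linked notebooks.</p>
<pre><code>def is_tamil(text):
    # Tamil block is U+0B80-U+0BFF; Devanagari (Hindi) is U+0900-U+097F
    return any('\u0b80' <= ch <= '\u0bff' for ch in text)

def predict(question, context, hindi_model, tamil_model):
    # each model can be any QA callable, e.g. pipeline("question-answering")
    model = tamil_model if is_tamil(question) else hindi_model
    return model(question=question, context=context)
</code></pre>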
<p><em>My final model was a single five-fold ensemble of RemBERT trained on the extra data available from MLQA, XQuAD, and Tamil-translated SQuAD.</em></p>
<p>I would have tried ensembling models, but averaging scores did not seem to work, and since I only started seriously working on the competition in the last week, I did not have time to figure out a voting pipeline as many other solutions did.</p>
<h2>Approach 2: Seq2Seq (Incomplete) 😭</h2>
<p>The second approach I attempted was building a text-to-text model using the mT5-base transformer. This approach showed promise, as it was the only one in which all the correct letters were predicted, but I faced the unique and odd problem that none of the spaces between words were predicted. Hence I could not properly post-process this model's outputs into a proper submission. I would be really grateful if someone from the Kaggle community could help me out with this problem.</p>
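<p>For reference, the text-to-text formulation looks roughly like the sketch below. This is a hedged reconstruction; the prompt format, generation settings, and direct use of <code>google/mt5-base</code> are illustrative assumptions rather than the exact training notebook.</p>
<pre><code>from transformers import MT5ForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")

def generate_answer(question, context, max_input_len=1024, max_answer_len=64):
    prompt = f"question: {question} context: {context}"
    inputs = tokenizer(prompt, truncation=True, max_length=max_input_len,
                       return_tensors="pt")
    out = model.generate(**inputs, max_length=max_answer_len, num_beams=4)
    # skip_special_tokens drops pad/eos tokens; the missing-space issue
    # described above would show up in this decoded string
    return tokenizer.decode(out[0], skip_special_tokens=True)
</code></pre>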
<p>The links to my notebooks are as follows:</p>
<ul>
<li>Seq2Seq (mT5) Training Notebook : <a href="https://www.kaggle.com/aishikai/chaii-seq2seq-mt5-fit" target="_blank">Click here</a></li>
<li>RemBERT Training Notebook : <a href="https://www.kaggle.com/aishikai/chaii-rembert-fit" target="_blank">Click Here</a></li>
<li>Inference Notebook : <a href="https://www.kaggle.com/aishikai/chaii-inference" target="_blank">Click Here</a></li>
<li>H&T Split Inference Notebook : <a href="https://www.kaggle.com/aishikai/chaii-inference-ht-split" target="_blank">Click Here</a></li>
</ul> |
chaii - Hindi and Tamil Question Answering | 3rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: chaii - Hindi and Tamil Question Answering
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You can read about MuRIL-Large here: <a href="https://huggingface.co/google/muril-large-cased" target="_blank">https://huggingface.co/google/muril-large-cased</a></p>
<p><strong>Muril-Large + Magic</strong></p>
<p>First of all, I would like to thank Kaggle and the competition host for hosting this challenge.<br>
Third place came as a surprise to me and I was not expecting it at all, since my public LB score was not high.<br>
MuRIL is all I have. I had found that XLM-R works better when it is first fine-tuned on the SQuAD 2.0 dataset, so I tried the same with MuRIL-Large.</p>
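<p>The idea, sketched below with illustrative paths, is a two-stage fine-tune: SQuAD 2.0 first, then the chaii data (the actual script and checkpoint are linked below).</p>
<pre><code>from transformers import AutoModelForQuestionAnswering

# stage 1: fine-tune MuRIL-Large on SQuAD 2.0, then save the checkpoint
model = AutoModelForQuestionAnswering.from_pretrained("google/muril-large-cased")
# ... run the SQuAD 2.0 fine-tuning loop here ...
model.save_pretrained("muril-large-squad-v2")

# stage 2: start the chaii fine-tuning from that checkpoint, not the raw weights
model = AutoModelForQuestionAnswering.from_pretrained("muril-large-squad-v2")
# ... fine-tune on chaii + MLQA + XQUAD folds ...
</code></pre>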
<p><strong>Data</strong></p>
<ul>
<li>competition data, MLQA, XQUAD, SQUAD v2</li>
</ul>
<p><strong>Script to finetune muril on squad 2.0:</strong></p>
<ul>
<li><a href="https://www.kaggle.com/nguyenduongthanh/fine-tune-muril-squad-v2" target="_blank">https://www.kaggle.com/nguyenduongthanh/fine-tune-muril-squad-v2</a></li>
</ul>
<p><strong>Muril-Large-Squad-v2 Checkpoint</strong></p>
<ul>
<li><a href="https://www.kaggle.com/nguyenduongthanh/muril-large-squad-v2" target="_blank">https://www.kaggle.com/nguyenduongthanh/muril-large-squad-v2</a></li>
</ul>
<p><strong>Fine-tuning:</strong><br>
I fine-tune for just 2 epochs on each fold.</p>
<ul>
<li>MuRIL-Large, 10 folds: max_length=384, doc_stride=128, lr=1e-5, gradient accumulation=1 (a minimal sketch of this setup follows the list)</li>
</ul>
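<p>A minimal sketch of how those settings map onto a standard HuggingFace setup. Batch size and the other unstated arguments are assumptions, and feature preparation with max_length=384 and doc_stride=128 is assumed to follow the usual public baseline.</p>
<pre><code>from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          TrainingArguments)

CKPT = "muril-large-squad-v2"   # the SQuAD 2.0 checkpoint from above
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForQuestionAnswering.from_pretrained(CKPT)

args = TrainingArguments(
    output_dir="muril-large-chaii-fold0",
    num_train_epochs=2,               # 2 epochs per fold
    learning_rate=1e-5,
    gradient_accumulation_steps=1,
    per_device_train_batch_size=4,    # assumption: not stated in the write-up
    fp16=True,                        # assumption
    save_strategy="epoch",
)
</code></pre>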
<p><strong>Ensemble</strong></p>
<ul>
<li>Simple average over the 10 folds (a minimal logit-averaging sketch follows)</li>
</ul>
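<p>Averaging the start/end logits across the fold checkpoints is the common way to implement this; the sketch below uses illustrative variable names rather than the notebook's.</p>
<pre><code>import numpy as np

def ensemble_logits(per_fold_outputs):
    # per_fold_outputs: list of (start_logits, end_logits) arrays, one pair
    # per fold, each of shape (n_features, seq_len); return element-wise means
    start = np.mean([s for s, _ in per_fold_outputs], axis=0)
    end = np.mean([e for _, e in per_fold_outputs], axis=0)
    return start, end
</code></pre>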
<p><strong>Notebook</strong></p>
<ul>
<li>Train: <a href="https://www.kaggle.com/nguyenduongthanh/simple-train" target="_blank">https://www.kaggle.com/nguyenduongthanh/simple-train</a></li>
<li>Inference : <a href="https://www.kaggle.com/nguyenduongthanh/simple-10-folds-of-muril-large" target="_blank">https://www.kaggle.com/nguyenduongthanh/simple-10-folds-of-muril-large</a></li>
</ul>