BirdCLEF 2023
top 7th solution - `sumix` augmentation did all the work
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h4>What's important in <a href="https://www.kaggle.com/code/loonypenguin/ensemble-openvino-models-inference-notebook" target="_blank">our best inference notebook</a>?</h4> <ul> <li>Ensemble of 19 models</li> <li>Ensemble of <code>efficientnet_b2</code> from <code>torchvision</code> and <code>rexnet150</code> from <code>timm</code></li> <li>Speedup with <code>openvino</code></li> <li>For submission we trained all models with knowledge distillation on the full dataset (the <code>rkl</code> prefix is the indicator)</li> </ul> <p>Five of our 19 models, those with the <code>rkl-rx-wdfbn-rx-ema-200-full</code> prefix, achieve 0.7471 on the private test set and 0.83194 on the public test set (<code>rx</code> = <code>rexnet150</code>; these are <code>rexnet150</code> models distilled from previously trained <code>rexnet150</code> models)</p> <h4>Custom Augmentations</h4> <p>Imagine you're at a party right now, surrounded by talking people. It's not hard for you to pay attention to a certain group to hear their conversation, and you can freely change your target. So the task of picking a specific speaker out of audio is natural. What can we say about a clip in which one person cries and another laughs? That clip contains two actions, laughing and crying, and the same holds for birds. With this consideration we came up with the <code>sumup</code> idea:</p> <pre><code>def sumup(waves: torch.Tensor, labels: torch.Tensor):
    batch_size = len(labels)
    perm = torch.randperm(batch_size)
    waves = waves + waves[perm]
    return {
        "waves": waves,
        "labels": torch.clip(labels + labels[perm], min=0, max=1),
    }
</code></pre> <h6>Boosting score</h6> <p>What if you have a clip where one bird is louder than another? The clip still contains two different birds, but if the signal from the second bird is tiny, it's reasonable to assume that the probability of the second bird is small or even zero. </p> <pre><code>def sumix(waves: torch.Tensor, labels: torch.Tensor, max_percent: float = 1.0, min_percent: float = 0.3):
    batch_size = len(labels)
    perm = torch.randperm(batch_size)
    coeffs_1 = torch.rand(batch_size, device=waves.device).view(-1, 1) * (
        max_percent - min_percent
    ) + min_percent
    coeffs_2 = torch.rand(batch_size, device=waves.device).view(-1, 1) * (
        max_percent - min_percent
    ) + min_percent
    label_coeffs_1 = torch.where(coeffs_1 &gt;= 0.5, 1.0, 1.0 - 2 * (0.5 - coeffs_1))
    label_coeffs_2 = torch.where(coeffs_2 &gt;= 0.5, 1.0, 1.0 - 2 * (0.5 - coeffs_2))
    labels = label_coeffs_1 * labels + label_coeffs_2 * labels[perm]
    waves = coeffs_1 * waves + coeffs_2 * waves[perm]
    return {
        "waves": waves,
        "labels": torch.clip(labels, 0, 1),
    }
</code></pre> <p>We propose yet another customization of data mixing, specific to the audio domain: the <code>sumix</code> augmentation. As you can see, we used a linear slope in our experiments, but it's an open question which slope for the labels is better to use; maybe with <code>min_percent = 0.3</code> no slope is needed at all. This augmentation is applicable to any audio classification task. <code>sumix</code> is better than <code>sumup</code> because it also affects loudness, i.e. it implies another audio augmentation. </p> <p>When we checked solutions from past competitions, we tried to add background noise to our pipeline, but noise didn't show any improvement. 
Our thoughts: incorporating new background noises into the audio is already covered by <code>sumix</code> or <code>sumup</code>. For Gaussian/colored noises we have a similar explanation for why those augmentations didn't work: the competition data is not pristine and consists of noisy samples, so <code>sumix</code> and <code>sumup</code> propagate noise and diversify it by themselves.</p> <h4>Training pipeline</h4> <ul> <li><code>concatmix</code> with probability 0.5 - <a href="https://github.com/dazzle-me/birdclef-2022-3rd-place-solution/blob/main/experiments/new-metrics-exp-17/fold_4/saved_src/models/layers/concat_mix.py" target="_blank">implementation</a></li> <li><code>sumix(min_percent=0.3, max_percent=1.0)</code> with probability 1</li> <li><code>mixup</code> on mel-spectrograms with <code>Beta(1.5, 1.5)</code> with probability 1</li> <li><code>cutmix</code> on mel-spectrograms with <code>Beta(1.5, 1.5)</code> with probability 0.5</li> <li><code>AdamW</code> with constant <code>LR</code> 1e-3 and <code>weight_decay</code> 1e-2 for all models</li> <li><code>binary_cross_entropy</code> for every class; labels are primary labels plus secondary labels</li> <li>A random 5 contiguous seconds of audio at training, the first 5 seconds at validation</li> <li>We turned off <code>weight_decay</code> for all normalization layers</li> <li><code>EMA</code> with <code>decay</code> 0.999</li> <li>Almost total correlation between LB and our CV; an ensemble of 20 models gathered as the best on CV achieves 0.75021 on the private test set and 0.83328 on the public test set; this ensemble is our top-3 submission on the public test set</li> </ul> <h4>Knowledge distillation</h4> <p>After training a model, we use it as a teacher for another model with the same architecture, augmentations, <code>AdamW</code>, <code>EMA</code>, etc., except the <code>loss</code>. At the second stage, with distillation, we use $$L = 0.33 \cdot BceFromTargets + 0.34 \cdot KL(probs_{student} \,\|\, probs_{teacher}) + 0.33 \cdot KL(probs_{teacher} \,\|\, probs_{student})$$</p> <h4>Models</h4> <ul> <li><code>rexnet150</code> with <code>dropout</code> 0.3 and <code>drop_path</code> 0.2</li> <li><code>rexnet150-time</code> - the same <code>rexnet150</code> but with an attention head, distilled from a <code>rexnet150</code> without the <code>time</code>-aware head</li> <li><code>efficientnet-b2</code> with <code>dropout</code> 0.3 and <code>drop_path</code> 0.2</li> <li><code>efficientnet-b2-time</code> - analogously, distilled from <code>efficientnet-b2</code></li> </ul> <p>The hacks already mentioned allow reaching a score of 0.7471 with 5 <code>rexnet150</code> models trained on the full data with <code>batch_size=336</code> for 200 epochs plus knowledge distillation, on a single <code>A100 (40GB)</code>. To boost performance we ensemble <code>rexnet150</code> with <code>efficientnet-b2</code> from <code>torchvision</code>. To boost performance yet another time we use an attention head, a copy-paste of the <code>self-attention</code> operation from <code>transformers</code>. 
All experiments were conducted in mixed precision with the <code>bfloat16</code> type.</p> <p>The ensemble of 5 models with the prefix <code>rkl-rx150-wdfbn-rx150-time-ema-ep200-full-{5seeds}</code>, i.e. <code>rexnet150</code> with an attention head for time awareness distilled from a plain <code>rexnet150</code>, achieves 0.75151 on the private set and 0.83261 on the public set.</p> <h4>We didn't try:</h4> <ul> <li>External data not from BirdClef2021/BirdClef2022</li> <li>SSL</li> <li>SED models</li> <li>Utilization of BirdNet</li> <li><code>eca_nfnet</code> from <code>timm</code></li> </ul> <h4>We tried, but it didn't work:</h4> <ul> <li>Pretraining our models on BirdClef2021/BirdClef2022; we made a few attempts, it didn't show any improvement on LB, and running experiments became much slower, so we gave up on the idea</li> <li>Any background noise (even the noise used in the previous year's competition)</li> <li>Any LR scheduler</li> <li>Any noise (Gaussian, colored)</li> <li><code>tf_efficientnet_b2_ns</code> from <code>timm</code></li> <li><code>SwinV2</code>/<code>ConvNeXt</code> from <code>torchvision</code></li> <li>Manifold MixUp</li> <li><code>focal_loss</code>, <code>soft_focal_loss</code></li> <li>Weighted samplers</li> <li><code>SpecAugment</code></li> </ul> <h4>Speeding up inference time</h4> <p>We measured inference time with random input in a Kaggle kernel. I don't remember exactly which Intel processor it was; in our Kaggle notebooks we usually got an <code>Intel Xeon 2.2 GHz</code>. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13651743%2F0d18919aee04af8f54bfe0880472543d%2Fphoto_2023-05-25%2023.34.07.jpeg?generation=1685046878803412&amp;alt=media" alt=""></p> <p>Our thoughts: <code>ONNX</code> by itself utilizes 2 CPUs; <code>OpenVino</code> is faster because it additionally fuses some layers for inference and compiles specifically for the CPU of the submission notebook, taking advantage of the processor type</p> <h4>Appendix: <code>MultiHeadAttentionClassifier</code>'s implementation. All hyperparameters are preserved</h4>
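<p>The appendix code did not survive in this copy of the write-up, so below is a minimal, hypothetical sketch of an attention-based classifier head of the kind described above: self-attention over the time axis of backbone features. The layer sizes and class count here are assumptions, not the authors' preserved hyperparameters:</p> <pre><code>import torch
import torch.nn as nn

class MultiHeadAttentionClassifier(nn.Module):
    # Hypothetical sketch: self-attention over time frames of backbone
    # features, residual + layer norm, mean pooling, linear classifier.
    def __init__(self, embed_dim: int = 1280, num_heads: int = 8, num_classes: int = 264):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -&gt; torch.Tensor:
        # x: (batch, time, embed_dim) - CNN feature map pooled over frequency
        attn_out, _ = self.attn(x, x, x)
        x = self.norm(x + attn_out)      # residual connection
        return self.fc(x.mean(dim=1))    # pool over time, then classify
</code></pre>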
Blue Book for Bulldozers
Final Placement
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Blue Book for Bulldozers <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Admins!</p> <p>When will the final placement be announced?</p> <p>Thanks</p>
Bosch Production Line Performance
#3 place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bosch Production Line Performance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It is another way of saying: get the count of leak values within a station. Two components may have the exact same numeric values in a station - if no other components have those exact values, those two components get a count of two. Likewise, three components which have the same numeric values get a count of 3 assigned, etc. </p> <p>Hope this explains the OOF rollmean; it's a bit easier to code than to explain. If R is not to your liking, pandas has a rolling_mean function also. </p> <pre><code>&gt; library(zoo);library(caret);set.seed(2)
&gt; vec = rep(0, 40)
&gt; vec[sample(1:40, 10)] = 1
&gt; # Create a vector we will use as our target
&gt; vec
 [1] 0 0 0 0 1 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0
&gt; # The next line is the infold rollmean, with window of 5. This will overfit on your training data
&gt; rollmean(vec, 5, fill=NA)
 [1]  NA  NA 0.2 0.2 0.4 0.6 0.6 0.4 0.4 0.2 0.0 0.0 0.2 0.2 0.2 0.4 0.4 0.2 0.2 0.4 0.2 0.2 0.2 0.2 0.0 0.2 0.2 0.2 0.2 0.2 0.0 0.2 0.2 0.4
[35] 0.4 0.4 0.4 0.4  NA  NA
&gt; # Now, let's get the out-of-fold rollmean with window of 5
&gt; # We need an index based on the vector with two folds
&gt; idx = createFolds(vec, 2, list=F)
&gt; idx
 [1] 1 1 2 1 2 1 1 1 2 2 2 2 2 2 2 2 2 2 1 1 1 2 2 1 2 1 1 1 2 1 1 2 2 1 1 1 1 1 2 2
&gt; # Create a vector of results for each fold.
&gt; fold1 = fold2 = vec
&gt; # get OOF rollmean of fold1
&gt; fold1[idx==2] = 0
&gt; fold1 = rollmean(fold1, 5, fill=NA)
&gt; # get OOF rollmean of fold2
&gt; fold2[idx==1] = 0
&gt; fold2 = rollmean(fold2, 5, fill=NA)
&gt; # Get the result using only OOF elements
&gt; result = fold1
&gt; result[idx==1] = fold2[idx==1]
&gt; result
 [1]  NA  NA 0.0 0.2 0.2 0.2 0.2 0.0 0.4 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.4 0.2 0.0 0.0 0.2 0.0 0.0 0.0 0.0 0.2 0.0 0.0 0.2 0.2 0.0
[35] 0.0 0.0 0.2 0.2  NA  NA
&gt; plot(vec, xlab = "Sequence of Components", ylab = "Passed-0 Failure-1", type="l", col = "blue", ylim = c(0,1), lwd=1)
&gt; lines(rollmean(vec, 5, fill=NA), type="l", col="red", lwd=3, lty=2)
&gt; lines(result, type="l", col="green", lwd=3, lty=2)
&gt; legend("topright", c("Actual failures", "Rolling mean of failures", "Out of fold rolling mean"),
+        col=c("blue", "red", "green"), lty=c(1,3,3), lwd=c(1,2,2))
</code></pre> <p><img src="https://www.dropbox.com/s/t75djdlf5l10r9x/OOFRollingMean.png?dl=1" alt="enter image description here" title=""></p> <p>@willie, thanks! Will try that.</p> <p>Sort of the same as the previous rollmean code, but instead of calculating the rollmean, put into fold1 and fold2 what was the n-1 value (lag), or the n+1 (lead) value, for each element. Mathias implemented this so I may not have it exactly correct. </p> <p>@mr.z .. Flowpath is from all stations</p> <p>Five</p> <p>Sure jiandong, will put up a lot of the feature part on github anyway... may be a couple of weeks though. </p> <p>Before starting... a big thanks to the Bosch team and all of the folks who contributed a lot to the community's learning in this competition &#8211; among others, JohnM's (superb!) 
and gingermans' flowpath visualisations; CPMP and Daniel FG's super fast MCC measurement functions for R and Python; Mathias (Faron) and Marios for releasing the leak early and making it a competition; all the wrappers for lightgbm (I used Raddar's R functions, not to take away from the great ones by Laurae (great top-10 finish!), Aradalan and others) &#8211; and the Microsoft team for releasing it. <br> Congrats to Gabor, Ash, Daniel FG, scnndl (looking forward to all your solutions!) and all other players.... and of course my team mates Marios, Mathias and Stas! I told Marios at the start I wanted to get GM status from this competition, and he was determined to hold me to that until the end :)</p> <p>For feature engineering, here is what worked, roughly in order of effectiveness:</p> <ul> <li>Mathias's leak shown in the <a href="https://www.kaggle.com/mmueller/bosch-production-line-performance/road-2-0-4">public script</a> </li> <li>A second leak, used per station, which worked especially well on L3_S29 and L3_S30. If you check JohnM's <a href="https://www.kaggle.com/jpmiller/bosch-production-line-performance/flowpath-viz/code">viz</a>, you can see these lines have very few NAs, so the order by date can be captured very well. On other lines, with more NAs, the date order is more broken. We basically sorted on the dates in each station and put an indicator for sequential duplicate numeric rows within the station. We created a leak feature like this for each of the stations, but L3_S29 and L3_S30 worked particularly well. </li> <li>Related to the second leak above: within each station, column-bind the numeric values into a string, and get the count of the unique strings. </li> <li>Row-wise sum of the count and the leak (from the last two bullet points), over all the stations. </li> <li>Out-of-fold rolling mean of the target (using 2 folds to replicate the train/test split), sorted based on the start and end dates of all stations. We tried different rolling-mean windows &#8211; 5, 10, 20, 100, 1000, 5000 &#8211; and interestingly it was the really big windows which helped most, probably because they captured periods with a low failure rate. </li> <li>Lag and lead of the target, out of fold. </li> <li>Encoded a few categorical columns with an out-of-fold Bayesian mean - there was one cat var with a lot of levels which captured a lot of 1s particularly well. This encoding was limited to the non-duplicated categoricals which had over circa 7 unique values. </li> <li>Counts of non-duplicated categorical columns, counts of non-duplicated date and numeric columns. </li> <li>Encoded the paths of whether each row passed through stations. </li> <li>Row-wise NA counts for numeric, as well as max/min per station. </li> </ul> <p>We had about 50 models in a stack, made up at the second level of xgb, neural net... and I think linear models. First-level models were xgb, lightgbm, extratrees, rf, glmnet, neural net and others. The best single model was lightgbm with LB 0.502. As I guess everyone experienced, the stack was very unstable, which led to a lot of frustration and long periods of stagnation. </p> <p>We had an unplanned boost in the last hour of the competition :) With the instability, as BreakfastPirate pointed out, there seemed to be a few hundred rows acting as the main drivers of the last bit of score. We downloaded around 25 subs (containing the 1s and 0s) from the LB which scored over 0.51, then column-bound these subs and took the row-wise median of the 25 Response values. 
This gave a nice boost - around 0.003 on public and private &#8211; and moved us up to fourth place on the public LB... with 20 mins to go. We had 2 more subs and tried different thresholds, as opposed to the median, but nothing improved on the straight median of the 25 best subs. <br> A histogram (cut off on the y-axis) of the row-wise sum of the 25 subs can be seen below... </p> <p><img src="https://www.dropbox.com/s/z9b91qrh2cg0mhy/25subs.png?dl=1" alt="enter image description here" title></p>
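<p>A minimal sketch, in Python/pandas with hypothetical file names, of the row-wise median blend of binary submissions described above (the write-up used ~25 downloaded submissions that each scored over 0.51):</p> <pre><code>import pandas as pd

# Hypothetical file names for the downloaded high-scoring submissions.
files = ["sub_01.csv", "sub_02.csv", "sub_03.csv"]

subs = [pd.read_csv(f) for f in files]
blend = subs[0][["Id"]].copy()

# Column-bind the 0/1 Response columns and take the row-wise median:
# with an odd number of submissions this is a majority vote.
responses = pd.concat([s["Response"] for s in subs], axis=1)
blend["Response"] = responses.median(axis=1).round().astype(int)

blend.to_csv("median_blend.csv", index=False)
</code></pre>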
Bosch Production Line Performance
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bosch Production Line Performance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Russ W;144584]</p> <ol> <li><p>For &quot;leak stratified&quot;, does that mean that you kept chunks together in separate folds, or you split up the chunks equally across folds?</p></li> <li><p>What xgb parms did you typically use for L1, L2, and L3? Did you use any other modeling methods to ensemble?</p></li> </ol> <p>[/quote]</p> <ol> <li><p>We allowed the same chunk to appear in multiple folds. We assumed that Train/Test and Public/Private were simple random splits.</p></li> <li><p>I tried ExtraTreesRegressor a few times. Deeper trees worked better for me, but we had all sorts of different xgbs in the ensemble. One of the best xgb parameter sets:</p> <ul><li>early stopping <strong>auc</strong></li> <li>alpha <strong>0</strong></li> <li>booster <strong>gbtree</strong></li> <li>colsample_bytree <strong>0.6</strong></li> <li>min_child_weight <strong>5</strong></li> <li>subsample <strong>0.9</strong></li> <li>eta <strong>0.03</strong></li> <li>objective <strong>binary:logistic</strong></li> <li>max_depth <strong>14</strong></li> <li>lambda <strong>4</strong></li></ul></li> </ol> <p>I found it strange that decreasing eta did not help to get better models.</p> <p>Sorry for the delay. It was good to have a Kaggle-free weekend after two months. (OK, not completely Kaggle-free, but at least in read-only mode :)</p> <p>As we had a 6-7 hour local time difference (USA, EU), we tried to split the work. I was mostly doing feature engineering and trying to improve our best single xgb score. Ash tried all kinds of different modeling techniques and built the L2/L3 ensemble.</p> <h2>Data Exploration</h2> <p>I spent the first two weeks on data discovery. Numeric feature plots, statistics, station-to-station transition probabilities, and correlation matrices for each station were necessary to get ideas for how to handle the thousands of anonymous features.</p> <p>I start each competition with a desktop machine (16GB RAM). It takes some extra effort in the beginning to deal with lower-level data manipulations due to the memory constraints. Getting closer to the raw data sometimes helps later. </p> <p>As the numeric/date features had .3f/.2f precision, we could rescale (*=1000, *=100) everything to integers. We kept only the min time for each station. Sparse matrices came in handy, as we had a lot of missing values. </p> <h2>Leak/Magic/Data Property Features</h2> <p>Ash found quite early that consecutive rows had feature duplication and correlated Response. 
We called consecutive Ids ordered by StartStation, StartTime *chunk*s.</p> <p>For each chunk we used these features:</p> <ul> <li>number of records</li> <li>rank ascending</li> <li>rank descending</li> </ul> <p>We used the original order to add features based on previous and next records:</p> <ul> <li>Response</li> <li>feature hash equality for each station</li> <li>StartTime, EndTime</li> <li>Numeric raw features</li> </ul> <h2>Time Features</h2> <p>I figured out in the beginning that <a href="https://www.kaggle.com/gaborfodor/bosch-production-line-performance/notebookd19d11e4f2">0.01 time granularity probably means 6 mins</a>.</p> <p>So we had to deal with 2 years of manufacturing data. This observation did not give a direct performance boost, but it gave us enough intuition to construct a lot of time-based features.</p> <ul> <li>StartStationTimes</li> <li>StartTime, EndTime, Duration</li> <li>StationTimeDiff</li> <li>Start/End part of week (mod 1680)</li> <li>Number of records in next/last 2.5h, 24h, 168h for each station</li> <li>Number of records at the same time (6 mins)</li> <li>MeanTimeDiff since last 1/5/10 failure(s)</li> <li>MeanTimeDiff till next 1/5/10 failure(s)</li> </ul> <h2>Numeric Features</h2> <ul> <li>Raw numeric features (most of the time we used the raw numeric features or simple subsets based on xgb feature importance)</li> <li>Z-scaled features for each week</li> <li>Count encoding for each value </li> <li>Feature combinations (f1 + - * f2)</li> </ul> <p>We saw an interesting similarity in the station transition probability matrix between stations S0-S11 and S12-S23. You have probably seen the same in the Shopfloor visualizations. We also noticed that the number of features is the same across these station groups.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/144544/5351/StationFeaturSimilarity.png?sv=2012-02-12&se=2016-11-17T16:25:47Z&sr=b&sp=r&sig=pEgr%2B4FRBrS2Owl2ctg5sOEMBzMbWQoL78R%2FEed8h%2BA%3D" alt="enter image description here" title></p> <p>The correlation matrices were similar for these stations, so we tried combining the numeric features for the same stages (e.g. L0_S0_F0 + L0_S12_F330 etc.).</p> <h2>Categorical Features</h2> <p>We could not squeeze much out of the categorical features. Most of the time we just dropped them. We had a few models with aggregated categorical features for each station in [24, 25, 26, 27, 28, 29, 32, 44, 47]. 'T1' was replaced by 1; all other values were replaced by 100.</p> <h2>Validation</h2> <p>Given the imbalanced labels and the binary evaluation criterion, we noticed quite high 5-fold CV stds (~0.01) in the beginning. After a few experiments we decided to use a 4-fold &quot;leak-stratified&quot; CV (forcing each fold to have the same number of duplicates). Each &quot;single&quot; L1 model was trained with 3 different seeds on the same folds. Our latest ensemble scores were quite stable (std &lt; 0.004), and very often our CV and LB scores improved hand in hand.</p> <h2>Rebalancing</h2> <p>We kept every failure and all records with duplication. For the remaining 0s, 50-90% downsampling was used. It made the training a bit faster and worked as bagging for the model-averaging part, helping the later ensemble stages.</p> <h2>Results</h2> <p>In the last month of the competition we continuously improved and tried to keep ourselves in the top 3. Fortunately we could improve both CV and LB significantly with our last submissions on Friday. This made the submission selection easier. 
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/144544/5348/results.PNG?sv=2012-02-12&se=2016-11-17T16:01:24Z&sr=b&sp=r&sig=%2FBvQoJg5kEuzvBWFBoHFs7GU%2Fgr7AotglS54yYgZk3M%3D" alt="enter image description here" title></p> <p>[quote=CPMP;144552] Can you explain a bit more this:</p> <ul> <li>MeanTimeDiff since last 1/5/10 failure(s)</li> <li>MeanTimeDiff till next 1/5/10 failure(s)</li> </ul> <p>Mean over what? <br> [/quote]</p> <p>Mean(EndTime for latest K failures where EndTime &lt; CurrentEndTime) - CurrentEndTime</p> <p>Mean(EndTime for the first K failures where EndTime &gt; CurrentEndTime) - CurrentEndTime</p>
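<p>To make that last answer concrete, here is a minimal pandas sketch (with assumed column names <code>EndTime</code> and <code>Response</code>, not the team's actual code) of the &quot;MeanTimeDiff since last K failures&quot; feature:</p> <pre><code>import pandas as pd

def mean_time_since_last_k_failures(df: pd.DataFrame, k: int) -&gt; pd.Series:
    # df must be sorted by EndTime and have a binary Response column.
    fail_times = df.loc[df["Response"] == 1, "EndTime"]
    # Mean EndTime of the latest k failures, evaluated at each failure row...
    rolling = fail_times.rolling(k).mean()
    # ...then carried forward so every row only sees failures strictly before it.
    aligned = rolling.reindex(df.index).shift(1).ffill()
    return aligned - df["EndTime"]

# Usage sketch: df["mtd_last_5"] = mean_time_since_last_k_failures(df, 5)
</code></pre>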
Bosch Production Line Performance
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bosch Production Line Performance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The main advantage is of course that you can train (and fine-tune with consecutive runs) an entire level 2 ensemble in minutes instead of days, which is the reason I could do it in the last week. In order to take all the given information into account, I still used all data for level 1, but this was all done in one run.</p> <p>However, there is a small twist too: due to the MCC evaluation metric, there may be a small performance gain in training the final model only on the &quot;higher probability&quot; samples, and thus having more accurate predictions near the top, whereas lower probabilities/ranks do not matter.</p> <p>First of all: congratulations to all 3 winning teams for your great results, as well as to all other participants who put in considerable work and finished in good (but maybe not top) positions! Thanks also to kernel/forum contributors!</p> <p>In fact, I had practically quit after seeing the word &quot;leakage&quot; in the forum. However, the beautiful post by JohnM with the workflow visualization, which I had read early on, made me reconsider in the last week, since I had long intended to try out stacking over station groups and thought it could be worth a try anyway (and as a bonus it turns out that this also somewhat demystifies the &quot;leak&quot;). This is why I submitted the sample submission on the cut-off day and then rushed to finish with 12 submissions in the last 3 days.</p> <p>Due to the latter and the robust CV approach, my scores seemed to follow the pattern private &gt; public &gt; CV (and the model performance order was consistent). However, a downside is that I did not effectively use any feedback from the LB, which maybe could have been helpful... but also very tricky!</p> <p>Here is an overview of my solution:</p> <ul> <li><p>&quot;Batch features&quot;: These seem equivalent (or a tiny bit more powerful/expressive) compared to the &quot;magic features&quot;.</p> <ol><li>Identify samples that are exactly identical (it turns out these always have consecutive Ids), and count the number of duplicates (powerful) and the Id rank (somewhat useful as this is additional info). </li> <li>Count the number of <em>different</em> samples in almost identical &quot;batches&quot;, using the workflow for guidance (very powerful). It is convenient that these have consecutive Ids too, although Id here does not give extra info as these seem to be ordered by consecutive columns. So, lines can be subdivided into (almost) mutually exclusive station groups (L0: [S0-S11] and [S12-S23], L1 and L2 by station, L3: [S29-S37, S38] and [S39-S51], where S38 must be considered separately in the end), and starting from earlier to later groups, &quot;batches&quot; are samples that are duplicates up to this point (the most common pattern consists of samples followed by exact duplicates everywhere except in S38).</li> <li>The value of the Id rank consists in almost excluding the very last Id in a given &quot;batch&quot; (under both definitions above simultaneously) from potential failures. 
Everything considered, this identifies 5265 failures in ~70k rows (from &lt;7k failures in ~1.1M rows) in the train set, and correspondingly ~70k rows in the test set. </li></ol></li> <li><p>Stacking based on different feature subset combinations:</p> <ol><li>Level 1, using all data: almost all combinations of the following: a. train models with and without numeric features, b. with and without &quot;batch&quot; features, c. use the described workflow decomposition to train on separate station groups</li> <li>Level 2: blend the metafeatures with (a selection and minor adaptation of) the original features, training only on previously identified &quot;batches&quot; (i.e. ~70k rows).</li> <li>Algorithms: for level 1, just xgboost...; for level 2 (in order of performance), xgboost, random forest, extratrees, keras. The final solution was a simple average.</li></ol></li> <li><p>Trading off efficiency in favor of robustness:</p> <ol><li>Level 1 cross-validation based on a customized &quot;StratifiedLabelKFold&quot; with shuffling, split on &quot;batches&quot; instead of samples. Each fold contained all corresponding (exact or approximate) duplicates, and models were tuned based on AUC. The trade-off is that the test metafeatures were (probably) more accurate compared to the CV metafeatures, and I thought this would make for a more robust approach, at the cost of some loss of information as regards absolute performance.</li> <li>In level 2, tuning was based on plain stratified folds, using the lower bound of the 99% confidence interval of a moving average of MCC versus prediction rank as the evaluation metric. The latter (is simpler than it might sound and) was helpful in finding a robust cut-off threshold (namely, ~2860 ones in my final solution).</li></ol></li> </ul>
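<p>A minimal sketch of the &quot;split on batches instead of samples&quot; cross-validation idea, using scikit-learn's <code>StratifiedGroupKFold</code> as a stand-in for the customized &quot;StratifiedLabelKFold&quot; described above (not the author's exact code):</p> <pre><code>import numpy as np
from sklearn.model_selection import StratifiedGroupKFold

# y: binary Response; groups: "batch" id shared by (near-)duplicate samples.
y = np.array([0, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 1, 1, 2, 3, 3, 4])

cv = StratifiedGroupKFold(n_splits=2, shuffle=True, random_state=0)
for train_idx, val_idx in cv.split(np.zeros(len(y)), y, groups):
    # Every member of a batch lands in the same fold, so out-of-fold
    # metafeatures never see a duplicate of a validation row.
    print(train_idx, val_idx)
</code></pre>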
Bosch Production Line Performance
8th place solution (team LAJMBURO)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bosch Production Line Performance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Below is the solution of team LAJMBURO (8th private LB (0.50888), 11th public (0.51001)).</p> <p>This post is intended for two audiences:</p> <ul> <li>If you are interested only in the models/features, then read part 1 up to and including &quot;Features&quot; and you can skip everything else (though you might want to read &quot;Validation method&quot; and &quot;Submission method&quot; from part 2 of this post).</li> <li>If you are interested in how a team of 8 works, then read from &#8220;The name of the team&#8230;&#8221;, which is part 2.</li> </ul> <hr> <h2>Part 1: Models &amp; Features</h2> <p>Two parts:</p> <ul> <li>Modeling and data</li> <li>Features</li> </ul> <h3>Modeling and data</h3> <p>We used simple <strong>level 1 models</strong>:</p> <ul> <li>LightGBM</li> <li>xgboost</li> <li>Random Forest</li> <li>Neural Networks (didn&#8217;t get picked up on level 2, so we removed it)</li> </ul> <p>We had <strong>five level 1 datasets</strong> (gbm=LightGBM, xgb=xgboost, rf=Random Forest; the features do not exactly cover all the features described later). I&#8217;m including their <strong>mean MCC on cross-validation</strong> (for the Public LB score, add at least 0.02; numbers might be off a bit):</p> <ul> <li>Data set 1 (0.477 gbm): order, raw numeric, date, categorical</li> <li>Data set 2 (0.482 gbm, 0.477 xgb, 0.473 rf): order, path, raw numeric, date</li> <li>Data set 3 (0.479 gbm, 0.473 xgb): order, path, numeric, date, refined categorical</li> <li>Data set 4 (0.469 xgb, 0.442 rf): features sorted by numeric values + date features + faron&#8217;s magic features, path, unsupervised nearest neighbors (L1 = Manhattan / L2 = Euclidean distances) per label</li> <li>Data set 5 (0.43 xgb): faron&#8217;s magic features, path, unsupervised nearest neighbors</li> </ul> <p>The <strong>level 2 data set</strong> was a bit special, and included:</p> <ul> <li>Level 1 predictions (we had 12 predictions from level 1)</li> <li>Data set 5</li> <li>Duplicate feature (count and position)</li> </ul> <p><strong>Level 2 stack models</strong> (giving the weaker model a stronger weight was better):</p> <ul> <li>30% weighted xgboost gbtree (~0.488 CV)</li> <li>70% weighted Random Forest (~0.485 CV)</li> </ul> <p>We had a very <strong>minor issue with overfitting the LB</strong>:</p> <ul> <li>We used a threshold multiplicand of 0.92 on the cross-validated threshold (which seemed the best multiplicand): 0.51001 Public LB, 0.50888 Private LB</li> <li>If we had used the threshold from cross-validation without a multiplicand, we would have had 0.50923 Public LB, 0.50954 Private LB. But that's not an issue at all overall.</li> </ul> <p>In case it overfitted severely, we also made a backup set which is leakage-free (using only features that don't use the label directly) and without a multiplicand. For instance, features like the unsupervised nearest neighbors were removed. 
A single xgboost provided 0.506 on the Public/Private LB, good enough for a gold medal.</p> <h3>Features:</h3> <p><strong>Order</strong>:</p> <ul> <li>Faron&#8217;s magic features (sorted by max time, min time)</li> <li>Magic features sorted by numeric values</li> <li>Same timestamps for the previous/next sample after sorting by tmax</li> <li>Previous / next response for duplicates</li> </ul> <p><strong>Path</strong>:</p> <ul> <li>Redefined station and line numbers based on timestamps, to make sure the timestamp was constant for each station</li> <li>Clustered samples by path and calculated, per sample, the absolute and relative difference between numeric / time-difference (max - min) features and the cluster mean. </li> <li>Path-based error rate (leave-one-out method), but it did not improve the score</li> <li>Merged features based on path</li> <li>Entry / exit station</li> <li>Previous and next station for S29 - S37 (one-hot encoded)</li> </ul> <p><strong>Datetime</strong>:</p> <ul> <li>Timestamp per station, per line, merged per path</li> <li>Max-min per station, per line, merged per path</li> <li>Kurtosis + kurtosis per line (very strong, surprisingly)</li> <li>Lead/lag response rate statistics after sorting by tmax, id</li> <li>Lead/lag response rate statistics after sorting by tmax, numeric, id</li> <li>Timestamp-based error rate (leave-one-out method), but it did not improve the score</li> <li>Timestamp label-based density (overfit)</li> </ul> <p><strong>Numeric</strong>:</p> <ul> <li>Used supervised decision trees to try to predict the numeric value using the date, per station =&gt; the split points allow defining thresholds, which clustered the values by group of dates (over 4000 features) =&gt; used xgboost per station to shrink it to ~120 features, as adding 4000 features didn&#8217;t help at all</li> <li>Numeric after removing the time trend (did not improve the score)</li> <li>Raw numeric data (a specific selection of them)</li> </ul> <p><strong>Categorical</strong>:</p> <ul> <li>A couple of one-hot encoded features added a little bit</li> </ul> <h3>In the end</h3> <p>Reaching 0.500+ LB is possible in 10 minutes using a single model (LightGBM) under 16GB RAM with appropriate features. But creating these features does need more than 16GB RAM&#8230;</p> <p>xgboost can outperform LightGBM if tuned appropriately (my tuned xgboost was scoring 0.483 CV, which confirmed itself on the LB, beating all our cross-validated model submissions including LightGBM), but it is much slower than LightGBM! (and was eating 58GB RAM.. mmm, yummy! when you know LightGBM used only 16GB at most!)</p> <hr> <hr> <p>If you are not interested in reading about the team (and only wanted the models/features), you can skip everything below (except perhaps the &quot;Validation method&quot; and &quot;Submission method&quot; sections, which you might want to read).</p> <hr> <hr> <h2>Part 2: The team</h2> <p>Multiple parts:</p> <ul> <li>The name of the team</li> <li>Team composition (this is what happens when you play the recruiter for a team project)</li> <li>Validation method</li> <li>Stuff we used in the team</li> </ul> <h3>The name of the team&#8230;</h3> <p>It is just the concatenation of the initials of the team members! We didn&#8217;t have a hard time finding it&#8230; because I put it in temporarily as a placeholder name. And then we never even chatted about the name...</p> <h3>Team composition</h3> <p>I had so many people applying after my Red Hat forum post that I had to make a selection. 
For a team of 6 (that's big for Kaggle - but &quot;big&quot; is not that big, really) working like a real project team, you need a lot of balance and diversity in skills. Therefore, using the 2 to 20+ lines people sent me when &#8220;applying&#8221; (either directly by email for those who had my email, through Skype, through text, or through Kaggle), I selected 4 people (I already had one in my team) according to their skills.</p> <p><strong>Finally, we had:</strong> (including a diversity of R/Python)</p> <ul> <li>One who can go &#8220;ham&#8221; trying to push on the LB (Shubin) - great competition spirit</li> <li>One with a lot of &#8220;let&#8217;s try this&#8221; features (Joost) - it helped a lot</li> <li>One who can methodically analyze what he has and what he gets as feedback to improve (Michael) - very good at avoiding going too fast</li> <li>One who can help provide better insights using R packages over Python packages (Rodolphe) - great at providing alternatives</li> <li>One who is more about organization and providing a good kickstart with sample test scripts for the team (Alex; see <a href="https://www.kaggle.com/apapiu/house-prices-advanced-regression-techniques/regularized-linear-models">what</a> he can do in the House Prices competition)</li> </ul> <p>And from my experience running hackathons with mandatory teams, I know that learners:</p> <ul> <li>Start very fresh in mind and are extremely committed early</li> <li>Slowly drop out (not every person) as time passes and the required commitment increases (which is exactly what must not happen when kaggling)</li> </ul> <p>In addition, we all have an RL (a real life) outside the world of Kaggle, so availability could be somewhat scarce.</p> <p>Hence, typically we were 2 to 4 people working on this competition. That&#8217;s not really a big team, and it was extremely easy to handle.</p> <p>Two team events happened:</p> <ul> <li>Adding jayjay to our team, when he asked for a team (while he was in the top 15 of the Public LB with 0.49+), as he had a solid feature set (we knew it could be overfitting a bit before even merging with him, as our team had experienced massive differences just by changing the threshold for MCC)</li> <li>Adding onthehilluu to our team, to think more about pushing on the LB (feature insight) and sharing knowledge/experience (when we merged with jayjay and managed to push a bit on the LB with his single model, the initial objective of sharing experience was left behind, and the new objective was set to get a gold medal, i.e. top 12)</li> </ul> <p>And frankly, we were stuck with jayjay&#8217;s set for about 15 days. Literally, it took us that long before we started to see real improvements: there was some strange feature conditioning which made all the sets we had play badly together (pairwise, with jayjay&#8217;s). When we started to add new features (not from any existing feature set we had), real improvements kicked in. And there was only 1 week left when we started seeing good improvements&#8230;</p> <h3>Validation method</h3> <p>We used 5-fold cross-validation for our supervised models. I generated folds very early during this competition so we could all have the same basis to work on this data. But then we found my folds might not be stable enough! Therefore, someone created &#8220;stable folds&#8221; to use. However, what we found is that &#8220;too stable&#8221; is &#8220;not good enough&#8221;. The issue probably lay in the duplicates. 
Therefore, we decided to stick with my initial folds, which were actually very good, as we could spot fold instability very easily!</p> <h3>Submission method</h3> <p>This one takes a very long text to explain. There is no magic behind submitting (compute the threshold from CV; a minimal sketch of this step is at the end of this post), but not wasting submissions is another matter. Submitting to the LB was not in itself very tricky. What made it tricky was the metric to optimize - either 0s or 1s - plus the fold instability... That was literally doom. I worked on a &#8220;submission creator&#8221; which took CV predictions (out of fold), CV test predictions, and full-model predictions, and which output a lot of different metrics in addition to a diagnostics file and 10 submission files.</p> <p>Each of these 10 submission files had their positive count printed in the diagnostics. Here is an example of the diagnostics.</p> <pre><code>Fold 1 converged after 0836 iterations.
Fold 2 converged after 0571 iterations.
Fold 3 converged after 0499 iterations.
Fold 4 converged after 0585 iterations.
Fold 5 converged after 0566 iterations.
Iterations: 611.40 + 129.874
Fold 1: AUC=0.9125954
Fold 2: AUC=0.9221191
Fold 3: AUC=0.9188253
Fold 4: AUC=0.9200112
Fold 5: AUC=0.9183175
AUC: 0.9183737 + 0.0035463
Average AUC using all data: 0.9037691
Fold 1: MCC=0.4756098 (0534 [38.84%] positives), threshold=0.3749080 =&gt; True positives = 76.592%
Fold 2: MCC=0.4769881 (0437 [31.76%] positives), threshold=0.5057880 =&gt; True positives = 84.897%
Fold 3: MCC=0.4801341 (0560 [40.70%] positives), threshold=0.3444360 =&gt; True positives = 75.536%
Fold 4: MCC=0.4737958 (0605 [43.97%] positives), threshold=0.3298900 =&gt; True positives = 71.736%
Fold 5: MCC=0.4829743 (0615 [44.69%] positives), threshold=0.3086210 =&gt; True positives = 72.520%
MCC: 0.4779004 + 0.0036627
Threshold: 0.3727286 + 0.0781904
Positives: 550.20 + 071.37
Detection Rate %: 39.991 + 05.185
True positives %: 76.256 + 05.237
Undetected positives: 0959.20 + 0028.86
Average MCC on all data (5 fold): 0.4734361, threshold=0.3727286
Average MCC using all data: 0.4747755, threshold=0.3478090
Submission overfitted threshold on all MCC positives: 2896
Submission average validated threshold on all MCC positives: 2736
Submission average of overfit+validated threshold positives: 2814
Submission with all data overfitted threshold on all MCC positives: 2933. Threshold=0.347809
Submission with all data average validated threshold on all MCC positives: 2803. Threshold=0.3727286
Submission with all data average of overfit+validated threshold positives: 2867. Threshold=0.3602688
Submission with all data by taking the amount of positives of overfitted threshold on all MCC positives: 2896. Threshold=0.354801
Submission with all data by taking the amount of positives of average validated threshold on all MCC positives: 2736. Threshold=0.385727
Submission with all data by taking the amount of positives of average of overfit+validated threshold positives: 2814. Threshold=0.370321
Submission with all data by taking the sum of positives of validated positives: 2751. 
Threshold=0.383094
Cross-validated used features list (all used features to copy &amp; paste): c(&quot;sameL0_next&quot;, &quot;sameL1_next&quot;, &quot;CATEGORICAL_Last_____1&quot;, &quot;S24.311&quot;, &quot;sameL3_next&quot;, &quot;GF1&quot;, &quot;GF0&quot;, &quot;FOR30_Sum_S&quot;, &quot;CATEGORICAL_out_out_L3_S32_F3854_class2&quot;, (censored because way too long)
Cross-validated multipresence used features list (all used features to copy &amp; paste): c(&quot;sameL0_next&quot;, &quot;sameL1_next&quot;, &quot;CATEGORICAL_Last_____1&quot;, &quot;S24.311&quot;, &quot;sameL3_next&quot;, &quot;GF1&quot;, &quot;GF0&quot;, &quot;FOR30_Sum_S&quot;, &quot;CATEGORICAL_out_out_L3_S32_F3854_class2&quot;, (censored because way too long)
</code></pre> <p>Does it look like Greek when you try to tell what is wrong and what is OK? At first sight, yes. I made a small tutorial (by chat message&#8230; not the most obvious medium) to help my team (those modeling and/or testing features internally) decipher the diagnostics file. Here are some good excerpts of what I wrote:</p> <blockquote> <p>My folds have extreme feature variance on each fold (not intentional, but it seems to help a lot), so if your features don't work/generalize on several folds, it will decrease the stability.</p> <p>CV variance is OK, but too much variance is not. I tested over 20 combos on 3-4-5-6 CV folds locally to test how variance can increase:</p> <ul> <li>at 6 folds, the standard deviation (sd) is about 0.01</li> <li>5 folds, sd about 0.006</li> <li>4 folds, sd about 0.006 (not sure?)</li> <li>3 folds, sd about 0.004</li> </ul> <p>At the top of the diagnostics, you have everything you need to analyze the model, so you can check for consistency &quot;out-of-fold&quot; (out-of-fold predictions) and &quot;all-of-out-of-fold&quot; (taking all out-of-fold predictions together): AUC, MCC, threshold, positive rate, detection rate, true positives, undetected positives (false negatives), MCC using the 5-fold average threshold, MCC using all data.</p> <p>If the difference between the two is high, most of the thresholding methods we use will fail (and we have a fallback for this scenario). For instance, 0.92 avg AUC out of fold vs 0.85 AUC on all data is not stable for submission. You can have high AUC and &quot;trash&quot; MCC. But we use AUC to monitor whether predictions remain stable on each fold so we don't get any surprise when training on all data (because from that point we go nearly &quot;blind&quot;). If your predicted probabilities are stable, AUC on all data should be close to the average per fold.</p> <p>When your folds are unstable (due to some XYZ feature), the difference increases, as the [0, 1] scale for one fold might not line up with the [0, 1] scale for another fold.</p> <p>My script creates 10 submissions:</p> <ul> <li>1st one: use the threshold computed on all data, on the CV-averaged prediction = mostly for debugging purposes</li> <li>2nd one: use the average threshold from CV, on the CV-averaged prediction = mostly for debugging purposes</li> <li>3rd one: take the mean of the 1st and 2nd, on the CV-averaged prediction = mostly for debugging purposes</li> </ul> <p>Then you have 3 more submissions (4th, 5th, 6th) which use the predicted probabilities on the test data with the same thresholding techniques as the 1st, 2nd, and 3rd. 
By default, the 4th submission is the best to submit (it's not that &quot;overfitted&quot;; it overfits more and more as your fold stability gets lower).</p> <p>Then you have the 7th, 8th, and 9th, which use the positive counts found with the 1st/2nd/3rd methods but on the predictions from the model trained on all data together (those should be avoided most of the time, but they are left in, in case they are needed).</p> <p>And the final one, the 10th: it takes the positive count from the CV and uses that same count on the test data from the model trained on all data. It assumes the hypothesis that the positive count in train = positive count in test, which might not be true. Obviously, no one knows whether it is or not. This is the submission you must use when your folds are clearly unstable (it is the fallback method for submitting with an unstable CV).</p> <p>If you see in these debug outputs a number like 10,000 or more (it usually sits around 2000-4000), it means your folds are unstable (captain obvious is here) and you need to look at what happened. It also means your CV-model / full-model predictions are unstable too (so only the 10th should be used, unless the one you pick has stable predictions = a value around 2000-4000). Therefore, only the fallback submission should be valid from that point, in most cases.</p> <p>If you make a submission using the 1st, 2nd, or 3rd set, you MUST get approximately what you had in CV. If you don&#8217;t, the feature you added is bad enough to be conditioned by something in the folds. When you want to test 1 or more submissions from a single model, use this order if your model is stable according to the diagnostics and your reading of it:</p> <ul> <li>4th (nearly always the best)</li> <li>6th (it's rare that you need 2 submissions - only if you clearly think your model is good - acts as a confirmatory response vs your CV if the 4th overshot the LB)</li> <li>5th (it's very rare that you will need 3 submissions to test a single model - typically happens when we don't have enough to submit; this should always score lower on the LB than the 4th/6th no matter what you do, unless you get lucky)</li> </ul> <p>If unstable: - 10th (and only this one)</p> <p>Average MCC on all data (5 fold) &lt;- use the 5-fold thresholds, average them, and compute MCC using that average threshold</p> <p>Average MCC using all data &lt;- compute the threshold on all data, use that threshold on all data</p> <p>Cross-validated multipresence features are for the case where you have features which are not picked up by your model when you input LightGBM CV output from my package. You can compare both using <code>featurelist1[which(!(featurelist1 %in% featurelist2))]</code> in R.</p> </blockquote> <h3>Stuff we used in the team</h3> <p>Mainly the most important names for a fully decentralized team working on a Kaggle competition in a (non or not-non) project mode:</p> <ul> <li>R programming language</li> <li>Python programming language</li> <li><a href="https://github.com/Microsoft/LightGBM">LightGBM</a> through the <a href="https://github.com/Laurae2/Laurae">Laurae package</a> (the fastest R implementation at that time - you can do a 5-fold CV on 1000+ features in less than 40 minutes, I/O included)</li> <li><a href="https://github.com/dmlc/xgboost">xgboost</a> (because &#8220;unidimensional machine learning at Kaggle is not Kaggle without xgboost&#8221;)</li> <li>scikit-learn Random Forest (eats RAM, but it works in Python without having to go through Java)</li> <li>H2O Random Forest (when in R you want to use parallelized and fast Random Forests&#8230;. 
T_T)</li> <li>Keras Neural Networks (who needs a neural network in the brain?)</li> <li>data.table (fread/fwrite and super fast data manipulation, please)</li> <li>Markdown, Rmarkdown, hacked pandoc (welcome Visual Studio executable hacks, by Laurae), iPython notebooks (yes, we did reporting in the team to share insights / do &#8220;bookkeeping&#8221; - in a real project mode)</li> <li>Computers (who does not need a computer to compete at Kaggle? Use a phone?)</li> <li>Monitors (because we are not blind)</li> <li>Windows / Linux (because you need an operating system on your computer&#8230;)</li> <li>AWS / Google Cloud Platform (when you need more horsepower)</li> <li>SSH, FTP &amp; Filezilla (how else do you securely share 10GB files with the whole team over Laurae's 1Gbps network?)</li> <li>RStudio / Spyder (who needs an IDE to program?)</li> <li>Remote Desktop (to share sessions on a server, troubleshooting, etc.)</li> <li>Skype / Phone (could have used Slack too, but we didn&#8217;t in the end, as Skype was faster)</li> <li>GitHub (private repository, to share code privately)</li> <li>An internet browser to go on websites (it&#8217;s obvious you need to use a browser somewhere)</li> </ul> <p>The team experience so far? Really great! So much fun and no tension in the team, with people listening to each other =&gt; a nice ambiance in which to spend time on Kaggle. And people wanting to learn from others is always a wonderful experience, as it is the <strong>TEST/challenge which checks whether your knowledge was correct or biased</strong> (yes, you can learn something wrong; it may happen).</p> <hr> <p>Ah, and I think this is the post which will put me at Master tier in Discussions. 199 medals, now 200. A good double Master tier from this competition. Anyway, great work from everyone in this competition! (not only my team but the others too! - look at all these great kernels, and especially the pathing one!)</p> <p>There are 5 new competition Masters in the top 12 of the private LB! (Alexey Noskov, Laurae, Shubin, Michael Maguire, Alexandru Papiu) - congratulations!</p>
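<p>As a footnote to the &quot;Submission method&quot; above, here is a minimal Python sketch of the &quot;compute threshold from CV&quot; step: scan the out-of-fold probabilities for the cut-off that maximizes MCC (a generic illustration, not the team's actual submission creator):</p> <pre><code>import numpy as np
from sklearn.metrics import matthews_corrcoef

def best_mcc_threshold(y_true: np.ndarray, oof_probs: np.ndarray):
    # Try each distinct (rounded) OOF probability as a cut-off and
    # keep the one that maximizes MCC on the out-of-fold predictions.
    best_t, best_mcc = 0.5, -1.0
    for t in np.unique(np.round(oof_probs, 3)):
        mcc = matthews_corrcoef(y_true, (oof_probs &gt;= t).astype(int))
        if mcc &gt; best_mcc:
            best_t, best_mcc = t, mcc
    return best_t, best_mcc
</code></pre>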
Bosch Production Line Performance
Solution using Apache Spark ML?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bosch Production Line Performance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Has anyone used Apache Spark ML and managed to get a good score on the private LB? If yes, please share your solution/strategy.</p> <p>Thanks.</p> <p>Thanks, Elena. Appreciate it. [quote=Elena Cuoco;144319]</p> <p>I started using the Spark Machine Learning library with the Python API, then I moved to Scala to integrate the xgboost4j package for Spark. Like many others in the competition, I started getting a good score after the leak, or &quot;data property&quot;, was made public by <strong>Faron</strong>. Thanks also to <strong>CPMP</strong> for his notebook for best-threshold selection.</p> <p>Here is the <a href="https://github.com/elenacuoco/bosch-kaggle-competition-spark">github repo</a> for my Spark code.</p> <p>[/quote]</p>
Bristol-Myers Squibb – Molecular Translation
10th solution private 0.61: TNT/VIT/CAIT ensemble + tricks
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>we also make some modifications to our patch-based transformer.<br> we note that resize artifacts are only present in the resized images.<br> we want the model to handle resized and original images differently.</p> <p><img src="https://i.ibb.co/nC5tC3N/Selection-201.png" alt=""></p> <p>to be updated &#8230;.</p> <p>there are two parts:</p> <ul> <li>part one: TNT/VIT/CAIT ensemble model (public/private LB 0.71/0.71)<br> <a href="https://www.kaggle.com/c/bms-molecular-translation/discussion/243766" target="_blank">https://www.kaggle.com/c/bms-molecular-translation/discussion/243766</a> (this post)</li> <li>part two: OCR + MolBuilder + object detection CNN (public/private LB 0.62/0.61)<br> <a href="https://www.kaggle.com/c/bms-molecular-translation/discussion/243809" target="_blank">https://www.kaggle.com/c/bms-molecular-translation/discussion/243809</a></li> </ul> <hr> <p>TNT/VIT/CAIT ensemble model (public/private LB 0.71/0.71)</p> <ul> <li><p>we train 5 models: 3x patch-based VIT (patch size 16, image scale = 0.8, 0.9, 1.0), 1x TNT with image size 384 and 1x CAIT with image size 240. For details, please refer to <a href="https://www.kaggle.com/c/bms-molecular-translation/discussion/231190" target="_blank">https://www.kaggle.com/c/bms-molecular-translation/discussion/231190</a><br> (local CV 0.78) </p></li> <li><p>we do 5 iterations of image perturbation. if an image gives an invalid InChI (verified by rdkit), we perturb the image and try inference again.<br> (local CV 0.76) </p></li> <li><p>finally, we apply rdkit normalisation.<br> (public/private LB 0.73/0.74, local CV 0.72) </p></li> </ul> <p>The ensemble is our top-scoring submission. We also have submissions for each of the individual models. We then sort all submission csvs by LB score. To create the final submission:</p> <ul> <li>1. We start off with predictions from the top-scoring submission csv. </li> <li>2. If a current prediction has an invalid InChI, it is replaced by the corresponding one from the next submission csv in score order if (and only if) that corresponding one is valid (see the sketch at the end of this post)</li> <li>3. The process is repeated for all csvs. <br> The final LB score is (public/private LB 0.71/0.71, local CV 0.70)</li> </ul> <p><img src="https://i.ibb.co/TYt5h7m/Selection-193.png" alt=""></p> <p>we use random shift and scale to perturb the image</p> <hr> <p>here is a tip on ensemble strategy:<br> <img src="https://i.ibb.co/F3Yns33/Selection-202.png" alt=""></p> <p>we note that the LB score is correlated with the number of invalid InChI (verified by rdkit). If you have fewer invalid InChI, you are likely to have a better LB score.</p> <p>instead of making many submissions, you can use the number of invalid InChI to estimate your LB score. To know the limit of the LB score of your current method, you can extrapolate your graph to the lowest possible number of invalid InChI. If that doesn't work well, you can change your ensemble strategy so that you have a steeper graph (i.e. a better LB score per invalid InChI).</p> <hr> <h2>Acknowledgment</h2> <p>I am grateful to Z by HP &amp; NVIDIA for sponsoring me a Z8 Workstation with dual RTX8000 GPUs for this competition. 
In the final days, I needed to run the TNT/VIT/CAIT ensemble (5 transformers) on the 1.6 million test images. Some of the test images are large and have long sequences. Besides high compute power, we also needed high GPU memory.</p> <p>One RTX8000 GPU has 48GB onboard. It took about 7 hours to process all images using the workstation. This is very fast and definitely helped our team to secure a gold medal with 10th place in the final ranking!</p> <p>example of results:<br> <img src="https://i.ibb.co/ZzL5HLk/Selection-196.png" alt=""></p> <p>I trained with an extra 2 million indigo images. I didn't exactly analyse/compare the results (due to lack of time), but it seems that it didn't help much in local cross validation.</p> <p>I think learning super-resolution could be a good solution, but I didn't have time to implement it. </p> <p>If we validate only using clean indigo images, the local CV score is about 0.65 (instead of 1.20 with kaggle noisy images)</p> <hr> <p>with the original indigo images, we are able to analyze/model the domain difference of the test and train data. In fact, our local CV and public LB are almost identical. This makes it easier to design our experiments.</p> <p><img src="https://i.ibb.co/4Jqh27S/Selection-195.png" alt=""></p> <p>the kaggle dataset is rendered by the indigo toolkit (using old version 1.4.0-beta)<br> <a href="https://lifescience.opensource.epam.com/indigo/" target="_blank">https://lifescience.opensource.epam.com/indigo/</a></p> <p><img src="https://i.ibb.co/ZzSv1mV/Selection-194.png" alt=""></p>
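<p>To make the cascading merge above concrete, here is a minimal sketch (my reconstruction, not the original code; the csv paths are hypothetical, and the <code>image_id</code>/<code>InChI</code> columns follow the competition submission format):</p> <pre><code>import pandas as pd
from rdkit import Chem, RDLogger

RDLogger.DisableLog('rdApp.*')  # silence rdkit parser warnings

def is_valid_inchi(inchi):
    # rdkit returns None when it cannot parse the InChI
    return Chem.MolFromInchi(inchi) is not None

def cascade_merge(csv_paths):
    # csv_paths: submission files sorted from best to worst LB score
    merged = pd.read_csv(csv_paths[0]).set_index('image_id')
    for path in csv_paths[1:]:
        nxt = pd.read_csv(path).set_index('image_id')['InChI']
        for image_id in merged.index[~merged['InChI'].map(is_valid_inchi)]:
            # replace an invalid prediction only if the fallback is valid
            if is_valid_inchi(nxt.loc[image_id]):
                merged.loc[image_id, 'InChI'] = nxt.loc[image_id]
    return merged.reset_index()
</code></pre>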
Bristol-Myers Squibb – Molecular Translation
110th place effort
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'd like to share a brief overview of the work I've done to get to the 110th place. That was close to the medal zone, where I actually spent a few weeks, but the competition really went wild in the last ~10 days and I couldn't spend much time on it, so in the end I didn't earn a medal, but I still learnt a lot.</p> <p><strong>Overview:</strong></p> <ul> <li>Used the data preprocessed in a few different image sizes ((256,512,3), (512,512,3), …) and max lengths (282, 282/2).</li> <li>Uploaded all the preprocessed datasets to my personal account in Google Cloud Storage, which allowed me to use TPU in Colab Pro.</li> <li>Trained some Densenet pytorch models using GPU (oof, that took some time) and some tensorflow models using TPU. The tensorflow models differed from each other mostly in hyperparameter choices and the dataset used.</li> <li>Encoders tested: Densenet-169, EfficientNet-B0/B2/B4, EfficientNetV2-B2, and other <a href="https://github.com/rwightman/pytorch-image-models/tree/master/timm" target="_blank">timm models</a> </li> <li>Decoders tested: LSTM with attention, Transformer with attention</li> <li>I got the most effective results by manually tuning the learning rate scheduler</li> <li>Post-processed ~10 submissions using <a href="https://www.kaggle.com/kirderf" target="_blank">@kirderf</a> <a href="https://www.kaggle.com/kirderf/basic-postp-with-different-submissions/" target="_blank">notebook</a></li> <li>RDKit normalization using <a href="https://www.kaggle.com/nofreewill" target="_blank">@nofreewill</a> <a href="https://www.kaggle.com/nofreewill/normalize-your-predictions" target="_blank">notebook</a> (sketched at the end of this post)</li> </ul> <p>I guess my solution lacked a voting system or a more careful look at the samples predicted to be in the medal zone (top 100).</p> <p>Thanks to everyone who competed and shared works and ideas!</p>
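<p>A quick illustration of the RDKit normalization step mentioned above - a minimal sketch, not the referenced notebook itself: the prediction is round-tripped through RDKit so that equivalent InChIs collapse to one canonical form.</p> <pre><code>from rdkit import Chem, RDLogger

RDLogger.DisableLog('rdApp.*')  # silence parser warnings

def normalize_inchi(inchi):
    # parse the predicted InChI and re-serialize it;
    # keep the raw prediction when parsing fails
    mol = Chem.MolFromInchi(inchi)
    return inchi if mol is None else Chem.MolToInchi(mol)
</code></pre>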
Bristol-Myers Squibb – Molecular Translation
12th place...
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/morizin" target="_blank">@morizin</a> </p> <p>Hi <a href="https://www.kaggle.com/antonymipt" target="_blank">@antonymipt</a>. Details for that one were as follows:</p> <ul> <li>Encoder backbone: resnest200e (timm)</li> <li>Optimizer: Adam</li> <li>Scheduler: cosine annealing</li> <li>Input size: 320 x 320</li> <li>Batch size: 32</li> <li>LR: 15e-5</li> <li>Attention dim: 512</li> <li>Embedding dim: 512</li> <li>Decoder dim: 2048</li> <li>mixed precision</li> <li># epochs: 20, each epoch contained all of the training data + additional randomly generated samples from <code>extra_approved_InChIs.csv</code>. Augmentation: Coarse dropout, random rotations</li> <li>5 additional epochs on pseudo-labelled test data with a lower LR (5e-5)</li> </ul> <p>The choice of backbone was the biggest differentiator here. I experimented with many (efficientnets, efficientnet v2, different sizes of resnext, resnest, also different attention/embedding/decoder dims). Resnest200 took a long time to train, but the additional capacity meant it could achieve higher accuracy in the end.</p> <p>Thank you!</p> <p>Cheers. I'm going to go sleep for a week. Might have the capacity for a brighter outlook on it after that!</p> <p>Thanks <a href="https://www.kaggle.com/bamps53" target="_blank">@bamps53</a>. Congrats to you guys, great result. Fingers crossed we get a fair result, whatever that is.</p> <p>Thanks <a href="https://www.kaggle.com/tanulsingh077" target="_blank">@tanulsingh077</a>. Details for ViT training are as follows:</p> <p><strong>Model details:</strong></p> <p>encoder backbone = timm <code>vit_deit_base_patch16_384</code> pretrained<br> transformer decoder = <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> TransformerDecode class. 8 attention heads; 6 layers; vocab size 300<br> input size = (384, 384)<br> optimizer = Adam<br> lr = 15e-5<br> batch size = 24<br> lr scheduler = cosine annealing</p> <p><strong>Training:</strong></p> <p>Mixed precision for 21 epochs (originally attempting 40 epochs). <br> At this point I ran into NaN losses.<br> Switched to full precision for 10 epochs, lowered batch size to 18.<br> Calculated pseudo labels for the test set based on an ensemble of predictions from this + other models available at this point.<br> Trained for a further 5 epochs on the test set with a low lr.</p> <p>Things got interesting at this point. I still had time left, but not enough to train other models from scratch. The best option was to continue to train the ViT model. I decided to continue with the non-pseudo-labelled version (the one that had been trained for 21 + 10 epochs). It went as follows:</p> <p>Train for 2 epochs. Power outage for a whole day. Attempt to train for another 8 epochs. Machine crash after epoch 2 (I assume load-related… At this point the whole thing looked like it was about to lift off). Train for another 3 epochs before time ran out.</p> <p>Uuuuff. Kaggle is tough! The last two months have felt like a really hard slog. 
I was trying to go for a solo gold but fell just short - congratulations to all the winners, and everyone in the medal zone, it was a pleasure to compete with you!</p> <p>Some recognition before I go into solution details: <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> - delighted you made it to the gold zone. My strongest models were based on the solutions which you so generously opensourced early on. I learnt a huge amount from team mates in the last competition that I entered (@christofhenkel, <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>, <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>, <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>) - I don't think I would have done as well had it not been for that. Cheers.</p> <p><strong>Solution strategy:</strong></p> <p>Given the size of both the dataset and the highest performing models, it became clear early on that I would need to make a strategy call: train multiple smaller models and ensemble them, or train a single large model and try to push the limit of what that could do. </p> <p>I felt that this competition was similar to <a href="https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles" target="_blank">Lyft</a> in terms of data size. My main learning from that competition was that patience can be a virtue: there is value in simply training a model for far longer than you think you should. Improvements are slow, but they are steady. </p> <p>To this end, I focussed on a single VIT transformer (based on <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>, 384 x 384 inputs). My best single checkpoint ultimately made it to 0.67 LB. Each epoch took c. 24 hours (!!!). Ensembled they reached 0.63.</p> <p>When experimenting early on I trained a LSTM+attention model based on <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> 's code with a resnest200 backbone. This reached 1.11 LB so I included it in my final ensemble for a small amount of diversification. </p> <p><strong>Data:</strong></p> <p>Both models were fit using all of the data provided, plus an additional 10% samples per epoch of randomly generated images from <code>extra_approved_InChIs.csv</code>. I based the image generation on <a href="https://www.kaggle.com/stainsby" target="_blank">@stainsby</a> 's <a href="https://www.kaggle.com/stainsby/improved-synthetic-data-for-bms-competition-v3" target="_blank">notebook</a>. Augmentation comprised random rotation (plus coarse dropout in the case of the LSTM model). The final ensemble set also included models that had been fit on pseudo-labelled test data.</p> <p><strong>Ensembling strategy:</strong></p> <p>Throughout the competition I had been using <a href="https://www.kaggle.com/nofreewill" target="_blank">@nofreewill</a>'s InChI normalisation as the basis for ensembling: each prediction was categorized as 1) normalized, where normalized result = prediction; 2) normalized, with modifications; 3) normalization fails. </p> <p>For each image in the test set I isolated the predictions with the lowest normalization category. Within this group I used simple voting, with ties decided by model preference (i.e. model were ordered highest preference to lowest preference based on their LB scores).</p> <p>I ensembled 7 checkpoints for the VIT model + 2 for the Resnest200/LSTM model. 
Each sample was computed twice per checkpoint: once in its original form, once flipped L-&gt;R (a minimal sketch of the category-then-vote ensembling appears at the end of this post).</p> <p>Congrats, all.</p> <p>Well, fair play for continuing to make solo attempts after that. I'm not sure whether I have the capacity for another one! Life-ex-kaggle circumstances make it pretty difficult…</p> <p>Cheers, Rob</p>
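<p>For readers who want to reproduce the ensembling strategy described above, here is a minimal sketch (my simplified reconstruction, not the author's code): each prediction gets a normalization category, and voting happens within the best category with ties broken by model preference.</p> <pre><code>from collections import Counter
from rdkit import Chem, RDLogger

RDLogger.DisableLog('rdApp.*')

def norm_category(inchi):
    # 1 = normalized, unchanged; 2 = normalized with modifications; 3 = normalization fails
    mol = Chem.MolFromInchi(inchi)
    if mol is None:
        return 3
    return 1 if Chem.MolToInchi(mol) == inchi else 2

def ensemble(preds):
    # preds: one InChI per checkpoint, ordered by model preference (best LB first)
    best_cat = min(norm_category(p) for p in preds)
    pool = [p for p in preds if norm_category(p) == best_cat]
    top_votes = Counter(pool).most_common(1)[0][1]
    # simple voting; ties fall to the earliest (most preferred) model
    return next(p for p in pool if pool.count(p) == top_votes)
</code></pre>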
Bristol-Myers Squibb – Molecular Translation
13th Place Team (a story from a Kaggle newbie's view)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Boost of Merge Algorithms</strong> section is updated.</p> <p>Multi Merge algorithm, first draft…<br> <img src="https://i.imgur.com/o4Bq1XP.jpg" alt=""></p> <p>What I see is that TPU memory allows you to use very large batch sizes like 128*8. I used <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a>'s reference notebook for TPU experiments. It took around 30 mins for each epoch, so I could try different ideas within a day. </p> <p>Thanks! Yes, we used Kaggle and Colab for our TPU experiments. </p> <p>The story has been completed. It was an enjoyable but tiring experience for me.<br> I hope you also enjoyed reading!</p> <p>Here is the final image, a small part of our merge experiments! <br> <img src="https://i.imgur.com/fMlfEWn.png" alt=""></p> <p><strong>Preparation</strong> section is updated. </p> <p>Preliminary research on InChI and some ideas…<br> <img src="https://i.imgur.com/MrDswM7.jpg" alt=""></p> <p>Hello all,</p> <p>It was an amazing competition. I would like to thank my team members ( <a href="https://www.kaggle.com/eakdag" target="_blank">@eakdag</a>, <a href="https://www.kaggle.com/ubique" target="_blank">@ubique</a>, <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a>, <a href="https://www.kaggle.com/proletheus" target="_blank">@proletheus</a> ) for their great effort. We missed the gold medal at the last moment, but I learned a lot and made nice friends.</p> <p>As a Kaggle newbie, I will explain the story of our team from my view. It could be a bit long, so I divided it into sections and I will fill them in over time.</p> <h3><strong>[Introduction]</strong></h3> <p>Erkut and I were friends from high school (MFL) in Konya/Turkey. After 12 years we met in Eindhoven/Netherlands by chance. We are both pursuing a PhD at TU/e. I am working on cheminformatics applications and Erkut is working on computer vision. We had discussed trying a Kaggle competition together 2 weeks before the BMS competition started. When this competition popped up, it was an amazing opportunity :D. So we set up the "MFL Eindhoven" team.</p> <h3><strong>[Preparation]</strong></h3> <p>I had no prior experience with computer vision except playing with some toy datasets at the beginning of the competition. I took a quick <a href="https://www.kaggle.com/learn/computer-vision" target="_blank">computer vision course</a> from Kaggle. Then I developed <a href="https://www.kaggle.com/c/bms-molecular-translation/discussion/229168#1255349" target="_blank">my first kernel which detects the stereochemical layer</a> in the image. </p> <p>At the same time, I analyzed InChI and listed possible solution ideas. (see the picture of my early research in the comments)</p> <h3><strong>[Training First Models]</strong></h3> <p>First, we started image captioning with <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>'s famous <a href="https://www.kaggle.com/yasufuminakama/inchi-resnet-lstm-with-attention-starter" target="_blank">ResNet+LSTM notebook</a>. However, we had limited resources and training was super slow. 
We wanted to make many different experiments, so we needed a faster way. (<strong>LB:7.06</strong>)</p> <p><a href="https://www.kaggle.com/markwijkhuizen/tensorflow-tpu-training-baseline-lb-16-92" target="_blank">Mark's TPU notebook</a>: <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a> shared an amazing end-to-end TPU notebook. We used eff_B1+LSTM for training our models. Each epoch took around 30 min to 1 hour, which allowed us to make the following experiments:</p> <ul> <li>Larger image size</li> <li>Different tokenizer </li> <li>Train with only longer sequences<br> (see comments for the list of other possible improvements)</li> </ul> <p>Now we had several models (<strong>LB: between 4.0 and 7.0</strong>) and it was time to ensemble them, which would boost our score. <br> <img src="https://i.imgur.com/Hovv3C0.png" alt=""><br> PS: Special thanks to <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a>! These experiments would not have been possible without your sharing.</p> <h3><strong>[Boost of Merge Algorithms]</strong></h3> <p><strong>Gold Rush</strong><br> Our first merge algorithm was "Gold Rush". It simply merges two submissions by selecting valid molecules detected by RDKit (a minimal sketch appears at the end of this post). When we applied Gold Rush to our submissions from different models, our score improved to <strong>LB: ~3.50</strong>.<br> <img src="https://i.imgur.com/UX4r21n.png" alt=""></p> <p><strong>The Collector of Lost Souls</strong><br> After we merged all submissions, there were still around 100k invalid molecules. In order to find them, we developed an algorithm that runs inference from each epoch on only the invalid molecules. Using this method, we collected 20k more valid molecules and our score improved to <strong>LB: ~3.30</strong>.</p> <p>During that time <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> shared his <a href="https://www.kaggle.com/c/bms-molecular-translation/discussion/231190" target="_blank">transformer and the inference script</a>. We inferred two submissions (<strong>LB:3.10 and 3.11</strong>). Then we also included these submissions in Gold Rush and got a big improvement to <strong>LB:2.30</strong>. </p> <p><strong>Multi Merge</strong><br> After this point, I started to analyze my validation data. I found that the avg error of valid molecules was 0.8. So even if we found all valid molecules, 0.8 was the top score we could reach. Then I started to compare different submissions and found different valid InChIs for the same molecules. It meant that valid molecules also needed to be validated :D. So I developed a new algorithm based on voting and minimum distance. That really worked, and our score improved to <strong>LB: ~2.00</strong>.<br> <img src="https://i.imgur.com/5nHXrdf.png" alt=""><br> Now we had a great merge algorithm and we were in a good LB position, so it was a good time to develop a team-up strategy…</p> <h3><strong>[Team Up Strategy]</strong></h3> <p>Our algorithm boosted 3.xx models to the 2.00 level, so if we found new team members with high-scored models we could play for top positions. Our algorithm benefits more from diverse models, so we always took this into account. We decided to go up to 5 members. After each member joined, our score would increase and we would have a better chance to merge with higher LBs. So we decided to go round by round and started to invite other top LBs. 
</p> <p><strong>PS:</strong> There were some suspicious accounts at the top positions; we decided not to invite them.</p> <h3><strong>[Boost of New Members]</strong></h3> <p>In the first round, <a href="https://www.kaggle.com/ubique" target="_blank">@ubique</a> accepted our invitation. He had a SMILES model trained with ResNest (<strong>LB:2.03</strong>). When we added his models to our algorithm we reached (<strong>LB:1.16</strong>).</p> <p>Before the second round, <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> contacted us for a possible merge. He had just finished another competition and recently joined this one. He had an InChI Transformer model (<strong>LB:3.71</strong>) which predicts each layer separately. We thought that his experience would be very useful for the team and that he would probably improve his model quickly (which came exactly true). Since his InChI layers were ensembled from separate models, the valid ones were very reliable. His model boosted us to (<strong>LB:0.99</strong>).</p> <p>We reached that level using relatively weak models. We needed a locomotive single model (preferably InChI). In the last round, <a href="https://www.kaggle.com/proletheus" target="_blank">@proletheus</a> joined our team, and we found our locomotive (<strong>LB:1.42</strong> Transformer InChI). By including his model in multi-merge, we reached (<strong>LB:0.78</strong>). </p> <h3><strong>[Developed Models]</strong></h3> <p>After that point, every day we worked hard and improved our models. In the end, our merged score reached (<strong>LB:0.62</strong>). The main models we developed are listed with scores (before pseudo-labeling) below.</p> <ul> <li>InChI / Transformer / <a href="https://www.kaggle.com/proletheus" target="_blank">@proletheus</a> / <strong>LB: 1.37</strong></li> <li>InChI (layered) / Transformer / <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> / <strong>LB: 2.01</strong></li> <li>InChI / eff_B1+LSTM / <a href="https://www.kaggle.com/sorkun" target="_blank">@sorkun</a> &amp; <a href="https://www.kaggle.com/eakdag" target="_blank">@eakdag</a> / <strong>LB: 3.89</strong></li> <li>SMILES / eff_B1+LSTM / <a href="https://www.kaggle.com/sorkun" target="_blank">@sorkun</a> &amp; <a href="https://www.kaggle.com/eakdag" target="_blank">@eakdag</a>/ <strong>LB: 1.89</strong></li> <li>SMILES / ResNest+GRU / <a href="https://www.kaggle.com/ubique" target="_blank">@ubique</a> / <strong>LB: 2.03</strong></li> <li>SELFIES / Transformer / <a href="https://www.kaggle.com/proletheus" target="_blank">@proletheus</a> / <strong>LB: 2.52</strong></li> <li>SELFIES / eff_B1+LSTM / <a href="https://www.kaggle.com/sorkun" target="_blank">@sorkun</a> &amp; <a href="https://www.kaggle.com/eakdag" target="_blank">@eakdag</a> / <strong>LB: 3.74</strong></li> </ul> <h3><strong>[Applied Methods]</strong></h3> <ul> <li>Molecular Reps (InChI, SMILES, SELFIES)</li> <li>OHEM</li> <li>Pseudo-labeling (improved single models but did not improve the merged score)</li> </ul> <p><strong>Training First Models</strong> section is updated.</p> <p>More ideas about possible improvements…<br> <img src="https://i.imgur.com/4naXMsM.png" alt=""></p>
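<p>A minimal sketch of the Gold Rush merge described above (my reconstruction, not the team's code; the <code>image_id</code>/<code>InChI</code> columns follow the competition submission format): keep the first submission's prediction wherever RDKit accepts it, otherwise fall back to the second.</p> <pre><code>import pandas as pd
from rdkit import Chem, RDLogger

RDLogger.DisableLog('rdApp.*')

def gold_rush(sub_a, sub_b):
    # sub_a, sub_b: submission DataFrames with image_id / InChI columns
    merged = sub_a.merge(sub_b, on='image_id', suffixes=('_a', '_b'))
    valid_a = merged['InChI_a'].map(lambda s: Chem.MolFromInchi(s) is not None)
    # keep sub_a's molecule where it is valid, otherwise take sub_b's
    merged['InChI'] = merged['InChI_a'].where(valid_a, merged['InChI_b'])
    return merged[['image_id', 'InChI']]
</code></pre>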
Bristol-Myers Squibb – Molecular Translation
15th adversarial validation etc
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to say thanks to Kaggle and the organizers for hosting this exciting competition. Congrats to the winners of this tough competition! </p> <h2>Findings</h2> <p>I tried adversarial validation in the early stage. Here is the result.</p> <p><img src="https://ucaa060899a3c8e0931bf9d91ade.previews.dropboxusercontent.com/p/thumb/ABLhfY6YZmPuwcl_I1J-i3lYgYixcNXnWlOyDMBIigA4YS88S8InmWe-QhG8Y2MoClDpnXxH96ia6Cy38-rarws-WL8pB59Q_DYay85J19n8R-jp-ucGx5cnkI4rMpbyGcmSO3aQGaPHoxMrdtEQuDchv_nJO8MoZRDJYXADTLCbDcptwZ806rIePExIe6-klx1th-v3kSfz267TdsYhG-YeJziUYy8FzTXr9xVn7gvaEDu40-REAbdxA_CqnyulEUq-8OS8kzBXv3rUs3kFqP9exr3AZ_0LRmFrJsN0Adbj4DJnBm_WUuZPWoXITU2OoG4E7sh1fekri7_ei0DSctsjzIuC_pJkOL7XouqLmmUd2NtWBflkqG5B6JLwK12yGzthW35ZOKL8R2hGZMJwezHG/p.png?fv_content=true&amp;size_mode=5" alt="adv val"></p> <p>This figure by itself tells how this test set was made. The test set can be split into two subsets. I call them:</p> <ul> <li>Test subset A (left samples in the figure, which are not similar to the train set)</li> <li>Test subset B (right samples in the figure, which are similar to the train set)</li> </ul> <p>Then, I was pretty sure that pseudo labeling would work well for the test set. I also thought it was necessary to create models which fit both subsets well. It's possible to estimate how well the current model fits each sample if you erase all half predictions.</p> <h2>Training and modeling</h2> <p>I used 3 models: Deit tiny, small and base.</p> <p>My model mainly predicts SMILES instead of InChI. SMILES is more compact than InChI, which helps reduce training time. SMILES is converted to InChI before submission. I also created a model to predict InChI, but it's a fallback for when the SMILES is invalid. In summary, I predicted InChI in 3 ways. The priority order is as follows.</p> <ul> <li>norm inchi from predicted smiles</li> <li>norm inchi from predicted inchi</li> <li>raw predicted inchi</li> </ul> <p>Training consists of 3 steps.</p> <ol> <li>Pre-train only the decoder using randomized SMILES canonicalization.</li> <li>Combine Deit with the decoder trained in step 1 and fine-tune on extra synthesized images (size 224).</li> <li>Fine-tune with the train set (only hard samples) and pseudo labels with image size=384.</li> </ol> <p>At step 1, I train the transformer to predict canonicalized InChI and SMILES from randomized SMILES. Next, the self-attention and heads of the decoder were transferred and combined with Deit, so only the cross-attention of the decoder is trained from scratch. 
As I don't have so many GPUs, 224 images are used at this step.</p> <p>Finally, I have trained 6 models at step 3.</p> <ul> <li>a1) Deit tiny trained by pseudo labels from Test subset A + train set (only hard samples).</li> <li>a2) Deit small trained by pseudo labels from Test subset A + train set (only hard samples).</li> <li>a3) Deit base trained by pseudo labels from Test subset A + train set (only hard samples).</li> <li>b1) Deit tiny trained by pseudo labels from Test subset B + train set (only hard samples).</li> <li>b2) Deit small trained by pseudo labels from Test subset B + train set (only hard samples).</li> <li>b3) Deit base trained by pseudo labels from Test subset B + train set (only hard samples).</li> </ul> <p>Position embedding was expanded for image size 384 by interpolation at step 3.</p> <h2>Inference</h2> <p>I used TTA (rot90) and logits averaging per token.</p> <p>For Test subset A, I used averaging weight 9:9:9:1:1:1 for models a1, a2, a3, b1, b2 and b3. On the other hand, for Test subset B, I used weight 1:1:1:9:9:9. It ends up with 0.64 on LB.</p>
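<p>A minimal sketch of the subset-dependent logit averaging above (assumed shapes; not the author's code): at each decoding step the six models' next-token logits are combined with the 9:9:9:1:1:1 or 1:1:1:9:9:9 weights.</p> <pre><code>import torch

def weighted_logits(logits_per_model, weights):
    # logits_per_model: one [batch, vocab] tensor per model at the current step
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()
    return sum(wi * li for wi, li in zip(w, logits_per_model))

# order: a1, a2, a3, b1, b2, b3
weights_subset_a = [9, 9, 9, 1, 1, 1]  # images far from the train distribution
weights_subset_b = [1, 1, 1, 9, 9, 9]  # images similar to the train set
</code></pre>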
Bristol-Myers Squibb – Molecular Translation
18th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! :))</p> <p>Did you use <code>cascading and agreement</code> even before my mad discussion about the importance of validation?</p> <p>I read somewhere in the discussions that FP32 fine-tuning helped for someone. I didn't check it, but it would make sense if it helps a little, so I just went with it.</p> <p>Indeed it was enjoyable!! :)</p> <p>You joined kaggle 4 years ago, yet this is your very first comment? How? :D</p> <p>The 0.11 LD is over the whole submission (all 1.6M molecules).<br> ~0.43 was the LD on the 390k molecules.</p> <p>That 0.11 difference means that the 0.71 LB submission is better by 0.05 characters but also worse by 0.06 other characters than the 0.70 LB submission.</p> <p><strong>Output:</strong><br> I used 4096 byte pair encodings created from plain InChI text with <a href="https://github.com/google/sentencepiece" target="_blank">sentencepiece</a> (a small tokenizer sketch appears at the end of this post).</p> <p><strong>Model:</strong><br> CNN -&gt; Encoder -&gt; Decoder<br> CNN is a custom model, built up from 5 layers of these:</p> <pre><code>def forward(self, x_in):
    x = self.bnl1(x_in)
    x = self.bnl2(x) + self.bnl4(self.bnl3(x))
    x = torch.cat((x, self.pool(x_in)), dim=1) if self.r else x
    return x
</code></pre> <p>where bnl[1234] is convolution, followed by batch norm and relu.<br> Encoder and Decoder are Transformer-based.</p> <p><strong>Sampling molecules:</strong><br> I removed samples that had no carbon or had no /b layer after /i (due to too few examples).<br> I calculated a weight for each molecule based on its:</p> <ul> <li>complexity: <code>x.count('-') + x.count('-') * x.count('(') + x.count('(')</code></li> <li>atom count: number of all the atoms it's made up of</li> <li>atom rarity: I calculate how many times (M) an atom is in all samples (N). For each molecule I sum up one of {1/M, 1/(N-M)} values for each atom, depending on whether it is in that molecule or not.</li> <li>layer rarity: the same as for atoms</li> </ul> <p>I take these values to a power and add them together with some reasonable coefficient -&gt; I have a final sampling weight for each molecule.</p> <p><strong>Training:</strong><br> Cross entropy loss; Adam; One cycle LR schedule; Mixed Precision + FP32 fine-tuning.<br> Scheduled attention dropout and drophead. I also used <a href="https://github.com/google/sentencepiece" target="_blank">sentencepiece</a>'s BPE-dropout.<br> I've trained 5 models on different aspect ratios. 
Then I also trained each of them on yet a different aspect ratio for a few more epochs.<br> Their individual LDs on my validation set are as follows:</p> <table> <thead> <tr> <th>Image size</th> <th>Raw LD</th> <th>Normalized LD</th> </tr> </thead> <tbody> <tr> <td>(160,224)</td> <td>1.5233</td> <td>1.3661</td> </tr> <tr> <td>(192,192)</td> <td>1.5275</td> <td>1.3879</td> </tr> <tr> <td>(256,544)</td> <td>1.0955</td> <td>0.9438</td> </tr> <tr> <td>(224,640)</td> <td>1.1850</td> <td>1.0244</td> </tr> <tr> <td>(320,480)</td> <td>1.0699</td> <td>0.9248</td> </tr> <tr> <td>(288,512)</td> <td>1.1706</td> <td>1.0205</td> </tr> <tr> <td>(320,480)</td> <td>1.0629</td> <td>0.9280</td> </tr> <tr> <td>(480,320)</td> <td>1.3790</td> <td>1.2254</td> </tr> <tr> <td>(288,512)</td> <td>1.0664</td> <td>0.9415</td> </tr> <tr> <td>(256,576)</td> <td>1.1234</td> <td>0.9923</td> </tr> <tr> <td>(320,448)</td> <td>1.0832</td> <td>0.9400</td> </tr> <tr> <td>(384,384)</td> <td>1.4073</td> <td>1.2318</td> </tr> <tr> <td>(352,416)</td> <td>1.2095</td> <td>1.0646</td> </tr> <tr> <td>(192,768)</td> <td>2.1064</td> <td>1.9285</td> </tr> </tbody> </table> <p><strong>Prediction:</strong><br> I've predicted all molecules with all models separately and normalized all of the predictions.<br> Then I searched for those that have the same normalized form.<br> I've predicted the rest of them (~390k) with ensembled beam search of size 16 and searched for those that are not valid predictions. These (~16K) I predicted with a beam size of 64.<br> Taking all these predictions, I only had ~5k molecules that I had no valid prediction for.<br> I've predicted the 390k molecules with a bunch of other settings (with different beam sizes and ensemble weights), and tried to sample the valid ones with different strategies, but I just couldn't break the 0.70 barrier.</p> <p>Very interestingly, I had a lot of submissions that had quite an LD between them but all of them were 0.70 on LB. I even had two submissions with 0.11 LD between them, and they're still 0.70 and 0.71 on LB.<br> I couldn't figure out if there could be any potential in this "phenomenon" with which I could make better predictions.</p> <p><strong>Didn't work:</strong></p> <ul> <li>Label smoothing</li> <li>Additional targets on encoder output</li> <li>Encoder level mixup (decoder works on the two targets individually from the input to the output)</li> <li>Sorting molecules by InChI BPE length for decoder speedup. I forgot to randomize, lost a whole week trying to figure out what went wrong. It felt incredible, as this happened right after I bought a 3090… :DDD</li> </ul> <p><strong>Thank you all:</strong><br> This was a very interesting problem to work on, and it was a huge pleasure to compete with all of you!<br> Even though I didn't end up in the gold zone, I don't regret spending that much money on a 3090, as so many worked so hard on this competition, and I'm more than happy that I could reach 18th place among so many talented people!</p> <p><strong>Special thanks to:</strong><br> <a href="https://www.kaggle.com/stassl" target="_blank">@stassl</a> and <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> for the idea of normalization, and<br> <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for the importance of validation</p> <p>Thank you!</p>
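<p>For reference, a minimal sketch of building the 4096-token BPE vocabulary with sentencepiece, as used above (the file names are hypothetical; one InChI per line):</p> <pre><code>import sentencepiece as spm

# train 4096 byte-pair encodings on plain InChI text
spm.SentencePieceTrainer.train(
    input='train_inchis.txt',
    model_prefix='inchi_bpe',
    vocab_size=4096,
    model_type='bpe',
)

sp = spm.SentencePieceProcessor(model_file='inchi_bpe.model')
ids = sp.encode('InChI=1S/CH4/h1H4', out_type=int)
# BPE-dropout at training time is a one-liner:
noisy_ids = sp.encode('InChI=1S/CH4/h1H4', out_type=int,
                      enable_sampling=True, alpha=0.1)
</code></pre>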
Bristol-Myers Squibb – Molecular Translation
24th place (using molecular graph and atom coordinates)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, you are correct: I just drew it manually for demonstration. Images are fed into the network without any modification; the order is just used to sort network outputs.</p> <p>Thanks )</p> <p>It is not easy to navigate my code after several months 😜, but I believe this should be the correct version:</p> <pre><code>mol_indigo = indigo.loadMolecule(inchi)

# ask indigo to calculate 2d coords
mol_indigo.layout()

# serialize mol to molfile with 2d coords assigned and then load it with rdkit,
# it is simplest way to pass coords from indigo to rdkit
mol = Chem.MolFromMolBlock(mol_indigo.molfile(), removeHs=False, sanitize=False)
Chem.Kekulize(mol)

# next few lines are responsible for correct stereo-bonds assignment based on 2d coords
# so if you are interested in renumbering only, you can ignore them
for b in mol.GetBonds():
    if b.GetBondDir() in (Chem.BondDir.BEGINDASH, Chem.BondDir.BEGINWEDGE):
        b.SetBondDir(Chem.BondDir.NONE)
conf = mol.GetConformer()
Chem.WedgeMolBonds(mol, mol.GetConformer())

coord_x = []
coord_y = []
for i in range(mol.GetNumAtoms()):
    xyz = conf.GetAtomPosition(i)
    coord_x.append(xyz[0])
    coord_y.append(xyz[1])
coord_x = np.array(coord_x)
coord_y = np.array(coord_y)

coords = np.array([coord_x, coord_y]).T
COORDS_SCALE = 10
coords = (2 * (coords - coords.min(0)) / (
    coords.max(0) - coords.min(0)) * COORDS_SCALE - COORDS_SCALE)
coords[np.isnan(coords)] = 0

order = np.lexsort((-coord_y, coord_x))
mol = Chem.RenumberAtoms(mol, order.tolist())

# now atoms will be iterated in (x, y) order
for a in mol.GetAtoms():
    # ...
</code></pre> <p>But maybe if you don't use rdkit, or you don't need anything except atom 2d coordinates, you can use epam.indigo directly to get the coords and then compute the atom order based on them.</p> <p>After I wrote this post, I started to think about whether it actually helps to renumber atoms based on 2D coordinates. Intuitively, from a human perspective, it seems reasonable - you just look at the image and assign numbers from left to right, top to bottom; even for rather large molecules it shouldn't be very difficult, while the algorithm described in the InChI specification requires quite complicated calculations. But intuition might be misleading sometimes - some tasks that are easy for humans might be hard for a NN and vice versa.</p> <p>To validate this empirically, I conducted a quick experiment, training the same model with the same parameters for one epoch - the only difference was in a few lines responsible for atom reordering.</p> <p><img src="https://user-images.githubusercontent.com/4602302/121061194-374d4a00-c7cc-11eb-87db-47dc6367b7de.png" alt="image"></p> <p>The first plot is the percentage of exact InChI matches, the second - Levenshtein distance. You can see that, indeed, simplified atom numbering (🔴 red curve) makes the task much easier than when atoms are numbered according to the InChI spec (🔵 blue curve), and training goes much faster.</p> <p>There is no intrinsic order in a molecule (nor in the image) - it is just a graph with unordered sets of nodes and edges. So when we force the NN to learn some non-trivial order of atoms, we make the task harder than required. 
RDKit is able to calculate atom numbers according to the InChI spec perfectly, so why not outsource this task to it and make things simpler for the NN.</p> <p>Another experiment compares the performance of two models: one that outputs the InChI as a string directly (🟢 green curve) and another that outputs the mol graph (🔴 red curve). The result is ambiguous. On one side, comparing the percentage of exact matches, the model based on the mol graph beats the model based on the InChI string by a large margin (75% vs 35% after one epoch), even without simplified atom numbering (🔵 blue curve) - (55% vs 35%). On the other side, the model that outputs the InChI string directly has lower LD, even if it doesn't predict the result exactly. So in order to achieve a better score, you should probably combine both.</p> <p>Since reading the DACON solutions, especially the one that used object detection to find atom and bond coordinates and then rebuilt the molecule, I had been thinking about what the best molecule representation would be that a NN can learn more naturally. </p> <p>Directly predicting InChI is the simplest solution, as it requires minimal post-processing and can result in lower LD even if the predicted string is not a correct InChI. But it has some disadvantages as well:</p> <ul> <li>it is long</li> <li>it has pretty complicated atom numbering (and is very sensitive to it)</li> </ul> <p>So I was quite surprised after training that the NN can actually learn to directly predict correct InChI with high accuracy. Even LD &lt; 2 seemed to me a very good result, considering the imperfect image quality.</p> <p>While being able to predict the <em>/c</em> layer very well, I observed that it struggles with identifying stereochemistry (the /t layer), which seemed to be a simpler task. If it can predict the whole sequence of atom connections, why can't it just predict + or -? This was not the biggest source of error - if the other layers are predicted correctly, it contributes only 1-4 LD - but it was still unclear to me.</p> <p>So I was looking for other molecule representations to overcome the mentioned disadvantages. One of them is SMILES, which produces shorter strings; another one is the molecular graph. I chose the latter. It was not exactly what was used in the DACON 1st place solution, which was a fully object-detection approach that predicted both atom and bond coordinates and then identified which atoms are connected based on the geometrical coordinates of atoms and bonds. Instead, in my representation I had to keep atom numbering (though I changed it from InChI numbering to a simpler one based on the 2D coordinates of atoms - from left to right, from top to bottom), and for each atom, besides its properties (like element type, chirality, isotope, xy-coords), I also predicted connections from this atom to other atoms - you can think of it as an adjacency matrix.</p> <p><img src="https://user-images.githubusercontent.com/4602302/120774226-c0dded00-c52a-11eb-9a62-0a997467792d.png" alt="image"></p> <p>To recover the molecule from atom properties and bonds, I used something like this. It is not the full code, but just a simple example of how you can manually build a molecule from scratch. 
The important methods here for correctly detecting stereochemistry are <code>DetectBondStereochemistry</code> and <code>AssignChiralTypesFromBondDirs</code> - they reverse the procedure that happens during rendering, when the coordinates of molecules are calculated: these methods assign E/Z and cis/trans stereo based on the 2D coordinates (x, y) of atoms predicted by the model.</p> <pre><code>m = Chem.RWMol()

# nodes
m.AddAtom(Chem.Atom('C'))
m.AddAtom(Chem.Atom('C'))
m.AddAtom(Chem.Atom('C'))
m.AddAtom(Chem.Atom('C'))
m.AddAtom(Chem.Atom('F'))
m.AddAtom(Chem.Atom('O'))
m.AddAtom(Chem.Atom('Cl'))

# edges
m.AddBond(0, 1, Chem.BondType.SINGLE)
m.AddBond(1, 2, Chem.BondType.DOUBLE)
m.AddBond(2, 3, Chem.BondType.SINGLE)
m.AddBond(3, 4, Chem.BondType.SINGLE)
m.AddBond(3, 5, Chem.BondType.SINGLE)
m.AddBond(3, 6, Chem.BondType.SINGLE)
m.GetBondBetweenAtoms(3, 5).SetBondDir(Chem.BondDir.BEGINWEDGE)

# coords
coords = ((-2, -1), (-1, 0), (1, 0), (2, 1), (2, 0), (3.5, 1), (2, 2))
conf = Chem.Conformer(m.GetNumAtoms())
conf.Set3D(False)
for i, (x, y) in enumerate(coords):
    conf.SetAtomPosition(i, (x, y, 0))
m.AddConformer(conf)

# magic
Chem.SanitizeMol(m)
Chem.DetectBondStereochemistry(m)
Chem.AssignChiralTypesFromBondDirs(m)
Chem.AssignStereochemistry(m)

m.Debug()

opts = Draw.MolDrawOptions()
opts.addAtomIndices = True
opts.addStereoAnnotation = True
Draw.MolToImage(m, options=opts)
</code></pre> <p><img src="https://user-images.githubusercontent.com/4602302/115084791-06f6d700-9f12-11eb-8e07-0aaa5c739b61.png" alt="image"></p> <p>In parallel I was looking for ways to extend the training data with other images that are as close to the original training images as possible. While rdkit was very popular and useful in this competition, its rendering of molecules was not exactly the same. So I was exploring other tools, and finally I managed to find the tool that produced the exact images (except for noise). It was <a href="https://lifescience.opensource.epam.com/indigo/api/index.html" target="_blank">epam.indigo</a> (a minimal rendering sketch appears at the end of this post). I was very happy when I found it. Firstly, it made it possible to use 10M molecules from the extra dataset; secondly, I could use it to calculate the coordinates of atoms in all existing molecules.</p> <p>While I was able to simplify the atom numbering scheme in my representation, which (hopefully) is easier for the NN to learn, I was still not very satisfied with it. Ideally I'd like to get rid of any atom order, but I was not able to achieve that.</p> <p>Original image:<br> <img src="https://user-images.githubusercontent.com/4602302/120806284-58553700-c54f-11eb-83b5-51103c5c0edc.png" alt="image"></p> <p>Rendered by epam.indigo with atom numbers:<br> <img src="https://user-images.githubusercontent.com/4602302/120806360-6f942480-c54f-11eb-853e-d47f87e44dcd.png" alt="image"></p> <p>Rendered by epam.indigo with 2D (x, y) atom numbering:<br> <img src="https://user-images.githubusercontent.com/4602302/120806428-820e5e00-c54f-11eb-922b-c24eab3c3865.png" alt="image"></p> <p>What I was able to achieve with this representation is high accuracy in detecting stereo information: going from almost 50% accuracy (for each stereocenter) when predicting the InChI string directly to almost 100% accuracy when recovering it with rdkit from the 2D coordinates. 
So I was able to achieve 95% exact InChI (LD=0) predictions on the validation set, while predicting InChI as a string I had somewhere below 90%, with ~5% mismatches of only 1-4 characters in the stereo layer.</p> <p>One major disadvantage of this approach is that sometimes it can predict broken molecules with some bonds missing in the middle. It happened for &lt;1% of molecules, but it produced very large LD, so in order to overcome this, I still had to use another model that predicted InChI as a string. </p> <p>So my final submission (public LB = 0.88) was an ensemble of 2 models:</p> <ul> <li>model1: 384x384-&gt;16x16x256, effnet_b3 encoder, 6 layer transformer decoder (dim 256), inchi as string</li> <li>model2: 512x512-&gt;16x16x256, effnet_b0 encoder, 6 layer transformer decoder (dim 256), molecular graph representation</li> </ul> <p>In my submission only 4% of the predicted InChIs were from model1; the other 96% were from the pretty slim model2 (4M parameters in the encoder, 5M in the decoder). For all my training I used only a single Google Colab notebook. I didn't use beam search during inference.</p>
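<p>Since epam.indigo was such a useful find, here is a minimal rendering sketch - my own illustration based on Indigo's documented Python API, not the competition code; the InChI (ethanol) and file name are just examples, and exact option names may vary by version:</p> <pre><code>from indigo import Indigo
from indigo.inchi import IndigoInchi
from indigo.renderer import IndigoRenderer

indigo = Indigo()
indigo_inchi = IndigoInchi(indigo)
renderer = IndigoRenderer(indigo)

mol = indigo_inchi.loadMolecule('InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3')
mol.layout()  # compute 2D coordinates, as in the snippet above
indigo.setOption('render-output-format', 'png')
renderer.renderToFile(mol, 'molecule.png')
</code></pre>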
Bristol-Myers Squibb – Molecular Translation
27th place key highlights in 60 seconds
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ul> <li>Architecture - Combination of:<ul> <li>VIT encoder with 8 transformer decoder layers. I felt on edge about having no inductive bias, so I used 2x 3x3 convolutional kernels as input adaptors to the VIT (inconclusive whether it helped - doubt it). <strong>Edit</strong> Forgot to mention the important bit - the second one was stride = 2, thereby allowing me to do "intelligent" downsampling from larger input images than normal hardware constraints would permit.</li> <li>TNT encoder with 8 transformer decoder layers.</li></ul></li> <li>Used selective patches (thank you <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for the idea), making sure to only feed relevant patches to the encoder. This significantly sped up training.</li> <li>Salt and pepper (especially pepper) augmentation for the win! Check <a href="https://www.kaggle.com/c/bms-molecular-translation/discussion/243932" target="_blank">this team's discussion</a>.</li> <li><a href="https://www.kaggle.com/nofreewill" target="_blank">@nofreewill</a> normalization script, but with cascading and agreement (a minimal sketch follows at the end of this post):<ul> <li>Cascading = if the inchi was invalid, try the next best model's inchi. Huge improvement!</li> <li>Agreement = if the top model's inchi is different from the 2nd, and 2nd == 3rd == 4th, then use the latter. Tiny improvement.</li></ul></li> <li>As <a href="https://www.kaggle.com/fergusoci" target="_blank">@fergusoci</a> says - be patient and let the model train out for a while.</li> <li>Pseudolabelling training for 2 epochs - tiny improvement.</li> </ul> <p>What didn't work</p> <ul> <li>Kind of wasted my time a bit by splitting the transformer decoder up into two transformers, one for the chemical formula and another for the rest. The inspiration was similar to the one for <a href="https://www.kaggle.com/c/bms-molecular-translation/discussion/243943" target="_blank">noise injection as this team did</a>. Too bad the idea didn't work.</li> </ul> <p>GG</p>
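<p>A minimal sketch of the cascading-and-agreement logic from the list above (my reconstruction under the stated rules, not the author's script):</p> <pre><code>from rdkit import Chem, RDLogger

RDLogger.DisableLog('rdApp.*')

def is_valid(inchi):
    return Chem.MolFromInchi(inchi) is not None

def cascade_and_agree(preds):
    # preds: one InChI per model, best LB model first
    # agreement: if models 2-4 all agree and differ from the top model, trust them
    if len(preds) &gt;= 4 and preds[1] == preds[2] == preds[3] != preds[0]:
        return preds[1]
    # cascading: fall through to the next best model while the InChI is invalid
    return next((p for p in preds if is_valid(p)), preds[0])
</code></pre>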
Bristol-Myers Squibb – Molecular Translation
3rd place solution: brief summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used <a href="https://github.com/huggingface/transformers/blob/27d4639779d2d316a7c5f18d22f22d2565b84e5e/src/transformers/models/bert/modeling_bert.py#L1131" target="_blank">BertLMHeadModel</a> as a decoder.</p> <p>Thanks!<br> I used diagrams.net with google drive:<br> <a href="https://www.diagrams.net/doc/faq/google-drive-diagrams" target="_blank">https://www.diagrams.net/doc/faq/google-drive-diagrams</a></p> <p>First of all, I would like to thank the competition hosts, and all the participants for their hard work. This competition was very challenging, and I learned a lot.<br> I would also like to thank my teammates <a href="https://www.kaggle.com/lyakaap" target="_blank">@lyakaap</a> <a href="https://www.kaggle.com/bamps53" target="_blank">@bamps53</a>. I am convinced that I could not have achieved this result without them.<br> Here I will try to summarize some of the main points of our solution.</p> <h1>Overview</h1> <p>Our solution consists of three phases:</p> <ul> <li>Phase1: image captioning training</li> <li>Phase2: InChI candidates generation</li> <li>Phase3: InChI candidates reranking</li> </ul> <p><img src="https://i.ibb.co/f0PYvhZ/bms-solution-Page-1-2.png" alt=""></p> <h1>Phase1: Image captioning training</h1> <p>One important aspect of this competition was the difference in data trends between the training and test data. At first, I struggled with the difference between CV and LB, but then we noticed that the test data had a higher amount of salt-and-pepper noise. Therefore, we closed the gap between CV and LB by adding salt-and-pepper noise augmentation during training.</p> <p><img src="https://i.ibb.co/j31fQQB/bms-solution-Page-2-3.png" alt=""></p> <h1>Phase2: InChI candidates generation</h1> <p>In phase 3, the likelihood for pairs of an image and an InChI candidate is estimated by multiple models, and the InChI candidate with the highest likelihood is used as the final output. Therefore, in this phase, it is necessary to generate a variety of InChI candidates with good quality.<br> We generated a wide variety of InChI candidates by using various models as shown below and by using beam search:</p> <ul> <li>Swin Transformer + BERT Decoder (by KF &amp; lyakaap)</li> <li>Transformer in Transformer + BERT Decoder (by KF)</li> <li>EfficientNet-v2 followed by ViT + BERT Decoder (by lyakaap)</li> <li>EfficientNet-B4 + Transformer Decoder (image_size:416x736) (by camaro)</li> <li>EfficientNet-B4 + Transformer Encoder/Decoder (image_size:300x600) (by camaro)</li> </ul> <h1>Phase3: InChI candidates reranking</h1> <p>One of the key parts of our solution is reranking. In fact, most of the single model results are around LB0.9~0.8, but we were able to reach the final result (LB0.54) by reranking the generated results of multiple models using the logic shown below.</p> <ul> <li>The rdkit.Chem.MolFromInchi function is used to validate each InChI candidate. (is_valid)</li> <li>For each InChI candidate, calculate the loss (cross entropy / focal loss) used for training in multiple models, and average it across models. 
(loss)</li> <li>Sort the candidates in descending order of "is_valid" → ascending order of "loss", and the InChI ranked first is the final output (a minimal scoring sketch appears at the end of this post).</li> </ul> <h1>What didn't work</h1> <ul> <li>Object detection for atoms and bonds w/ yolov5<ul> <li>Reconstruct inchis like Dacon.ai's 1st place solution</li> <li>Use detection results as additional input channels</li></ul></li> <li>MLM pre-training using extra_approved_InChIs.csv</li> <li>Rank learning of InChI candidates (pointwise)<ul> <li>Perform real/fake classification and Levenshtein distance prediction, using fakes made by beam search as negative examples.</li></ul></li> </ul> <p>Thank you!</p> <p>Thanks!<br> My best model takes a week on an A100 to train for 10 epochs.<br> Since I mainly used an A100 preemptible instance ($0.88/hour) on GCP, it would have cost me about $150 to complete the training of one model 😅</p> <p>Thanks!<br> We knew your team would always come out on top, so we were afraid of your team till the end.<br> Your noise injection idea was awesome. I had failed to train the 12-layer Transformer Decoder (I just used 3 layers), but it could have helped.</p> <p>Thanks!<br> Your starter notebooks are always helpful 😊</p> <p>Thanks for your interest!<br> Competition-specific code can be shared, but since it relies on my private experiment management library (a customized version of Catalyst), it might be difficult to share executable code 😁</p>
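<p>To illustrate the reranking step (phase 3) described in this post, here is a minimal scoring sketch - the <code>model(image, tokens)</code> interface returning <code>[batch, seq, vocab]</code> logits is an assumption for the example, not our actual API:</p> <pre><code>import torch
import torch.nn.functional as F

@torch.no_grad()
def candidate_loss(model, image, token_ids):
    # teacher-force the candidate InChI through a captioning model and
    # return its mean token cross entropy (lower = more likely candidate)
    logits = model(image, token_ids[:, :-1])
    return F.cross_entropy(logits.transpose(1, 2), token_ids[:, 1:]).item()

def rerank(candidates):
    # candidates: dicts {'inchi', 'is_valid', 'loss'}, where 'loss' is
    # already averaged across models; valid first, then lowest loss
    return sorted(candidates, key=lambda c: (not c['is_valid'], c['loss']))[0]['inchi']
</code></pre>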
Bristol-Myers Squibb – Molecular Translation
47th Place Solution & Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/talktocharles" target="_blank">@talktocharles</a> great to hear that, hope it will be useful for your future projects. Thanks!</p> <p>Thanks <a href="https://www.kaggle.com/alexlwh" target="_blank">@alexlwh</a>! I am glad you found it helpful.</p> <h3>Summary</h3> <p>First, let me congratulate all the winners and thank the competition hosts - it was a really interesting challenge! It was great to be a part of a competition requiring both CV and NLP skills and having such a large dataset with a very minimal shakeup. </p> <p>The (not so) funny thing is that I was confident that the competition deadline was actually two days later, 05.06. I mistakenly created a calendar entry for the deadline and never checked if it was actually correct. What a surprise it was to check the competition page yesterday and realize that there were just 10 hours before the end! 😅</p> <p>My solution is an ensemble of seven CNN-LSTM Encoder-Decoder models. All models are implemented in PyTorch and trained on a local machine with a Quadro RTX 6000 GPU.</p> <h3>Code</h3> <ul> <li><a href="https://www.kaggle.com/kozodoi/47th-place-solution-bms-ensembling" target="_blank">Kaggle notebook</a> reproducing my submission and ensembling pipeline</li> <li><a href="https://github.com/kozodoi/BMS_Molecular_Translation" target="_blank">GitHub repo</a> with the complete training codes</li> </ul> <h3>Data</h3> <ul> <li>single 80/20 train/test split stratified by the molecule length</li> <li>creating 3M extra images from <code>extra_approved_InChIs</code> using RDKit (thanks <a href="https://www.kaggle.com/tuckerarrants" target="_blank">@tuckerarrants</a>)</li> <li>on each epoch, I took all train images + a random 1M extra images for training</li> </ul> <h3>Tokenizer</h3> <ul> <li>my tokenizer is very similar to the one proposed by <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> in his great pipeline</li> <li>using training + extra data to fit the tokenizer, so it ended up having about 20 more tokens due to more distinct molecules in the data</li> </ul> <h3>Image augmentations</h3> <ul> <li>making lines and letters thicker with cv2 morphological transformations:</li> </ul> <pre><code>image = cv2.morphologyEx(image, cv2.MORPH_OPEN, np.ones((2, 2))) image = cv2.erode(image, np.ones((2, 2))) </code></pre> <ul> <li>small image rotations during training:</li> </ul> <pre><code>ShiftScaleRotate(0.01, 0.01, 0.10) </code></pre> <ul> <li>cropping pictures with large empty borders (thanks <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a>)</li> <li>rotating test images with height &gt; width (thanks <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>; a small sketch appears at the end of this post)<br> <img src="https://i.postimg.cc/t4KqLMNC/inchi.jpg" alt="images"></li> </ul> <h3>Base models</h3> <p>All models used an EfficientNet-based CNN encoder and an LSTM-based RNN decoder. 
The table below from Neptune.ai provides the main architecture and training parameters of my top models:<br> <img src="https://i.postimg.cc/cLrTp1Pc/Screen-2021-06-04-at-10-17-02.jpg" alt="models"></p> <h3>Ensembling &amp; post-processing</h3> <ul> <li>using beam search with k = 5 on most of the base models (thanks <a href="https://www.kaggle.com/tugstugi" target="_blank">@tugstugi</a>)</li> <li>normalizing each model's predictions with RDKit (thanks <a href="https://www.kaggle.com/nofreewill" target="_blank">@nofreewill</a>)</li> <li>ensembling predictions from base models using majority voting</li> <li>setting "unconfident" predictions to the ones produced by the lowest-CV model that has a valid prediction according to RDKit (thanks <a href="https://www.kaggle.com/nofreewill" target="_blank">@nofreewill</a> again)</li> </ul> <h3>What else I planned to try</h3> <ul> <li>adding image meta-data such as pixel resolution and height/width ratio as input features</li> <li>iteratively predicting different parts of the InChI (formula, /c and /h parts) with separate models that would take the image and the predictions of the previous molecule part as input</li> <li>switching to transformers. I only started working on a transformer pipeline a few days ago and did not have enough time to train a heavy model due to limited resources</li> </ul> <p>The final solution achieves <strong>1.32</strong> on the public and <strong>1.31</strong> on the private LB (47th place). I hope this summary was useful for some of you. Happy to answer any questions in the comments and see you in the next competitions! 😊</p>
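<p>As a small illustration of the height &gt; width rotation trick listed above (a sketch, assuming OpenCV images in HWC layout):</p> <pre><code>import cv2

def fix_orientation(image):
    # some test molecules are rendered rotated by 90 degrees;
    # bring portrait images back to landscape before inference
    h, w = image.shape[:2]
    if h &gt; w:
        image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
    return image
</code></pre>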
Bristol-Myers Squibb – Molecular Translation
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, your tokenizer saved me a lot of work</p> <p>Thanks to Kaggle and the competition host for this competition. And also thanks to my teammate <a href="https://www.kaggle.com/haochensui" target="_blank">@haochensui</a>. We spent one month on this competition, and in the first 2 weeks we were doing experiments on 10% of the data. In the last two weeks, we rented several V100s and 3090s and switched to larger image sizes.</p> <p>Our solution is very simple and mainly composed of 4 models, plus some early models for post-processing with rdkit. We use the tokenizer from <a href="https://www.kaggle.com/yasufuminakama/inchi-preprocess-2" target="_blank">https://www.kaggle.com/yasufuminakama/inchi-preprocess-2</a>. Thanks for the amazing work.</p> <h4>Model Structures</h4> <p>CNN -&gt; Encoder -&gt; Decoder.<br> We use resnest101 and efficientnetv2_m as our CNN backbones, and use 6 layers in the transformer encoder and 9 layers in the transformer decoder. We project embeddings from the CNN into 512 channels to match the dimension of the transformer. In the encoder, we inject sinusoid positional embeddings into each layer before key and query, pretty similar to DETR. In the decoder, we use relative positional embeddings as in T5.</p> <h4>Model Training</h4> <p>Because of GPU limits, we did not train large-scale models from scratch. We first train the model on 416x416 images and then increase the image resolution and finetune the model. We reserve 2% of the data, ~48k samples, for validation.</p> <p>We also do pseudo labeling here and find it quite useful. We used a 0.64CV/0.78LB submission (rdkit ensemble of efv2 on 416 and 640) to generate pseudo labels, and selected around 1.3M samples from it. We concatenate these samples with the train samples. </p> <h6>Training order</h6> <p>efficientnetv2_m: 416 (10 epochs) -&gt; 640 (5 epochs), then finetuned from the 640 checkpoint on two sizes: 704 (5 epochs + pseudo labels) and 384x768 (5 epochs + pseudo labels)</p> <p>resnest101: 416 (10 epochs), then finetuned on two sizes: 640 (5 epochs + pseudo labels) and 384x768 (5 epochs + pseudo labels)</p> <h4>Results</h4> <table> <thead> <tr> <th>Model</th> <th>Size</th> <th>CV(no norm)</th> <th>LB(with norm)</th> </tr> </thead> <tbody> <tr> <td>efficientnetv2-m</td> <td>416</td> <td>0.99</td> <td>1.18</td> </tr> <tr> <td>efficientnetv2-m</td> <td>640</td> <td>0.74</td> <td>0.9</td> </tr> <tr> <td>efficientnetv2-m</td> <td>706</td> <td>0.65</td> <td>0.67</td> </tr> <tr> <td>efficientnetv2-m</td> <td>384x768</td> <td>0.68</td> <td>0.67</td> </tr> <tr> <td>resnest101</td> <td>640</td> <td>0.71</td> <td>0.66</td> </tr> <tr> <td>resnest101</td> <td>384x768</td> <td>0.74</td> <td>0.67</td> </tr> </tbody> </table> <h4>Ensemble</h4> <p>We use 4 models with 0.66-0.67 LB to make a step-wise logit ensemble and use a beam search with beam size 3. Instead of choosing the prediction with maximum likelihood, we choose the prediction with rdkit: we take the first beam candidate that rdkit considers valid (a minimal sketch appears at the end of this post). In this way, we get around a 0.02 improvement in CV. The ensemble of 4 models achieves 0.56 on LB. It should have CV around 0.45. 
</p> <p>In addition to that, we merge our early submissions (scores &gt; 0.65) with the 0.56 one according to rdkit and get 0.55.</p> <p>Finally, we select the 3500 invalid predictions in the submission and run a beam search with size 32. This reduces the LB score to 0.54.</p> <p>pip install rdkit-pypi<br> I use this one to install.</p> <p>That's a nice solution, but in my case there are only 0-2 segmentation faults. So I just manually rerun the code when it fails.</p> <p>Oh, the white space is not displayed well here.<br> For efficientnetv2, I train from 416 to 640 and then finetune it on two sizes, 704 and 384x768.<br> For resnest101, I trained 416 and then finetuned on two sizes, 640 and 384x768.</p> <p>But I just change to a new size and tune the model for 5 more epochs. The loss went down very quickly despite the encoding change.<br> Positional encodings on the CNN output have a small effect on the results, since the CNN can learn positional information from zero padding.</p> <p>I use this class to track the scores, sequences and caches of the transformer.</p> <pre><code>class BeamSearcher_Transformer_Ensemble():
    def __init__(self, batch_size, beam_size, sos_idx, eos_idx, device):
        self.beam_size = beam_size
        self.batch_size = batch_size
        self.eos_idx = eos_idx
        self.sos_idx = sos_idx
        self.device = device
        self.current_sequence, self.current_cache = self.init_input(batch_size)
        self.current_scores = torch.zeros([batch_size, beam_size], dtype=torch.float32, device=self.device)
        self.finish_mask = torch.zeros([batch_size, beam_size], dtype=torch.float32, device=self.device).bool()
        self.sentence_length = torch.zeros([batch_size, beam_size], dtype=torch.float32, device=self.device)

    def init_input(self, batch_size):
        # [1, bs, beam_size]
        init_decoded_tokens = torch.LongTensor([self.sos_idx]).to(self.device).reshape(1, 1, 1).repeat(1, batch_size, self.beam_size)
        init_cache = None
        return init_decoded_tokens, init_cache

    def get_input(self):
        # current_sequence is a [T, bs, beam_size] LongTensor
        return self.current_sequence, self.current_cache

    def rearrange_cache(self, beam_group, cache):
        '''
        Args:
            beam_group: [batch_size, beam_size]
            cache: list[batch_size*beam_size, T, hdim]
        Returns:
            list[batch_size*beam_size, T, hdim]
        '''
        new_cache = []
        batch_idx = torch.arange(self.batch_size, device=cache[0].device)
        for c in cache:
            c = c.unflatten(0, [self.batch_size, self.beam_size])
            new_c = torch.stack([c[batch_idx, beam_group[:, x]] for x in range(beam_group.shape[1])], dim=1)
            new_cache.append(new_c.flatten(0, 1))
        return new_cache

    def prepare_next_input(self, predictions, new_cache, first_step):
        '''
        Args:
            predictions: [bs*beam_size, V] logits
            new_cache: list[list[bs*beam_size, T, Hidden]]
            first_step: if True, this is the first step; a sentence is copied n times, so we need to mask out copies
        Returns:
            finished: if True, all sequences are finished
        '''
        V = predictions.shape[-1]
        predictions = predictions.view(self.batch_size, self.beam_size, V)
        loglikelihood = torch.log_softmax(predictions, -1)  # [bs, beam_size, V]
        if first_step:
            loglikelihood[:, 1:] = -float('inf')
        loglikelihood[self.finish_mask] = -float('inf')
        finished = torch.nonzero(self.finish_mask, as_tuple=True)
        loglikelihood[(finished[0], finished[1], torch.ones_like(finished[0]) * self.eos_idx)] = 0.
        new_scores = self.current_scores.unsqueeze(-1) + loglikelihood  # [bs, beam_size, V]
        new_length = self.sentence_length + 1 - self.finish_mask.float()  # [bs, beam_size] describes length of sentence
        normalized_scores = (new_scores / new_length.unsqueeze(-1)).view(self.batch_size, self.beam_size * V)  # [bs, beam_size*V]
        # normalized_scores = (new_scores).view(self.batch_size, self.beam_size * V)  # [bs, beam_size*V]
        _, top_idx = torch.topk(normalized_scores, dim=-1, largest=True, k=self.beam_size)
        beam_group = top_idx // V  # [bs, beam_size]
        token_idx = top_idx % V    # [bs, beam_size] index of new tokens
        # update scores, mask, length (beam_index_select is a small helper that gathers along the beam dim)
        self.sentence_length = beam_index_select(new_length, beam_group)
        self.current_scores = torch.gather(new_scores.view(self.batch_size, self.beam_size * V), dim=1, index=top_idx)
        self.finish_mask = torch.logical_or(beam_index_select(self.finish_mask, beam_group), token_idx == self.eos_idx)
        self.current_sequence = beam_index_select(self.current_sequence.permute(1, 2, 0), beam_group).permute(2, 0, 1)
        self.current_sequence = torch.cat([self.current_sequence, token_idx.unsqueeze(0)], dim=0)
        self.current_cache = [self.rearrange_cache(beam_group, x) for x in new_cache]
        return torch.all(self.finish_mask)
</code></pre>
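<p>As mentioned above, the final prediction was not the maximum-likelihood beam but the first rdkit-valid candidate. A minimal sketch of that selection step, assuming the beam candidates have already been decoded to InChI strings (function names here are illustrative, not the team's actual code):</p> <pre><code>from rdkit import Chem
from rdkit import RDLogger

RDLogger.DisableLog('rdApp.*')  # silence rdkit parse warnings

def is_valid_inchi(inchi):
    # an InChI counts as valid if rdkit can parse it back into a molecule
    return Chem.MolFromInchi(inchi) is not None

def pick_first_valid(beam_candidates):
    # beam_candidates: InChI strings ordered by model likelihood
    for cand in beam_candidates:
        if is_valid_inchi(cand):
            return cand
    return beam_candidates[0]  # fall back to the most likely prediction
</code></pre>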
Bristol-Myers Squibb – Molecular Translation
50th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The source code is available in <a href="https://github.com/affjljoo3581/BMS-Molecular-Translation" target="_blank">the github repository</a>.</p> <p>Hi all. Thanks to Kaggle and the competition host. It was a great experience. We tried hard and we barely held the silver medal 🤣.</p> <p>Our approach is to handle this competition as a translation problem, not image-captioning or OCR. Because the <strong>ViT</strong> model showed that images can be treated like sentences, we focused on the translation task (from <strong>image</strong> language to <strong>InChI</strong> language). Actually, the model structure and training procedures are not that different from other approaches, but the detailed configurations (e.g. hyperparameter tuning) are based on the translation papers.</p> <p>Our basic strategy is to pretrain the transformer model with large-scale images from <code>extra_approved_InChIs.csv</code>, and then finetune the model with the original images.</p> <h2>Model Structure</h2> <p>We use <strong>ViT</strong> as an encoder and <strong>the original transformer</strong> as a decoder. According to <a href="https://arxiv.org/abs/1906.01787" target="_blank">this paper</a>, it is more effective to increase the encoder depth rather than the decoder one. So we make a base version of our model with a 12-layer encoder and 6-layer decoder, and a large version with a 24-layer encoder.</p> <h2>Dataset</h2> <p>As I mentioned above, we create the external dataset from <code>extra_approved_InChIs.csv</code> by synthesizing the molecular images through <strong>rdkit</strong> (a minimal sketch of this synthesis appears at the end of this write-up). The number of original images is about <strong>2.4M</strong> and the number of entire pretraining images is about <strong>12M</strong>. It is comparable to ImageNet-21k, mentioned in the ViT paper. The authors of the ViT paper showed that the performance of the image transformer is highly related to the dataset scale. And the <strong>12M</strong> images are indeed helpful.</p> <h2>Training</h2> <p>We train the base model for about 15 epochs with 224x224 <strong>12M</strong> images. And we finetune with 384x384 <strong>2.4M</strong> images. The CVs are 0.99 and 0.83 and the LBs are 1.82 and 1.55 respectively.</p> <p>And we increase the model size by copying the 12 layers and stacking them onto the end of the model. So the large version of our model has 24 layers. We pretrain the model for 5 epochs and finetune for 15 epochs. It achieves 0.74 CV and 1.44 LB.</p> <p>Basically, we use a single A100 GPU on GCP. To accelerate the training, we use <a href="https://github.com/NVIDIA/apex" target="_blank">apex</a>'s fused layers and O2 amp mode.<br> Thanks to <a href="https://jarvislabs.ai/" target="_blank">jarvislabs.ai</a>, we could use 8xRTX5000s. And interestingly, we found that the multi-GPU environment with DDP (DistributedDataParallel) and <strong>apex amp</strong> does not work correctly. I don't know why it happens, but it is a well-known problem in <strong>PyTorch Lightning</strong>. 
So in the 8xRTX5000 environment, we used native AMP and <a href="https://github.com/microsoft/DeepSpeed" target="_blank">ZeRO</a> to increase the computing performance.</p> <h2>Prediction</h2> <p>We use <strong>greedy search</strong> instead of <strong>beam search</strong>, because the inference time is really long (the large model with 384x384 took about 7 hours).<br> Thanks to <a href="https://www.kaggle.com/nofreewill" target="_blank">@nofreewill</a>, we applied <a href="https://www.kaggle.com/nofreewill/normalize-your-predictions" target="_blank">rdkit normalization</a> and the performance indeed improved to 1.37.</p> <p>Lastly, special thanks to <a href="https://jarvislabs.ai/" target="_blank">jarvislabs.ai</a> for supporting us with some credits 👍👍</p>
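<p>A minimal sketch of the rdkit-based image synthesis described above, assuming an InChI taken from <code>extra_approved_InChIs.csv</code> (the ethanol InChI below is just an illustrative example; matching the exact rendering style of the competition images takes more work):</p> <pre><code>from rdkit import Chem
from rdkit.Chem import Draw

inchi = "InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3"  # example molecule (ethanol)
mol = Chem.MolFromInchi(inchi)
if mol is not None:
    # render a 384x384 training image for the pretraining set
    Draw.MolToFile(mol, "synthetic_mol.png", size=(384, 384))
</code></pre>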
Bristol-Myers Squibb – Molecular Translation
5th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to you, too!<br> We use 20~25% in our training.</p> <p>We didn't have time to implement a correct beam search, so here our top 12 is only based on the original prediction confidence for all characters. We pick the 12 characters with the lowest confidence, replace each with the second-highest-confidence character, then do the rest of the decoding, one by one. As a result we got 12 predicted sequences for each invalid sample.</p> <p>It decreased our score by 10% or more when we were doing experiments with small backbones and small images.</p> <p>Hi all,</p> <p>This is absolutely a tough one, so congratulations to all those who persevered in this competition until the end.</p> <p>Thanks to the organizers, congrats to all the winners and my wonderful teammates <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a>, and we must thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> and <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> - we couldn't have achieved this without your sharing. Thank you so much!</p> <h1>Summary</h1> <ul> <li>We didn't use a Transformer Encoder; our architecture is just CNN (384x384 &amp; 512x512) + Transformer Decoder (12~16 layers)</li> <li>We trained 5 backbones: EffNet B3/B5/B7, ResNet200D, eca-nfnet-l0</li> <li>Noise Injection for regularization.</li> <li>Training with some ext data generated by rdkit. (But not very helpful)</li> <li>Ensemble during the decode phase.</li> <li>Handling invalid predictions (the most crucial trick for going under 0.6).</li> </ul> <h1>Noise Injection</h1> <p>This is a very old regularization technique; if I remember correctly someone mentioned it in a paper before 2000. In image captioning, the accuracy of the prediction for the next character is quite high as long as the previous sequence is exactly correct. But once a character is predicted incorrectly, the failure rate of the following predictions gradually increases until the decoding runs completely off track.</p> <p>So I came up with noise injection, which is done by randomly replacing GT characters with other characters during training (a minimal sketch appears at the end of this write-up). The replaced characters are ignored in the calculation of the loss, but this does not mean that this technique has no effect on the prediction. Although the character itself is not computed as part of the loss, the model is then forced to correctly predict the next character on the basis that the previous one was wrong.</p> <p>In addition to this, the technique itself has a regularization effect. This allowed us to train 12-16 layer transformer decoders.</p> <h1>Official Ext Data</h1> <p>The host released 10 million InChIs without images as ext data during the competition.</p> <p>We found that rdkit can use InChI to generate images, so we generated some of the images (about 1~2 million) as ext data and added them to the training. But later we found that this part of the data did not bring much improvement. 
However, since it did not degrade the performance either, we kept it.</p> <h1>Handling Invalid Predictions</h1> <p>After training all models we ensembled them in the decode phase.</p> <p>The LB of the original prediction is 0.62; then we use rdkit to normalize the predictions and get LB 0.60.</p> <p>At this point we found that about 14000 rows of the predicted InChIs are not valid. To replace these invalid predictions, we use 3 methods to find valid predictions:</p> <ul> <li>Prediction generated by a single model (fixed ~5k rows)</li> <li>Search of the top 12 predictions by the ensemble (fixed ~2.5k rows)</li> <li>For the numerical part of the chemical formula, we use the top-2 predicted values to decode (fixed ~500 rows)<ul> <li>For example, if the original prediction is C12H3, and the top-2 predictions for the second and fourth characters are 11 and 4, then we replace them with C11H3 and C12H4 and then decode the next part.</li></ul></li> </ul> <p>We discovered that replacing invalid predictions with valid predictions could significantly improve LB only the night before the competition deadline; due to lack of time, we only managed to deal with 8000 rows out of the 14000 invalid predictions in the original prediction, which improved our LB score from 0.60 to 0.57.</p> <h1>Acknowledgements</h1> <p>Special thanks to Z by HP &amp; NVIDIA for sponsoring me a Z8G4 Workstation with dual RTX6000 GPUs and a ZBook with an RTX5000 GPU.<br> This is without a doubt the competition with the biggest dataset on Kaggle in the last half year.<br> So I tried pytorch's DDP parallel training on my dual RTX6000 GPUs and the experience was great; basically it can be twice as fast as single-GPU training.</p>
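<p>Returning to the noise-injection idea above: a minimal sketch of one way to implement it in PyTorch, assuming teacher forcing with a <code>(batch, seq_len)</code> token tensor. The corruption rate <code>p</code> and the masking details are illustrative assumptions, not the team's exact implementation; the masked targets would then be used with a loss that honors <code>ignore_index</code> (e.g. <code>F.cross_entropy(..., ignore_index=-100)</code>).</p> <pre><code>import torch

def inject_noise(tokens, vocab_size, p=0.1, ignore_index=-100):
    # tokens: (batch, seq_len) ground-truth decoder tokens
    noise_mask = torch.rand(tokens.shape, device=tokens.device) &lt; p
    random_tokens = torch.randint(vocab_size, tokens.shape, device=tokens.device)
    noisy_inputs = torch.where(noise_mask, random_tokens, tokens)
    # the replaced characters are excluded from the loss, but the model
    # must still predict the *next* token from the corrupted context
    targets = tokens.clone()
    targets[noise_mask] = ignore_index
    return noisy_inputs, targets
</code></pre>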
Bristol-Myers Squibb – Molecular Translation
6th Place Solution - General Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>MODEL:</p> <p>Our basic models are similar to the general modeling that has been used in this competition: a combination of high-end vision models with either a recurrent net or a transformer for the chemical formula output. Early on we experimented with several combinations of the two, but for our final effort we used the following three main combinations:</p> <p>Efficientnetv1 (b5, b7) + transformer<br> Efficientnetv2 (m, l) + transformer<br> B7 + lstm</p> <p>When blending, we found that blending lstm with transformer-based models did particularly well, oftentimes giving us an overall boost of up to 0.2 on CV/LB.</p> <p>In the early stages of the competition, we blended probabilities, but subsequently this approach became unwieldy and we never managed to get it fully working for the entire solution.</p> <p>The kind of "blending" that <em>did</em> work was blending based on the correct form of the formula. It was noticed early on that the correctness of the prediction is proportional to the number of formulas that are encoded in a chemically valid form, so we blended by picking, for each image, the first valid prediction from our models ordered by quality (see the blend description below).</p> <p>SOLUTION:</p> <p>Training schedule:</p> <p>Initially we had used simple cosine annealing for our training schedule, but then we switched to a modified semi-manual plateau training schedule with restarts. This is something that Bojan had used before in a few previous competitions (Bengali and Alaska), and found very effective. The automated version of this training involves just the "ordinary" plateau schedule, with retraining once the training stops. For the retraining we would re-load the best weights and start the training process again with the biggest learning rate. This was a very time-consuming training regimen, as one epoch on DGX-1/DGX-Station would take anywhere between 2 and 12 hours, depending on the model architecture and the resolution.</p> <p>Local validation scheme:</p> <p>We had set up our training with a 5-fold CV in mind, but due to the extremely long training time, we never managed to train more than one fold. So most of our models were based on the same 80-20 split of the training data. During the last week we retrained our models on 96% of the data, with 4% left out for validation. This worked well initially, but due to some quirks with the sampling of the 4% that we had used, the validation score started becoming unreasonably low (0.23) at one point, and from then on it was not possible to use local validation any more.<br> We had noticed that our local oof validation and the LB score were very close (within less than 0.01 of each other), so we decided to mostly rely on our LB score for validation. This was not too risky in this competition, as it was a) an image competition, with b) an enormous dataset, and c) synthetic data. As there was very little shakeup in the end, our assumptions proved to be justified.</p> <p>1. Train a rotation detector, and fix the orientation in the test set. 
It makes CV and LB correlate very well.</p> <ol> <li><p>Data processing<br> 2.1 We cut the edges to get more effective pixels. The width of the edge varies between 26 and 41 pixels in both the train and test sets, so we decided to uniformly cut 25 pixels in training and inference:</p> <pre><code>edge = 25
image_raw = image_raw[edge:-edge, edge:-edge]
</code></pre> <p>2.2 We noticed that most of the validation errors were for formulas longer than 150 characters, so in training we decided to oversample those data points. We used different oversampling rates, for instance for len(InChI) &gt; 150, len(InChI) &gt; 100, len(InChI) &gt; 200, len(InChI) &gt; 250, etc.<br> 2.3 For most of our models we kept the same h/w ratio, but later on in the competition we also included models that were trained on square images.<br> 2.4 The test set is noisier than the train set, and it's salt-and-pepper noise:</p> <pre><code>def addSaltNoise(self, image, SNR=0.99):
    if SNR == 1:
        return image
    h, w = image.shape
    noiseSize = int(h * w * (1 - SNR))
    for k in range(0, noiseSize):
        xi = int(random.uniform(0, image.shape[1]))
        xj = int(random.uniform(0, image.shape[0]))
        image[xj, xi] = 0
    return image
</code></pre></li> <li><p>Use different resolutions (256x256, 256x384, 384x512, 512x512, 640x640).</p></li> </ol> <p>4. Check InChI validity status, and try to get as many valid InChIs as possible.</p> <p>The final blend was created from 20+ prediction results. We ordered the predictions by local CV. For one image, find the first valid result; if there is no valid prediction in these 20+ results, then use the result from the prediction with the best CV.</p> <ol> <li><p>Use a different number of transformer layers. We tried three, four, five and six layers. Our single best model used three layers, but it is conceivable that the V2 640x640 model with four layers could have done even better; we ran out of time to fully tune it.</p></li> <li><p>Weights checkpoint blending. For our best single model (B7, 3 transformer layers) we managed to blend about 6 different weight checkpoints (a minimal sketch of simple checkpoint averaging appears at the end of this write-up). That helped our score by about 0.04. We also tried blending three different checkpoints for the B7 lstm model, but that only had a marginal impact on model accuracy.</p></li> </ol> <p>WHAT DIDN'T WORK:</p> <p>Harder augmentation<br> VFlip, HFlip, Rotate<br> RandomLines augmentation<br> Pseudolabeling<br> Training more image rotation detectors<br> Preprocessing images to close some lines and remove salt noise:</p> <pre><code>img = cv2.imread(fn, 0)
kernel = np.ones((2, 2), np.uint8)
erosion = cv2.fastNlMeansDenoising(255 - img, h=25)
erosion = cv2.dilate(erosion, kernel, iterations=1)
erosion = 255 - cv2.erode(erosion, kernel, iterations=1)
</code></pre> <p>Multi-model probability blending had a minimal boost in score.<br> Finetuning a model specialized in small images (384 pixels on any side).<br> Finetuning a model specialized in short InChI strings (&lt;=150).</p> <p>OTHER THINGS WE WISH WE COULD HAVE DONE:</p> <p>Train all 5 folds<br> Train models with different "mild" augmentations and blend those<br> TTA</p> <p>HARDWARE:</p> <p>We had been fortunate to have access to very generous hardware resources provided to us by Nvidia. Most of the models were trained on one DGX Station and up to three DGX-1 servers. Despite what our team's name may suggest, we DID NOT have access to 1024 A100s. :) Halfway through the competition we got access to one DGX A100 station, which has 4 A100 GPUs, and close to the end of the competition we had access to another one of those machines. 
Full disclosure: Bojan is working with the Nvidia DGX A100 sales team as an influencer. These are awesome machines, and I'll publish another brief post about them soon.</p>
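<p>On the checkpoint blending mentioned above: the write-up doesn't spell out the exact mechanism, but a common minimal version is plain parameter-space averaging of saved state_dicts. A sketch under that assumption (file paths are hypothetical):</p> <pre><code>import torch

def average_checkpoints(paths):
    # simple parameter-space average of saved state_dicts ("checkpoint blending")
    avg = None
    for p in paths:
        sd = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in sd.items()}
        else:
            for k in avg:
                avg[k] += sd[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

# usage: model.load_state_dict(average_checkpoints(["ckpt1.pt", "ckpt2.pt", "ckpt3.pt"]))
</code></pre>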
Bristol-Myers Squibb – Molecular Translation
70th place solution and code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all!</p> <p>Thanks to the competition host and for the highly useful advice in the discussion! It's my first competition and I am glad to get a bronze :)</p> <p>My solution is very simple and based on SMILES targets. It was hard to run enough experiments since I had only a single 2080ti. To speed up the training a bit, I used torch amp.</p> <p><strong>Summary of the solution</strong></p> <p>The main pipeline is similar to <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>'s great <a href="https://www.kaggle.com/yasufuminakama/inchi-resnet-lstm-with-attention-starter" target="_blank">notebook</a>, but as encoder I use EfficientNet_B3. The main difference: instead of using the InChI target I use the <a href="https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system" target="_blank">SMILES</a> notation. At inference I convert SMILES back to InChI via <a href="https://github.com/kuelumbus/rdkit_platform_wheels" target="_blank">rdkit</a>, and as tokenizer I use <a href="https://github.com/XinhaoLi74/SmilesPE" target="_blank">XinhaoLi74/SmilesPE</a>. SMILES is much simpler to predict - shorter notation and fewer token classes - and this gave me a great improvement in accuracy (from <strong>6.31</strong> to <strong>2.22</strong>).</p> <p><strong>Data</strong></p> <p>I use synthetic images and extra_approved_InChIs.csv of course - about 1 million images (thanks for the great <a href="https://www.kaggle.com/tuckerarrants/inchi-allowed-external-data" target="_blank">notebook</a> by <a href="https://www.kaggle.com/tuckerarrants" target="_blank">@tuckerarrants</a> and the <a href="https://www.kaggle.com/stainsby/improved-synthetic-data-for-bms-competition-v3" target="_blank">notebook</a> by <a href="https://www.kaggle.com/stainsby" target="_blank">@stainsby</a>)</p> <p>I ran a lot of experiments - on hyperparameters, batch samplers, backbones, etc. - but not as many as I wanted, because training was soooo long. I ended up with a 288x288 image size, as a compromise between accuracy and training speed.</p> <p>I made an adaptive batch sampler that was quite helpful - it increases the probability of a sample being added to a batch according to that sample's loss during training (a minimal sketch appears at the end of this write-up). So the model was trained on "harder" examples, and samples with almost zero loss weren't put into batches often.</p> <p><strong>Some other notes</strong></p> <p>There was a small accuracy improvement (about 1.8 -&gt; 1.6) from freezing the encoder and training the LSTM decoder only. It significantly sped up training and allowed a larger batch size in the final stage of the training.</p> <p>Also, as optimizer I used AdamW with weight_decay 0.01 (as this <a href="https://www.fast.ai/2018/07/02/adam-weight-decay/" target="_blank">article</a> suggests), but I'm not sure it had any influence at all :) As image augmentations I used only Gaussian blur and various rotations (by randomly transposing and vertical/horizontal flipping).</p> <p><strong>Other experiments I wanted to try</strong></p> <ul> <li>To split InChI into up to 8 independent string layers (separated by the "/" notation: "/b", "/t", "/m", "/s", etc.) 
and then train individual models for each layer. Each layer in the InChI describes different information about a molecule, and several models, trained on separate InChI layers, could presumably get good results.</li> <li>To add a rotation transform at arbitrary angles (it might help the model to catch some useful features in the images)</li> <li>To add beam search and some model ensembling.</li> <li>Trying Transformers instead of LSTM</li> </ul> <p>The source code (pytorch) is available in the github <a href="https://github.com/skalinin/BMS-competition" target="_blank">repository</a>.</p> <p>Congrats to all who took part in the competition :)</p>
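<p>A minimal sketch of the adaptive batch sampler idea described above, assuming per-sample losses are fed back after each step; the class and method names, and the smoothing term, are illustrative assumptions rather than the author's exact code:</p> <pre><code>import numpy as np
from torch.utils.data import Sampler

class LossAdaptiveSampler(Sampler):
    # sample indices with probability proportional to each sample's running loss
    def __init__(self, dataset_size, smoothing=0.1):
        self.losses = np.ones(dataset_size, dtype=np.float64)
        self.smoothing = smoothing  # keeps easy samples from vanishing entirely
        self.size = dataset_size

    def update(self, indices, losses):
        # refresh the stored losses after each training step
        self.losses[indices] = losses

    def __iter__(self):
        p = self.losses + self.smoothing
        p /= p.sum()
        yield from np.random.choice(self.size, size=self.size, p=p)

    def __len__(self):
        return self.size
</code></pre>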
Bristol-Myers Squibb – Molecular Translation
7th Place Solution: My First Gold And I am in Tears
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,</p> <p>We would like to thank Kaggle and the organizers for such a lovely competition. My learning in this has been immense; while a lot of people might criticize this competition as a hardware struggle, for me it was more of a learning experience than a GPU pain, mainly due to my lovely teammates <a href="https://www.kaggle.com/pheadrus" target="_blank">@pheadrus</a> <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a> <a href="https://www.kaggle.com/atsunorifujita" target="_blank">@atsunorifujita</a> <a href="https://www.kaggle.com/nishcaydnk" target="_blank">@nishcaydnk</a> who allowed and encouraged me to explore more and more.</p> <p>I always thought that I understood transformers completely, but all that was shaken in this competition. We would like to thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for being so humble in sharing the transformer-to-transformer baseline with us; without that this medal would not have been possible. The code was easy to understand and a lot of our concepts got cleared up during this competition. There are a lot of things we were still not able to implement, like beam search for our transformer model, but at the end of the day we are very happy with the result.</p> <p>We (<a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a>, me and <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a>) started with the CNN encoder with LSTM decoder like everyone else, quickly made our way to GRU, then we were joined by Rajneesh who brought in the idea of transformer to transformer. We fought hard to get to 1.30, then we were joined by <a href="https://www.kaggle.com/atsunorifujita" target="_blank">@atsunorifujita</a> and since then it has been a rollercoaster ride.</p> <p>The last day was completely intense for us. We were at 0.65 and we knew we needed something other than rdkit post-processing to get into gold; the only thing left was to do a logits ensemble at every time step. We had been unsuccessful at that before, but nevertheless we tried it and it worked: our ensemble gave (0.66 lb, 0.71 cv) submitted at <b>11:30 UTC</b>, and then after rdkit post-processing we went to (0.60 lb, 0.66 cv) submitted at <b>11:45 UTC</b> and jumped to 7th place, where we ended as well.</p> <p>Our final solution consists of four major parts:</p> <ul> <li>Training transformer to transformer effectively</li> <li>Logit Level Ensemble at each time step</li> <li>Rdkit Post Processing</li> <li>Most important: analysis of all submissions and validation InChIs</li> </ul> <h2>Models Used:</h2> <p>Our final ensemble consisted of</p> <ul> <li>Encoder - 2 ViT 384 Base, Decoder - Vanilla transformer (Cv 0.85, Lb 0.76) and (Cv 0.86, lb 0.77) respectively</li> <li>Encoder - 1 Swin Transformer 384, Decoder - Vanilla transformer (Cv 0.99, LB 0.96)</li> </ul> <h2>Model Training:</h2> <p>We started ViT training similar to heng's, using the lookahead optimizer and manually decreasing LRs. 
We made use of increasing-resolution training: we trained first with 224 img_size, then fine-tuned the same model to 384, then finally to 448. This gave us a good boost and we were able to reach 1.27 lb, but it still wasn't enough. After analyzing the valid predictions we realized that the models kept failing on very long chained molecules and molecules with noise; our hypothesis was that it was overfitting to the normal ones. Owing to the noisy images and our findings, we decided to add Label Smoothing and Cutout to training, and it worked: we were able to achieve (lb 0.77) with the same ViT model. After that we tried 448 image size with the same strategy, however it didn't work so well and hence was excluded from the final ensemble. We trained another ViT model with a different LR and increased Random Scale as augmentation and were able to score (lb 0.76)</p> <p>We then started the search for diversifying our model zoo and moved on to train SWIN and transformer-in-transformer with the same training strategy; however Swin beat tnt and scored (0.96 lb) compared to tnt (1.07 lb)</p> <h2>RDkit Post-Processing</h2> <p>Everybody was normalizing the predictions and we were no different, but only after reading <a href="https://www.kaggle.com/nofreewill" target="_blank">@nofreewill</a>'s post did we dig deeper and realize what we were missing.</p> <pre><code>def normalize_inchi(inchi):
    try:
        mol = Chem.MolFromInchi(inchi)
    except:
        mol = None  # keep mol defined if parsing raises
    if mol is None:
        return inchi, 'invalid'
    else:
        try:
            norm = Chem.MolToInchi(mol)
            if norm == inchi:
                return norm, 'valid'
            else:
                return norm, 'modified'
        except:
            return inchi, 'segmentation_failure'
</code></pre> <p>After adding this code to the normalization script we were able to get the status as well. We tested the hypothesis that the InChIs flagged invalid by rdkit were really wrong, so replacing them with valid predictions from other models' outputs was a sure way to improve the score. We didn't stop here; digging even deeper, we realized that the invalid InChIs were mostly coming from very long InChIs. Hence we fine-tuned our best tnt model on images with InChI length greater than 150, and it gave a great cv (0.99, previously 1.09); however, due to lack of time we didn't test the lb and decided to use it in post-processing to replace invalid InChIs only.</p> <h2>Models used in Post-Process to replace Invalid InChIs</h2> <ul> <li>Tnt 336 Model</li> <li>Vit 448 Model</li> <li>Tnt 448 Model</li> <li>prunedeffnetb3 + GRU Model</li> <li>Effnetb4 + LSTM Model</li> <li>Effnetb7 + GRU Model</li> </ul> <h2>Logits Ensemble</h2> <p>I started coding the logit ensemble on the last day, and by the time it was ready only 9 hours were left in the competition. It gave a cv of 0.71, so we gave it our best to complete it two hours before the deadline. 
Things fell into place and we were able to complete it in time.</p> <p>We realized that in heng's code the final logit gave an output of <b>(bs, vocab_size)</b>, so as long as the max_length and the vocab size of the models are the same, we can ensemble the predictions at every time step. Since we didn't have a lot of time, we hard-coded it, something like this:</p> <pre><code>class EnsembleNet(nn.Module):

    def __init__(self, model_1, model_2, model_3):
        super(EnsembleNet, self).__init__()
        self.model_1 = model_1
        self.model_2 = model_2
        self.model_3 = model_3
        self.model_1.load_state_dict(torch.load(initial_checkpoint1)['state_dict'], strict=True)
        self.model_2.load_state_dict(torch.load(initial_checkpoint2)['state_dict'], strict=True)
        self.model_3.load_state_dict(torch.load(initial_checkpoint3)['state_dict'], strict=True)
        # self.model_1 = torch.jit.script(self.model_1)
        # self.model_2 = torch.jit.script(self.model_2)

    def forward(self, image):
        image_dim = 768
        text_dim = 768
        decoder_dim = 768
        num_layer = 4  # 3
        num_head = 8
        ff_dim = 2048  # 1024
        STOI = {
            '&lt;sos&gt;': 190,
            '&lt;eos&gt;': 191,
            '&lt;pad&gt;': 192,
            # '&lt;mask&gt;': 193,
        }
        image_size = 384
        vocab_size = 193  # 194
        max_length = 280  # 300 # 275
        device = image.device
        batch_size = len(image)

        image_embed_1 = self.model_1.cnn(image)  # (bs, img_len, image_dim)
        image_embed_1 = self.model_1.image_encode(image_embed_1).permute(1, 0, 2).contiguous()  # (img_len, bs, image_dim)
        image_embed_2 = self.model_2.cnn(image)  # (bs, img_len, image_dim)
        image_embed_2 = self.model_2.image_encode(image_embed_2).permute(1, 0, 2).contiguous()  # (img_len, bs, image_dim)
        image_embed_3 = self.model_3.cnn(image)  # (bs, img_len, image_dim)
        image_embed_3 = self.model_3.image_encode(image_embed_3).permute(1, 0, 2).contiguous()  # (img_len, bs, image_dim)

        token = torch.full((batch_size, max_length), STOI['&lt;pad&gt;'], dtype=torch.long, device=device)  # (batch_size, max_len)
        text_pos_1 = self.model_1.text_pos.pos  # (1, sequence_len, text_dim) torch.zeros(1, max_length, dim)
        text_pos_2 = self.model_2.text_pos.pos  # (1, sequence_len, text_dim) torch.zeros(1, max_length, dim)
        text_pos_3 = self.model_3.text_pos.pos  # (1, sequence_len, text_dim) torch.zeros(1, max_length, dim)
        token[:, 0] = STOI['&lt;sos&gt;']
        # -------------------------------------
        eos = STOI['&lt;eos&gt;']
        pad = STOI['&lt;pad&gt;']

        # fast version
        if 1:
            # incremental_state = {}
            incremental_state1 = torch.jit.annotate(
                Dict[str, Dict[str, Optional[torch.Tensor]]],
                torch.jit.annotate(Dict[str, Dict[str, Optional[torch.Tensor]]], {}),
            )
            incremental_state2 = torch.jit.annotate(
                Dict[str, Dict[str, Optional[torch.Tensor]]],
                torch.jit.annotate(Dict[str, Dict[str, Optional[torch.Tensor]]], {}),
            )
            incremental_state3 = torch.jit.annotate(
                Dict[str, Dict[str, Optional[torch.Tensor]]],
                torch.jit.annotate(Dict[str, Dict[str, Optional[torch.Tensor]]], {}),
            )
            for t in range(max_length - 1):
                last_token_1 = token[:, t]  # take the whole batch's t-th token
                text_embed_1 = self.model_1.token_embed(last_token_1)  # [bs, text_dim] embedding for the t-th token
                text_embed_1 = text_embed_1 + text_pos_1[:, t]         # [bs, text_dim] combine with pos embed for t-th token
                text_embed_1 = text_embed_1.reshape(1, batch_size, 768)

                last_token_2 = token[:, t]
                text_embed_2 = self.model_2.token_embed(last_token_2)
                text_embed_2 = text_embed_2 + text_pos_2[:, t]
                text_embed_2 = text_embed_2.reshape(1, batch_size, 768)

                last_token_3 = token[:, t]
                text_embed_3 = self.model_3.token_embed(last_token_3)
                text_embed_3 = text_embed_3 + text_pos_3[:, t]
                text_embed_3 = text_embed_3.reshape(1, batch_size, 1024)

                # text_embed  ---&gt; (1, bs, text_dim)
                # image_embed ---&gt; (img_pos_embed, bs, image_dim)
                x_1 = self.model_1.text_decode.forward_one(text_embed_1, image_embed_1, incremental_state1)
                x_2 = self.model_2.text_decode.forward_one(text_embed_2, image_embed_2, incremental_state2)
                x_3 = self.model_3.text_decode.forward_one(text_embed_3, image_embed_3, incremental_state3)
                ## x ---&gt; (1, bs, text_dim)
                x_1 = x_1.reshape(batch_size, 768)
                x_2 = x_2.reshape(batch_size, 768)
                x_3 = x_3.reshape(batch_size, 1024)
                ## x ---&gt; (bs, decoder_dim)
                l_1 = self.model_1.logit(x_1)
                l_2 = self.model_2.logit(x_2)
                l_3 = self.model_3.logit(x_3)
                l = (l_1 + l_2 + l_3) / 3
                ## l ---&gt; (bs, num_classes)
                k = torch.argmax(l, -1)
                token[:, t + 1] = k
                if ((k == eos) | (k == pad)).all():
                    break
        predict = token[:, 1:]
        return predict
</code></pre> <h2>Final Hard-Voting</h2> <p>One last layer that we also added was hard voting using all our submissions (11 subs). After doing the RDkit post-processing, we found that approx 11k invalid predictions still remained, and since we were not able to implement beam search for those invalid images, we applied hard voting using all our models' submissions (<a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a>'s idea; a minimal sketch appears at the end of this write-up)</p> <h1>Gratitude</h1> <p>This competition will remain closest to my heart. We have stayed up all night - it's 6 AM in India right now - and I will never forget how we passed the time from 1 to 4 while the ensemble inference was running. Congrats to <a href="https://www.kaggle.com/pheadrus" target="_blank">@pheadrus</a> <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a> for becoming masters and <a href="https://www.kaggle.com/atsunorifujita" target="_blank">@atsunorifujita</a> <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> for their second gold. It was a collective team effort and I couldn't thank you all enough for taking me into your team.</p> <p>It's late; I will clean up the code and post it here very soon. To be continued …<br> I hope you liked our solution. Thanks for reading</p> <p>Thanks a lot <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, it feels great to be appreciated by you. 
I hope to continue this further</p> <p>Thanks a lot <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a></p> <p>Thanks a lot <a href="https://www.kaggle.com/jonathanbesomi" target="_blank">@jonathanbesomi</a>; label smoothing was Fujita's idea and we were also surprised by how well it worked</p> <p>Yes, that really surprised us too, but even more crucial was the logits ensemble, which increased our best score without PP from 0.77 ----&gt; 0.66</p> <p>The thing that helped us the most was constantly monitoring and analyzing the validation InChIs and trying to figure out what and where the models were doing wrong</p> <p>It always feels good when your idol appreciates you 😉 <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>, thanks a lot<br> Hungry for gold has a long way to go</p> <p>Thank you <a href="https://www.kaggle.com/atharva" target="_blank">@atharva</a></p> <p>Thanks bro, it sure took long but finally it happened 😊</p> <p>Thank you <a href="https://www.kaggle.com/kamal" target="_blank">@kamal</a>, we finally got our gold</p> <p>Thanks a lot <a href="https://www.kaggle.com/Gao" target="_blank">@Gao</a></p> <p>Thank you</p> <p>Thanks a lot <a href="https://www.kaggle.com/heroseo" target="_blank">@heroseo</a>, it's time to move on to SETI I guess 😜</p> <p>Thanks <a href="https://www.kaggle.com/rafiko1" target="_blank">@rafiko1</a></p> <p>Yes <a href="https://www.kaggle.com/prashantkramadhari" target="_blank">@prashantkramadhari</a>, transformer to transformer means a transformer-type encoder (like vision transformer, Swin or tnt) and a transformer decoder (like a vanilla transformer, BERT, etc.)</p>
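<p>Returning to the hard-voting idea above: a minimal sketch, assuming submission CSVs with <code>image_id</code> and <code>InChI</code> columns (the filenames are hypothetical). In the team's pipeline this would only be applied to the rows that remained invalid after post-processing.</p> <pre><code>from collections import Counter
import pandas as pd

# one CSV per past submission (hypothetical filenames)
sub_files = ["sub1.csv", "sub2.csv", "sub3.csv"]
subs = [pd.read_csv(f).set_index("image_id")["InChI"] for f in sub_files]

def hard_vote(image_id):
    # pick the prediction that appears most often across submissions
    votes = Counter(s.loc[image_id] for s in subs)
    return votes.most_common(1)[0][0]
</code></pre>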
Bristol-Myers Squibb – Molecular Translation
8th Place Solution 🔺
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>They are large molecules and have long InChIs.</p> <p>I think rotation and flip augmentation was useful to reduce the LB/CV gap (also used for vertically long images in test inference). I also used multiple-model voting, but it did not help in my case, maybe because for most records the predictions of the models are all valid (same) or all invalid.</p> <p>I think rdkit validation of the top-K predictions from beam search is also useful, but there was no time left to do that.</p> <p>When my score was higher than 1.0, rdkit validation improved the LB score by about 0.07, but in the later stages the improvement was 0.02.</p> <p>Congratulations to all the winners and thanks to the competition hosts for holding such a wonderful competition.<br> And especially thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for sharing the great ideas and code. I think his posts made this competition very active.</p> <p>This competition was very tough for a solo kaggler like me and I was very inspired by the other solo kagglers (<a href="https://www.kaggle.com/fergusoci" target="_blank">@fergusoci</a>, <a href="https://www.kaggle.com/AkiraSosa" target="_blank">@AkiraSosa</a>, <a href="https://www.kaggle.com/nofreewill" target="_blank">@nofreewill</a>) on the LB. Finally, I got solo gold and was able to become a Grandmaster at the expense of health and many holidays!</p> <h1>Models</h1> <p>I chose two models to train, SwinTransformer with image size 384x384 and patchwise TNT, and I implemented them by referring to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s codes.<br> I used hyperparameters embed_dim=768, in_dim=48 for TNT, which is larger than TNT-B.</p> <p>SwinTransformer and TNT both achieved LB~0.8 as single models and their CVs were LD=0.80/0.80, CELoss=0.0030/0.0020</p> <h1>Data</h1> <p>I generated about 10M images using all the InChIs in <code>extra_approved_InChIs.csv</code> using <a href="https://www.kaggle.com/stainsby" target="_blank">@stainsby</a>'s <a href="https://www.kaggle.com/stainsby/improved-synthetic-data-for-bms-competition-v3" target="_blank">notebook</a>. About 13M images were used in training. In the early stage of training, the loss for extra images was bad, but after long training, the loss for extra images and the valid score improved. I trained SwinTransformer for 15 epochs and TNT for 20 epochs. SwinTransformer took 20 hours per epoch and TNT took 12 hours per epoch with 8 x V100.</p> <h1>Ensemble</h1> <p>I ensembled 8 models in each decoder step (weighted average of predict_proba). The models were trained on competition data + extra data, finetuned afterwards on competition data only, or trained on a different fold. It seems important to set the total weights of SwinTransformer and TNT equally. The CV score of the ensemble model was LD=0.60 and CELoss=0.0013, and LB: 0.63</p> <h1>Beam Search</h1> <p>On the last day of the competition, I implemented beam search by referring to the fairseq <a href="https://github.com/pytorch/fairseq/blob/master/fairseq/sequence_generator.py" target="_blank">code</a>. 
<br> By performing beam search with beam size 16, I could get LB 0.60.<br> In the later stages of the competition, I found that the different models disagreed on only about 20k test predictions. Therefore, beam search or ensemble inference was performed only on those 20k images (a minimal sketch of finding this disagreement subset appears at the end of this write-up).</p> <h1>PostProcess</h1> <p>I always used the rdkit validation postprocess shared by <a href="https://www.kaggle.com/nofreewill" target="_blank">@nofreewill</a>.<br> The post-processing improvement diminished as the score improved, but a little improvement remained even at the end.</p> <h1>What didn't work</h1> <ul> <li>I trained a generative model using the InChIs from the competition training data and generated 8M images (molecules), but it did not help improve CV.</li> <li>I trained an object detection model to detect atoms and bonds like the DACON solution and overlaid bboxes on the images, but it did not help improve CV.</li> <li>TTA (rotation, flip)</li> </ul> <p>Thank you for your comment!<br> My code is too dirty to share, but the beam search implementation is mostly a copy of the fairseq repo, and I think what you should implement yourself is the reordering of the incremental states. It is very easy to implement reordering like this.</p> <pre><code>def reorder_incremental_state(self, incremental_state, new_order):
    for layer in self.text_decode.layer:
        layer.self_attn.reorder_incremental_state(incremental_state, new_order)
        layer.encoder_attn.reorder_incremental_state(incremental_state, new_order)
</code></pre>
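<p>A minimal sketch of finding the disagreement subset mentioned above, assuming per-model prediction CSVs with <code>image_id</code> and <code>InChI</code> columns (the filenames are hypothetical):</p> <pre><code>import pandas as pd

sub_a = pd.read_csv("swin_preds.csv")
sub_b = pd.read_csv("tnt_preds.csv")
merged = sub_a.merge(sub_b, on="image_id", suffixes=("_a", "_b"))
disagree_ids = merged.loc[merged["InChI_a"] != merged["InChI_b"], "image_id"]
# run the expensive ensemble / beam-search inference only on `disagree_ids`
</code></pre>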
Bristol-Myers Squibb – Molecular Translation
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bristol-Myers Squibb – Molecular Translation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the organizers for organizing such an exciting competition. Second, I would like to thank my teammates. We had a great working environment with everybody working hard, and in the last week we had to stay up until 3 am local time. Below I will try to summarize the main ideas of our best-performing model.</p> <h2>Image Preprocessing</h2> <p>One of the challenging things about this competition was that the chemicals had very different structures: some were super long, others were very short. This also meant that the images were highly diverse in size and shape (e.g., super long along one axis, or super small). Therefore it was essential to preprocess these images correctly. After some initial discussion we decided to try two methods:<br><br> <code>1) simple rescaling</code><br> plus - we keep all the information (kind of)<br> minus - features of the images become distorted (extreme shrinking along one axis)<br><br> <code>2) rescaling while preserving the aspect ratio</code><br> plus - this method preserves the features<br> minus - bigger images, when downscaled, will have extremely small features<br><br> Both of these methods have advantages and disadvantages, but we reasoned that it would be best for each team member to pick one of these methods and stick to it. We also found an excellent article showing that most modern computer vision libraries employ very buggy resizing methods (<a href="https://arxiv.org/pdf/2104.11222.pdf" target="_blank">https://arxiv.org/pdf/2104.11222.pdf</a>). Since we are dealing with a lot of lines, it was essential to preserve most of the information when downscaling (tl;dr: use PIL).</p> <h2>Image Augmentation.</h2> <p>We decided to go with natural image augmentations that were found in the dataset. But to make things a little more diverse, we decided that everyone should train on a slightly different set of augmentations.<br><br> Set 1:<br> <code>RandomRotate90</code><br><br> Set 2</p> <pre><code>RandomBorderCropPixel (cropping 5-20 pixels at random from the border of the image)
RandomRotate90
RandomNoiseAugment
</code></pre> <p>Set 3</p> <pre><code>RandomRotate90
RandomBorderCropPixel (cropping 5-20 pixels at random from the border of the image)
RandomNoiseAugment
RandomLineRemoval
RandomPixelThreshold (some of the chemical lines had different shades of black pixel values, so we perform)
</code></pre> <pre><code>thr = random.randint(200, 250)
img[img &lt; thr] = 0
img[img &gt;= thr] = 255
</code></pre> <h2>Tokenizers</h2> <p><a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>'s tokenizer was great, but we were wondering if there is a better way to tokenize. We found a lot of research on different tokenization strategies using <code>SMILES</code>, but almost no information about <code>InChI</code>. After some thinking, we decided to try <code>huggingface</code> <code>BPE</code> tokenization. 
</p> <pre><code>vocab_size = 128  # can be any size
tokenizer = Tokenizer(BPE(unk_token="[&lt;unk&gt;]", fuse_unk=False))
pre_tokenizer = pre_tokenizers.Sequence([Whitespace(), Digits()])
tokenizer.pre_tokenizer = pre_tokenizer
trainer = BpeTrainer(vocab_size=vocab_size, special_tokens=["[&lt;pad&gt;]", "[&lt;start&gt;]", "[&lt;end&gt;]", "[&lt;unk&gt;]"])
</code></pre> <p>We ran a simple small-scale experiment using <code>resnet34</code> and several <code>BPE</code> <code>tokenizers</code> with different <code>vocab</code> sizes. Below are the results:</p> <pre><code>42 character tokenization - 9.7
64 BPE tokenization  - 8.9
128 BPE tokenization - 8.01
246 BPE tokenization - 6.9
</code></pre> <p>It clearly shows that a larger vocab size leads to better-learned embeddings. As per usual, in the end we decided to use three different tokenizers to bring more diversity:<br> 1) Y.Nakama's tokenizer<br> 2) <code>BPE - 512</code><br> 3) <code>BPE - 1024</code></p> <h2>Models</h2> <p>We trained several different models with everything mentioned above, with progressive pseudo labeling on test data (taking only valid InChIs). In total, we had the following models:<br><br> 1) <code>effnetv2 -&gt; encoder(6) -&gt; decoder(8, 10, 12)</code><br> 2) <code>effnet3 -&gt; encoder(3) -&gt; decoder(3)</code><br> 3) <code>vit/cait/swin</code><br><br> For vision transformers, we used heng's decoders.<br><br> For hybrid CNN transformers, to increase the diversity of our models, we used both <code>Encoder/Decoder</code> with attention layers with the following modifications:<br><br> 1) gating with <code>GLU</code> (<a href="https://arxiv.org/abs/2002.05202" target="_blank">https://arxiv.org/abs/2002.05202</a>). In the feed-forward layer, use <code>GLU</code> in place of the first linear transformation and the activation function.<br><br> 2) like the 4th place solution, we use relative positional embedding as in <code>T5</code><br></p> <h2>Ensembling:</h2> <p>We trained our models and everything looked great, but we needed to come up with an ensembling strategy, because everyone was using different <code>tokenizers</code>, so combining probabilities was not an option. Luckily <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> wrote a custom ensemble script that operates on the string level. It deserves its own post, which he will share later. Using <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a>'s script, we were able to get from weakly scoring models <code>(0.74 - 0.90)</code> to <code>~0.67</code></p> <h2>Classification for m0/m1</h2> <p>We noticed that most of our valid predictions differed only by <code>m0/m1</code>. To solve this problem, we built a simple <code>resnet50</code> model classifying images into <code>3</code> classes (<code>no m0/m1</code>, <code>m0</code>, <code>m1</code>). Using this classifier to post-process our <code>InChI</code> (a minimal sketch appears at the end of this write-up), we gained a boost from <code>0.67</code> to <code>0.65</code>.</p> <h2>Finetuning a model only for long InChIs.</h2> <p>We observed that our models tend to make more wrong predictions for long <code>InChIs</code>. As a result, in the last <code>4</code> days we decided to finetune our model only on long <code>InChIs</code> (longer than <code>180</code>). After finetuning, we took the longest 100k <code>InChIs</code> in our test set and made new predictions. We also applied <code>TTA</code> to the images and got those predictions as well. Once we had those predictions, we used a modified version of the zfturbo script to make an ensemble only on the longest <code>InChIs</code>. 
This eventually brought us to our final <code>LB</code> standing: <code>0.65 -&gt; 0.61</code>.</p> <h2>Conclusion</h2> <p>I would like to thank my teammates for being great teammates and working really hard: <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a>, <a href="https://www.kaggle.com/youhanlee" target="_blank">@youhanlee</a> and new GM <a href="https://www.kaggle.com/tugstugi" target="_blank">@tugstugi</a> =)<br> P.S. also a big congratulations to all new GMs =)</p> <p>Thank you, I missed this article!</p>
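<p>A minimal sketch of the m0/m1 post-processing idea above, assuming a classifier output in <code>{"none", "m0", "m1"}</code>; the regex handling of InChI layer order here is simplified and illustrative (the real <code>/m</code> layer sits between the <code>/t</code> and <code>/s</code> layers), not the team's actual script:</p> <pre><code>import re

def apply_m_classifier(inchi, cls):
    # cls: prediction of the hypothetical 3-way classifier: "none", "m0" or "m1"
    stripped = re.sub(r"/m[01]", "", inchi)
    if cls == "none":
        return stripped
    # re-insert the /m layer before the /s layer if present, else append it
    if "/s" in stripped:
        return stripped.replace("/s", "/" + cls + "/s", 1)
    return stripped + "/" + cls
</code></pre>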
CAFA 5 Protein Function Prediction
1st Place Solution for the CAFA5
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>A. MODEL SUMMARY</h1> <h2>A.1 Background on you/your team</h2> <p>Competition Name: CAFA5 Protein Function Prediction</p> <p>Team Name: GOCurator</p> <p>Private Leaderboard Score: 0.61623</p> <p>Private Leaderboard Place: 1</p> <p>Name: Shaojun Wang</p> <p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p> <p>Email: <a href="mailto:20110850027@fudan.edu.cn">20110850027@fudan.edu.cn</a></p> <p>Name: Weiqi Zhai</p> <p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p> <p>Email: <a href="mailto:20110850010@fudan.edu.cn">20110850010@fudan.edu.cn</a></p> <p>Name: Wei Liu</p> <p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p> <p>Email: <a href="mailto:21110850007@m.fudan.edu.cn">21110850007@m.fudan.edu.cn</a></p> <p>Name: Tianyang Huang</p> <p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p> <p>Email: <a href="mailto:tyhuang22@m.fudan.edu.cn">tyhuang22@m.fudan.edu.cn</a></p> <p>Name: Huiying Yan</p> <p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p> <p>Email: <a href="mailto:yanhuiyingwangyi@163.com">yanhuiyingwangyi@163.com</a></p> <h2>A2. Background on you/your team</h2> <p>Our team consists of five PhD students working on bioinformatics, especially automated protein function prediction and biomedical text mining, from ZhuLab, which is led by Prof. Shanfeng Zhu at Fudan University. Our lab participated in CAFA3 and CAFA4 and achieved top performance. Our team developed GOCurator by considering all kinds of information including protein sequence, network, structure, and textual information. During the competition, S.W. was responsible for collecting data, building the framework, and training the model. W.L. developed a component method based on FoldSeek. T.H. and H.Y. were mainly engaged in biomedical text mining for protein annotation. W.Z. submitted the prediction results and performed the result analysis.</p> <h2>A3. Summary</h2> <p>Our pipeline, GOCurator, was developed based on NetGO 3.0. In addition, we also introduced new component methods to extract functional information from protein 3D structure, protein textual description, and scientific literature. Thanks to the learning-to-rank model, we eventually integrated multiple component methods effectively. Figure 1 shows the scores the component methods achieved on the public leaderboard.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16029558%2F27233f10b1cbcfd1e5aec8ed8a5b85c5%2Fsolution1.png?generation=1704887698109856&amp;alt=media" alt="The performance of component methods in public leaderboard"></p> <ul> <li>LR-MEM, which combined protein description, scientific literature, and protein sequence based on the annotations from GO and GOA. Specifically, it concatenated representations from the different information sources as protein representations. 
Then we trained logistic regression classifiers on top of them. We integrated this method into NetGO 3.0 and ranked in first place for the first time on the public leaderboard.</li> <li>FoldSeek-KNN, which made predictions according to structural information. By comparing the structural similarities between the test proteins and the proteins in the training set, we calculated the prediction score as a weighted vote over similar annotated proteins.</li> <li>GOXML, an extreme multi-label classifier based on literature information. Similar to DeepText2GO, it used AttentionXML to achieve fast, high-precision, large-scale multi-label classification without training logistic regression classifiers for each GO term.</li> <li>GORetrieval, a novel two-stage deep-learning-based framework based on the protein description and protein literature. Given a target protein, it first retrieved candidate Gene Ontology (GO) terms from annotated proteins with similar descriptions. In the second stage, the retrieved GO terms were reranked based on semantic matching between the GO descriptions and the literature information of the target protein. Moreover, the second stage can also be used to enhance the predictions from other methods via reranking.</li> </ul> <p>When doing ensemble learning, the input to the model contains not only the predicted scores from each method, but also a 20-dim one-hot vector to characterize the protein species information.</p> <h2>A4. Features Selections / Engineering</h2> <p>According to the public leaderboard scores, we believed that protein language models and textual information are useful for improving accuracy. For protein language models, we also tried ESM2 and ProtT5 to obtain protein embeddings. Similar to LR-ESM (based on ESM-1b), we trained logistic regression classifiers on the ESM2 and ProtT5 embeddings and on the concatenation of all three embeddings. We found little difference in the scores achieved by these methods, but the best performance was achieved when LR-ESM was integrated as a component method into GOCurator. Therefore, we finally used ESM-1b to generate protein embeddings. For textual information, how to generate text representations is important. Following DeepText2GO, we used document-to-vector (D2V) and TF-IDF to represent scientific literature. In GORetrieval and GOXML, PubMedBERT was used to generate text embeddings.</p> <h2>A5. Training Method(s)</h2> <p>We collected functional annotation information from three databases - SwissProt, GOA, and GO - according to the CAFA5 evidence codes. Unlike the temporal split in NetGO, we selected proteins for validation based on the species distribution of proteins in the CAFA5 test set. Specifically, we calculated the proportion of proteins from each species in the CAFA5 test set, and sampled proteins for the corresponding species from the dataset to form a validation dataset of 1000 proteins. We improved the performance of our methods as much as possible in this way.</p> <h2>A6. Interesting findings</h2> <p>We found that Net-KNN, which is based on protein-protein interactions, performs poorly in predicting proteins from species with less functional annotation information. When we integrated all the component methods together, we even got better results by not fusing Net-KNN into it. However, according to our previous studies (the NetGO series), Net-KNN is a powerful method for BPO and CCO. Therefore, we tried to filter the predictions from Net-KNN by species. 
Specifically, we considered the predictions for a target protein to be valid only if the number of annotated proteins in the target protein's species is sufficiently large. As shown in the figure, the number of valid species has an impact on the prediction performance. Finally, we treated Net-KNN predictions for the top 15 species as effective results.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16029558%2Fc669b39b4fb0047b017c6b75a9440223%2Fsolution2.png?generation=1704888407902780&amp;alt=media" alt=""></p> <h2>A7. Simple Features and Methods</h2> <p>According to the performance of the different information sources, we believe that incorporating a component method based on textual or structural information into NetGO 3.0 can achieve 90% of the final performance.</p> <h2>A8. Model Execution Time</h2> <p>Firstly, we used InterProScan and ESM-1b to get protein domain/family/motif information and protein embeddings, which took about 1-2 days, where most of the time was spent on InterProScan. Secondly, we trained each component method separately on several servers, spending roughly 10-15 hours per method. Then, the learning-to-rank model used for ensemble learning could be trained within 50 mins. For inference, we first used the component methods to generate initial predictions, which took about 2 hours for the CAFA5 test proteins. Finally, it took about 1.5 hours for ensemble learning on all test proteins.</p> <h2>A9. References</h2> <ol> <li>You R, Zhang Z, Xiong Y, et al. GOLabeler: improving sequence-based large-scale protein function prediction by learning to rank. Bioinformatics, Volume 34, Issue 14, 2018, 2465-2473.</li> <li>You R, Huang X, Zhu S. DeepText2GO: Improving large-scale protein function prediction with deep semantic text representation. Methods, 2018, 145: 82-90.</li> <li>You R, Yao S, Xiong Y, et al. NetGO: improving large-scale protein function prediction with massive network information. Nucleic Acids Research, Volume 47, Issue W1, 2019, W379-W387.</li> <li>Yao S, You R, Wang S, et al. NetGO 2.0: improving large-scale protein function prediction with massive sequence, text, domain, family and network information. Nucleic Acids Research, Volume 49, Issue W1, 2021, W469-W475.</li> <li>Gu Y, Tinn R, Cheng H, et al. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1-23, 2021.</li> <li>Rives A, et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. PNAS, 118(15), 2021.</li> <li>You R, Yao S, Mamitsuka H, et al. DeepGraphGO: graph neural network for large-scale, multispecies protein function prediction. Bioinformatics (ISMB 2021), Volume 37, Supplement_1, 2021, i262-i271.</li> <li>Wang S, You R, Liu Y, et al. NetGO 3.0: Protein Language Model Improves Large-scale Functional Annotations. Genomics, Proteomics &amp; Bioinformatics, Volume 21, Issue 2, 2023, 349-358.</li> </ol> <ol> <li>For the first question, are you referring to how to obtain the representation of proteins from LLMs? Protein LLMs are multi-layered and can generate representations for each amino acid. Typically, we perform mean pooling on the output of the last layer over all amino acids to generate the feature representation of the protein.</li> <li>To predict the GO terms for proteins, we collected large-scale functional annotation information, where each GO term is associated with numerous proteins. 
We treated these proteins as positive instances and considered the remaining proteins as negative instances. Then we trained a classifier for each GO term, aiming to predict the association scores between proteins and the GO term.</li> </ol> <ol> <li>We split the dataset into training and validation sets, and used the training set to train the component methods. Then we used the trained component methods to predict functions for the validation proteins. Finally, we trained the ensemble model based on the predictions on the validation set. For more details, see "GOLabeler: improving sequence-based large-scale protein function prediction by learning to rank".</li> <li>Yep, we used psi-blast to find similar proteins for target proteins.</li> <li>LR-InterPro is also a component method derived from GOLabeler. It used InterProScan to extract protein families, domains, and motifs. Then we built a binary feature vector for each protein based on the InterProScan results, where a '1' in the j-th element indicates that the j-th feature is present in the target protein. Finally, we used these vectors to train logistic regression classifiers and make predictions.</li> <li>InterProScan is updated irregularly and collects more functional information. So, we believe that using InterProScan directly is more helpful.</li> </ol>
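<p>To make the per-term classifier idea above concrete, here is a minimal sketch of training one logistic regression classifier per GO term on binary InterProScan-style features. All names and data below are illustrative toy assumptions, not taken from the GOCurator codebase.</p> <pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

n_proteins, n_features, n_terms = 1000, 500, 20

# X[i, j] = 1 if InterProScan assigns feature j (family/domain/motif) to protein i
X = (np.random.rand(n_proteins, n_features) &gt; 0.95).astype(float)
# Y[i, t] = 1 if protein i is annotated with GO term t
Y = (np.random.rand(n_proteins, n_terms) &gt; 0.9).astype(int)

# One independent binary classifier per GO term: annotated proteins are
# positives, all remaining proteins are negatives
classifiers = [LogisticRegression(max_iter=1000).fit(X, Y[:, t])
               for t in range(n_terms)]

# Association scores between proteins and GO terms
scores = np.column_stack([clf.predict_proba(X)[:, 1] for clf in classifiers])
</code></pre>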
CAFA 5 Protein Function Prediction
3rd Place Solution for the CAFA 5 Protein Function Prediction
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and hosts for organising such an interesting competition.<br> This competition's task, "Predict the function of this protein using any available data", was linked to the fundamentals of biology, making it a very exciting experience for me.<br> Thanks also to the community; the discussions and notebooks were very helpful.</p> <h2>Overview</h2> <p>I used the following NN models.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2Fbf4221217029bc9d2ae30b78bf0d577d%2F.png?generation=1703936744527010&amp;alt=media" alt="overview"></p> <h2>Protein Sequence</h2> <p>I extracted embeddings from the protein sequences using protein language models.<br> The models I used include T5, ESM2t36, ESM2t48, and combinations of these.</p> <h2>Taxonomy Data</h2> <p>I one-hot encoded the taxonomy data and combined it with the embeddings generated by the protein language models. <br> During this process, only the 90 taxonomy IDs included in the test data were used.</p> <h2>Non-Experimental Annotation</h2> <p>The <a href="https://geneontology.org/docs/guide-go-evidence-codes/" target="_blank">evidence codes</a> of UniProt GOA annotations consist of 13 experimental labels and 11 non-experimental labels.<br> Experimental labels are used as GT, and non-experimental labels are not used in the training data.<br> So I extracted non-experimental labels from the UniProt GOA data and used them as additional features. <br> The non-experimental labels were one-hot encoded and transformed into a tensor of dimensions (batch size, 11, number of GT labels). <br> Then this tensor is processed using a 1D-CNN whose kernel_size is 1.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2F86368def7cbc9707ec8c8fb5c77f6217%2Fnon-exp-feat.png?generation=1704986539311314&amp;alt=media"></p> <h2>Validation Data</h2> <p>The training data was created from experimental annotations in UniProtKB as of November 17, 2022, and the test data (private leaderboard) consists of data experimentally verified after August 22, 2023. 
<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2F6ba93a1b07cbdcdd3004f88a8ebb7952%2Fvalidation.png?generation=1703936790356554&amp;alt=media"><br> Therefore, I approached the validation strategy as if it were time-series data.<br> Validation data was created by extracting experimental annotations from the latest UniProtKB data at the time of the competition, and any sub-ontology data that were already annotated in the training data were excluded from the validation data.</p> <p>Hi <a href="https://www.kaggle.com/danofer" target="_blank">@danofer</a> </p> <p>Good question!</p> <p>I wanted to use an MLP here.<br> But the Non-Experimental Annotations are predictions about what functions each protein has.<br> And when these are one-hot encoded, they will have a dimensionality equal to the number of GT labels.<br> To manage these tensors of size (batch size, 11, number of GT labels), I utilized the fact that a 1D-CNN with kernel size 1 is the same as a fully connected layer.</p> <p>I added a picture for this.</p> <p>Hi <a href="https://www.kaggle.com/danofer" target="_blank">@danofer</a> </p> <p>I made 4 types of DL models for the different PLMs. Embeddings are concatenated in type3 and type4. These 4 types of models are used for the final ensemble (simple average ensemble).</p> <pre>type1 ESM2t36
type2 ESM2t48
type3 ESM2t36+T5
type4 ESM2t48+T5
</pre> <p>I conducted a holdout validation using the "Validation Data" described above.</p>
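<p>As a small illustration of the kernel-size-1 trick above, here is a sketch (shapes are illustrative toy values, not the author's exact code) showing that a Conv1d with kernel_size=1 over the (batch size, 11, number of GT labels) tensor is equivalent to a fully connected layer applied independently at every label position:</p> <pre><code>import torch
import torch.nn as nn

batch_size, n_codes, n_labels = 4, 11, 1500  # 11 non-experimental evidence codes
x = torch.randn(batch_size, n_codes, n_labels)  # (batch, channels, length)

# Conv1d with kernel_size=1 mixes the 11 evidence-code channels at each
# label position independently
conv = nn.Conv1d(in_channels=n_codes, out_channels=1, kernel_size=1)
out = conv(x).squeeze(1)  # (batch, n_labels): one feature per GO term

# The same map written as a fully connected layer shared across all labels
fc = nn.Linear(n_codes, 1)
fc.weight.data = conv.weight.data.view(1, n_codes)
fc.bias.data = conv.bias.data
out_fc = fc(x.transpose(1, 2)).squeeze(-1)
assert torch.allclose(out, out_fc, atol=1e-6)
</code></pre>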
CAFA 5 Protein Function Prediction
5th Place Solution for the CAFA 5 Protein Function Prediction Challenge
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>A. MODEL SUMMARY</strong><br> <strong>A1. Background on you/your team</strong><br> Competition Name: CAFA 5 Protein Function Prediction<br> Team Name: hfm7zc<br> Private Leaderboard Score: 0.56171 (maximum weighted F-measure, wFmax)<br> Private Leaderboard Place: 5</p> <p>Name: Chengxin Zhang<br> Location: Department of Computational Medicine and Bioinformatics, University of Michigan – Ann Arbor, MI, USA<br> Email: <a href="mailto:zcx@umich.edu">zcx@umich.edu</a> </p> <p>Name: P Lydia Freddolino<br> Location: Department of Biological Chemistry, University of Michigan – Ann Arbor, MI, USA<br> Email: <a href="mailto:lydsf@umich.edu">lydsf@umich.edu</a></p> <p><strong>A2. Background on you/your team</strong><br> Our team consists of two faculty members working on algorithm development of protein function annotation tools as well as their proteome-wide application to microbes and humans. We previously participated in the CAFA3 challenge as team Zhang-Freddolino lab, which was ranked first in the limited-knowledge Biological Process category in CAFA3 [1]. We participated in CAFA5 to benchmark our latest protein function prediction methods, with a particular focus on assessing the utility of structure templates and deep learning in function prediction. C.Z. and P.L.F. conceived the project and designed the pipeline. C.Z. developed the method, performed the CAFA5 prediction and submitted the prediction result.</p> <p><strong>A3. Summary</strong><br> Our workflow, StarFunc, first independently generates five sets of Gene Ontology (GO) predictions for a query protein using five component methods:</p> <ul> <li>Sequence homolog search through the UniProt-GOA database by BLASTp [2].</li> <li>Structure alignment between the AlphaFold2 model [3] of the query protein and template structures from the BioLiP2 [4] and AlphaFold databases [5] by Foldseek [6] plus TM-align [7].</li> <li>Protein-protein interaction (PPI) partners recorded by the STRING database [8].</li> <li>Logistic regression models (one model per GO term) which use Pfam family matches as the input features.</li> <li>Prediction from a deep learning model (SPROF-GO) [9].</li> </ul> <p>The prediction scores from these five components, as well as the background frequency of the GO term in the UniProt-GOA database (i.e., the "Naive" method in the CAFA3 evaluation [1]), are used as six input features to train three random forest models (one model per GO aspect) to derive the consensus score used for the final submission. </p> <p><strong>A4. Features Selection / Engineering</strong><br> Based on the wFmax of the component methods on an independent test set, the features (i.e., predictions from component methods) ranked in descending order of importance are sequence, deep learning, structure, Pfam, PPI, and Naive (Figure 1). 
<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16315279%2Ffe2b66c5b505f9bad5465314041c8a2d%2FFeatureImportance.png?generation=1703270675891555&amp;alt=media" alt=""><br> <strong>Figure 1.</strong> Importance of different features, as measured by the wFmax of GO prediction on an independent test set.</p> <p>A major reason for the high performance of the sequence-based component method is the introduction of a new scoring function that weights each BLASTp hit by both the bit-score and the sequence identity [10]; a toy sketch of this kind of weighted voting is given at the end of this post. <br> For the component methods and the consensus prediction, only experimental GO terms (evidence codes EXP, IDA, IPI, IMP, IGI, IEP, HTP, HDA, HMP, HGI, HEP, TAS, and IC) are included in the training dataset. On the other hand, during training, if a protein already has a non-experimental GO annotation in UniProt-GOA, this GO annotation is also included in the final result, where the prediction score is calculated as one minus the error rate associated with the evidence code, as determined by our previous study [11]. Note that GO annotations with IBA evidence are always excluded, as they are found to have a higher error rate than other types of GO annotations [11].</p> <p><strong>A5. Training Method(s)</strong><br> We tested traditional Gradient Boosting Decision Tree (GBDT), Random Forest (RF), and Dropouts meet Multiple Additive Regression Trees (DART) implemented by LightGBM to ensemble the predictions from the five component methods. On an independent test set based on UniProt-GOA release 2023-05-18, we found that comparable results can be achieved by GBDT with 1000 trees and RF with 4000 trees, both of which are more accurate than DART. We eventually selected RF to derive the consensus prediction.</p> <p><strong>A6. Interesting findings</strong><br> The treatment of term GO:0005515 "protein binding" is different from previous CAFA challenges. In previous CAFA challenges (CAFA1 to CAFA3), proteins whose only Molecular Function (MF) leaf term is "protein binding" were excluded from MF evaluation. According to the CAFA3 paper, "protein binding is a highly generalized function description, does not provide more specific information about the actual function of a protein, and in many cases may indicate a non-functional, non-specific binding. If it is the only annotation that a protein has gained, … we deleted these annotations from our benchmark set." <br> However, the CAFA5 Kaggle challenge does not treat the "protein binding" term differently, while it is unclear whether the "protein binding" term will be treated differently in the academic paper reporting the CAFA5 official assessment. Due to these discrepancies, we submitted two different models using the same architecture to CAFA5, where model "zcx" excludes protein binding-only proteins when training the MF prediction model, while model "hfm7zc" includes these training proteins. Model "hfm7zc" achieves a better leaderboard score (0.56171, ranked 5) than "zcx" (0.55539, ranked 8), showing that the treatment of "protein binding" can have a significant impact on the final leaderboard ranking. We would like to register our opinion, however, that for the academic CAFA5 assessment, the exclusion of protein binding as a leaf term would be more appropriate and biologically useful. 
<br> We also note, in contrast to the findings reported below in section A7, that based on our prior experience we expect that for more challenging annotation targets, it is likely that the non-sequence pipelines will be more important [12, 13].</p> <p><strong>A7. Simple Features and Methods</strong><br> Based on an independent test, 98.7% of the performance (in terms of wFmax) can be achieved by the sequence-based component alone.</p> <p><strong>A8. Model Execution Time</strong><br> The random forest models as well as the logistic regression models mentioned in section A3 can be trained within &lt; 1 day using the CAFA5 training set. To perform inference, the overall pipeline takes a few minutes per protein, where most of the time is spent by the structure template alignment component. If only the sequence-based component is run, it takes at most a few seconds per protein in large-scale inference.</p> <p><strong>A9. Acknowledgement</strong><br> We thank Quancheng Liu and Dr Xiaoqiong Wei for technical support and insightful discussions. This work used the Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.</p> <p><strong>A10. References</strong></p> <ol> <li>Zhou NH, et al: The CAFA challenge reports improved protein function prediction and new functional annotations for hundreds of genes through experimental screens. Genome Biology 2019, 20.</li> <li>Altschul SF, Madden TL, Schaffer AA, Zhang J, Zhang Z, Miller W, Lipman DJ: Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res 1997, 25:3389-3402.</li> <li>Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, Tunyasuvunakool K, Bates R, Zidek A, Potapenko A, et al: Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596:583-589.</li> <li>Zhang C, Zhang X, Freddolino PL, Zhang Y: BioLiP2: an updated structure database for biologically relevant ligand-protein interactions. Nucleic Acids Res 2023.</li> <li>Varadi M, Anyango S, Deshpande M, Nair S, Natassia C, Yordanova G, Yuan D, Stroe O, Wood G, Laydon A, et al: AlphaFold Protein Structure Database: massively expanding the structural coverage of protein-sequence space with high-accuracy models. Nucleic Acids Res 2022, 50:D439-D444.</li> <li>van Kempen M, Kim SS, Tumescheit C, Mirdita M, Lee J, Gilchrist CLM, Soding J, Steinegger M: Fast and accurate protein structure search with Foldseek. Nat Biotechnol 2023.</li> <li>Zhang Y, Skolnick J: TM-align: a protein structure alignment algorithm based on the TM-score. Nucleic Acids Res 2005, 33:2302-2309.</li> <li>Szklarczyk D, Kirsch R, Koutrouli M, Nastou K, Mehryary F, Hachilif R, Gable AL, Fang T, Doncheva NT, Pyysalo S, et al: The STRING database in 2023: protein-protein association networks and functional enrichment analyses for any sequenced genome of interest. Nucleic Acids Research 2023, 51:D638-D646.</li> <li>Yuan Q, Xie J, Xie J, Zhao H, Yang Y: Fast and accurate protein function prediction from sequence through pretrained language model and homology-based label diffusion. Briefings in Bioinformatics 2023, 24.</li> <li>Zhang C, Freddolino PL: A large-scale assessment of sequence database search tools for homology-based protein function prediction. bioRxiv 2023:2023.11.14.567021.</li> <li>Wei X, Zhang C, Freddolino PL, Zhang Y: Detecting Gene Ontology misannotations using taxon-specific rate ratio comparisons. Bioinformatics 2020, 36:4383-4388.</li> <li>Zhang C, Freddolino PL, Zhang Y: COFACTOR: improved protein function prediction by combining structure, sequence and protein-protein interaction information. Nucleic Acids Res 2017, 45:W291-W299.</li> <li>Zhang CX, Zheng W, Freddolino PL, Zhang Y: MetaGO: Predicting Gene Ontology of Non-homologous Proteins Through Low-Resolution Protein Structure Prediction and Protein Protein Network Mapping. Journal of Molecular Biology 2018, 430:2256-2265.</li> </ol>
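<p>As promised above, here is a toy illustration of the weighted-voting idea behind the sequence component. The exact scoring function is described in reference [10]; the weight used below, bitscore times identity, is only a hypothetical stand-in:</p> <pre><code>from collections import defaultdict

# (hit_protein, bitscore, sequence_identity) for one query, e.g. from BLASTp
hits = [("P1", 250.0, 0.92), ("P2", 120.0, 0.55)]
annotations = {"P1": {"GO:0005515", "GO:0016301"}, "P2": {"GO:0005515"}}

raw, total = defaultdict(float), 0.0
for prot, bitscore, identity in hits:
    w = bitscore * identity  # each hit votes with a combined weight
    total += w
    for term in annotations.get(prot, ()):
        raw[term] += w

scores = {term: v / total for term, v in raw.items()}  # normalized to [0, 1]
print(scores)  # GO:0005515 gets votes from both hits, GO:0016301 from one
</code></pre>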
CAFA 5 Protein Function Prediction
6th Place Solution for the CAFA 5 Protein Function Prediction Challenge
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>A. MODEL SUMMARY</p> <p>A1. Background on you/your team<br> Competition Name: CAFA 5 Protein Function Prediction<br> Team Name: Evans<br> Private Leaderboard Score: 0.56076<br> Private Leaderboard Place: 6<br> Name: Quancheng Liu<br> Location: Department of Computational Medicine and Bioinformatics, University of Michigan – Ann Arbor, MI, USA<br> Email: <a href="mailto:quanch@umich.edu">quanch@umich.edu</a></p> <p>A2. Background on you/your team<br> I am a PhD student currently working in the Freddolino Lab, engaged in developing algorithms for protein function annotation tools, with a specific focus on leveraging deep learning techniques. My participation in the CAFA 5 competition was motivated by the objective of benchmarking my method. This competition offered a valuable opportunity to evaluate the efficacy of my approach within a competitive, real-world context.</p> <p>A3. Summary<br> InterLabelGO integrates a sequence alignment-based method with a neural network-based method.<br> In the neural network component, the model leverages the last three layers from the ESM model as input features. The architecture of this model includes several feedforward layers.<br> The sequence alignment-based method utilizes blastp/diamond to query the UniProt-GOA database. Key metrics such as bitscore and sequence identity are used to compute a weighted average score from the search hits.<br> For the neural network models in InterLabelGO, training each model takes approximately 3 minutes on an RTX 4090 GPU.</p> <p>A4. Features Selection / Engineering<br> The primary features utilized in InterLabelGO are the embeddings generated by the ESM model. These embeddings are critical as they encapsulate rich, contextual information about protein sequences, which is essential for accurate function annotation.</p> <p>A5. Training Method(s)<br> InterLabelGO used 5-fold random splits of the training data and trained 5 different models for each GO aspect; the final prediction for each GO term was the mean score across the 5 models.<br> Instead of using BCE loss, as is often the case in comparable methods, InterLabelGO applied an IA-weighted soft F1 loss at both the protein level and the GO term level. Additionally, InterLabelGO also uses a rank loss (Figure 1), which is the logsumexp of the zero term minus the positive terms, plus the negative terms minus the zero term. This rank loss was meant to capture the label correlations. During training, these loss functions are combined through simple multiplication, integrating their individual contributions to the model's learning process.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13769621%2Fe4d50771db7119d7105b2548b8dfd95e%2FPicture1.png?generation=1704901831021262&amp;alt=media" alt=""><br> Finally, the prediction from the neural network method is linearly combined with that of the alignment-based method. The optimal weight was obtained from our validation dataset.</p> <p>A6. 
Interesting findings<br> One of the most significant discoveries in the development of InterLabelGO was the application of the IA-weighted soft F1 loss, which led to a substantial improvement in model performance compared to the traditional BCE loss. This finding is particularly noteworthy as it may point to a direction for solving the prevalent issue of label imbalance in protein function prediction.</p> <p>A7. Simple Features and Methods<br> The neural network models alone, utilizing ESM embeddings with one model per aspect, can achieve approximately 90-95% of the full model's performance, offering a simplified yet effective solution.</p> <p>A8. Model Execution Time<br> It takes around 60 minutes to train the models for the 3 aspects (5 models per aspect).<br> It takes less than 10 minutes to generate predictions for around 2000 target proteins with the neural network method, where most of the time is spent on ESM feature extraction.<br> It takes less than 2 minutes to generate predictions for around 2000 target proteins with the alignment-based method using diamond.<br> All of the above timings were measured on Ubuntu 23.10 with an RTX 4090 and an i9-13900K.</p> <p>A9. Acknowledgement<br> I would like to extend my sincere thanks to Dr. Chengxin Zhang and Dr. Lydia Freddolino for their insightful discussions.</p> <p>A10. Reference<br> Su, J. et al. (2022) ZLPR: A novel loss for multi-label classification, arXiv.org. Available at: <a href="https://arxiv.org/abs/2208.02955" target="_blank">https://arxiv.org/abs/2208.02955</a>. </p>
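<p>To illustrate the loss that drove this improvement, here is a minimal sketch of an information-accretion-weighted soft F1 loss at the GO-term level; the exact InterLabelGO formulation (its protein-level counterpart and the combination with the rank loss) may differ:</p> <pre><code>import torch

def ia_soft_f1_loss(logits, targets, ia_weights, eps=1e-8):
    # logits, targets: (batch, n_terms); ia_weights: (n_terms,)
    p = torch.sigmoid(logits)
    w = ia_weights.unsqueeze(0)             # broadcast IA weights over the batch
    tp = (w * p * targets).sum(dim=0)       # soft true positives per term
    fp = (w * p * (1 - targets)).sum(dim=0)
    fn = (w * (1 - p) * targets).sum(dim=0)
    f1 = 2 * tp / (2 * tp + fp + fn + eps)  # soft F1 per GO term
    return 1.0 - f1.mean()                  # differentiable, unlike hard F1

logits = torch.randn(8, 100, requires_grad=True)
targets = torch.randint(0, 2, (8, 100)).float()
loss = ia_soft_f1_loss(logits, targets, torch.rand(100))
loss.backward()
</code></pre>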
CAFA 5 Protein Function Prediction
9th place solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and Kaggle for an interesting competition, along with a major thanks to the community for many helpful ideas and notebooks throughout the submission period. Also thanks to my amazing teammates <a href="https://www.kaggle.com/aypyaypy" target="_blank">@aypyaypy</a> and <a href="https://www.kaggle.com/ahmedelfazouan" target="_blank">@ahmedelfazouan</a>.</p> <p>Below is a short summary of our approach. The approach is an ensemble of two different ideas that turned out to complement each other well after merging.</p> <h1>Feed-forward NN</h1> <ol> <li>As input features, we went with: Prot-T5 embeddings, ESM2 embeddings, Taxo features and the (arguably more risky) QuickGO annotations.</li> <li>We predict the most common x_amount of annotations as targets after propagation. In the final solutions, we ensemble models which predict different numbers of the most common annotations, varying from 2000 to 3500 targets in steps of 500. This turned out to result in an important leap in terms of public score.</li> <li><strong>Modeling</strong>: skip layers significantly increased CV and public score. In particular, each dense layer output is forwarded as input to a last layer, which receives a concatenated set of these layers.</li> <li>Basic 10-fold CV scheme with seed ensembling for a slight score boost.</li> </ol> <p>Our best single model reached 0.605.</p> <p>Below is a representation of the shift from using different targets + seed ensembling compared to a single model. The mixed sub predicts pred=1 more confidently.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11232701%2F16b9f779c022f3cf68f5d6c7f01c5cbe%2Fmixedtargets.png?generation=1703099790462286&amp;alt=media" alt=""></p> <h2>Propagation</h2> <p>The second part consists of a less ML-based approach. </p> <ol> <li><p>Merge SprofGO and QuickGO using the method introduced by kirill's public NB (i.e. after taking the average, select the top45 of each aspect; <a href="https://www.kaggle.com/code/kirilldubovik/cafa5-tuning-merge-datasets)" target="_blank">https://www.kaggle.com/code/kirilldubovik/cafa5-tuning-merge-datasets)</a>.</p></li> <li><p>Propagation process: if any child GO term has a predicted probability larger than its parent's, the probability of the parent GO is replaced by the maximum probability of its child GO terms. This operation propagates from the leaf GO terms up to the root (a toy sketch is given at the end of this post).</p></li> </ol> <p>The result of the propagation process is visualized below. As may be expected, a large number of probabilities are set to 1. 
The individual public score of this approach reaches 0.597.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11232701%2Ff0cbdd2fa1189dad1768f258b4516cc5%2Fpropagation1.png?generation=1703100755063416&amp;alt=media" alt=""></p> <h3>Extra</h3> <p>We also tried some other approaches, but sadly cannot report on their performance, since the private score is calculated for the selected submissions only, which is understandable from a computational perspective.</p> <p>Some of the stuff we tried:</p> <ul> <li>Correct for the overlap in train/test set by including a nested CV with pseudo labels</li> <li>Replace test predictions in the rows that already exist in the training set by using the OOF values</li> <li>CNNs (did not perform well at all)</li> <li>Any other sort of embeddings reduced model performance</li> </ul>
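<p>Here is a minimal sketch of the propagation step described above (toy graph and scores; the GO graph is a DAG, but a tree suffices to show the idea): each parent's probability is raised to the maximum probability found among its descendants, working from the leaves to the root.</p> <pre><code>from collections import defaultdict

children = defaultdict(list)  # parent GO term -&gt; child GO terms
for child, parent in [("GO:3", "GO:1"), ("GO:2", "GO:1"), ("GO:4", "GO:2")]:
    children[parent].append(child)

scores = {"GO:1": 0.2, "GO:2": 0.5, "GO:3": 0.9, "GO:4": 0.7}

def propagate(term):
    # post-order traversal: children are finalized before their parent
    best_child = max((propagate(c) for c in children[term]), default=0.0)
    scores[term] = max(scores[term], best_child)
    return scores[term]

propagate("GO:1")
print(scores)  # {'GO:1': 0.9, 'GO:2': 0.7, 'GO:3': 0.9, 'GO:4': 0.7}
</code></pre>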
CAFA 5 Protein Function Prediction
A brief overview of 16th-place LB solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It was an incredible adventure and the first competition I ever took part in! I was very lucky to find an experienced teammate - <a href="https://www.kaggle.com/ogurtsov" target="_blank">@ogurtsov</a>, who was the leader of our team. This journey was not just about algorithms and data; it was a journey of exploration, curiosity, and learning, so thanks to the organizers, fellow participants, and the Kaggle community!</p> <p>Here is a brief overview of our 16th-place LB solution:</p> <p><strong>1) Data:</strong><br> We used the initial train set and also annotations from the UniProt and QuickGO databases (filtering out terms with the NOT qualifier, evidence code "ND", obsolete terms, and those not present in the competition OBO file) in two ways: to update the train set (proteins with experimentally confirmed terms only) and to merge with predictions (test proteins with terms never experimentally confirmed).</p> <p><strong>2) Features</strong></p> <p>ESM2 3B protein sequence embeddings kindly shared by <a href="https://www.kaggle.com/andreylalaley" target="_blank">@andreylalaley</a> with Z-score normalization (means and SDs of the train data applied to train, validation, and test; see the sketch at the end of this post).</p> <p><strong>3) Targets:</strong><br> Our target selection involved selecting the top 500 GO terms based on their frequency, or terms with a frequency above a certain threshold (BP &gt; 250, MF and CC &gt; 50).</p> <p><strong>4) Neural Networks:</strong></p> <p>The modeling strategy involved multiple Multi-Layer Perceptrons (MLPs) with different hyperparameters and layers, written with Torch or TensorFlow in the R language. Each was trained on the initial or updated training set and different sets of targets. To ensure robustness, we employed cross-validation (CV) with 10 folds (one for validation and 9 for training) followed by averaging the predictions. For some models, we found it more effective to validate on train proteins only; in these cases, we used randomized 10-fold CV to avoid reducing the validation set. Importantly, we trained a separate MLP for each ontology, each time only on those proteins that have at least one term from the selected top.</p> <p>UPD: For all MLPs we used the AdamW optimizer, a <a href="https://torch.mlverse.org/docs/reference/lr_one_cycle.html" target="_blank">one-cycle learning rate</a>, and either an early stopping callback (max 100 epochs, monitored metric - valid loss, with patience 1, 3 or 5) or a <a href="https://rdrr.io/cran/keras/man/callback_reduce_lr_on_plateau.html" target="_blank">callback reducing the learning rate</a> by a factor of 0.5 (patience = 1).</p> <p><strong>5) Post-processing:</strong><br> First, we filtered the predictions using taxon constraints to eliminate terms incompatible with certain species. Also, after each step, we made corrections to ensure that the predictions of parent terms were not smaller than the predictions of child terms (in such cases we propagated the max prediction of the child terms). 
Finally, before the final merge with GO annotations, the predictions were filtered again to include only terms in the top 450 by frequency based on the original training set, and also rare terms if they had predictions &gt; 0.5.</p> <p><strong>6) Ensemble:</strong><br> Predictions from 6 MLPs were simply averaged. The resulting predictions were averaged again with exported GO annotations (the set of test proteins with terms never experimentally confirmed).</p> <p>For the secondary solution, we did weighted averaging with GO annotations and selected the top 35 predictions for each ontology.</p> <p>I hope that this will be beneficial for others, serving as an example of what can be done (if we shake up) or what should be avoided (if we shake down at the end) 😆.</p>
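<p>As promised, here is a quick sketch of the Z-score normalization mentioned in (2): the means and SDs are computed on the training embeddings only and then applied to every split. Shapes are toy values, and the sketch is in Python even though the team's pipeline was in R:</p> <pre><code>import numpy as np

train_emb = np.random.randn(1000, 2560)  # e.g. ESM2 3B embeddings (toy data)
valid_emb = np.random.randn(200, 2560)
test_emb = np.random.randn(500, 2560)

mu = train_emb.mean(axis=0)
sd = train_emb.std(axis=0) + 1e-8        # guard against zero variance

train_z = (train_emb - mu) / sd
valid_z = (valid_emb - mu) / sd          # train statistics, not their own
test_z = (test_emb - mu) / sd
</code></pre>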
CAFA 5 Protein Function Prediction
First Kaggle competition : 10th place public leaderboard; Nth place private leaderboard
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>when I reduced the dimensions of ProtBERT to 3dims using UMAP/tSNE and added it, the score improved. I think this was because the information was compressed.</p> </blockquote> <p>That's a very nice approach, something I should definitely have experimented more with!</p> <p>Thank you for the kind words. Using text-based information from paper abstracts is a method we borrowed directly from that of Zhu lab's NetGO model, but it really only helps you with proteins that are adequately described in the literature!</p> <h2><strong>Introduction</strong></h2> <p>Firstly I want to thank the organizers for a very nice competition - having it based on Kaggle opened up submissions to a wider range of people compared to previous iterations, when mainly specialist labs showed results, so the wider net of people being exposed to CAFA is really gratifying to see. My team worked hard on this problem, and although we fell short in the rankings, it is humbling to look back at the progress we made. Both of us are molecular biologists with only limited bioinformatics and machine learning experience - so this competition was something of a trial by fire and we learned a lot from taking part.</p> <h2><strong>Approach</strong></h2> <p>We used these datasets in our model: <br> 1) Prot-T5, ESM2, and Ankh Protein Language Model (PLM) embeddings. We carried out no further modifications or finetuning on the output of the PLMs, only conversion to float32 to save memory. <br> 2) A single binary matrix representing species taxonomy for each protein.<br> 3) Text information obtained by tf-idf of abstract information from academic papers associated with each protein</p> <p>For GO labels, we separately identified 1500 BPO, 800 CCO, and 800 MFO terms for each model to classify, taking the top N labels per ontology sorted by IA*frequency (see the sketch at the end of this post). </p> <p>Datasets were sorted in alphabetical order to ensure identical ordering across datasets, as well as to remove a single duplicate row in the test data. </p> <p>We made a simple dense neural network in keras that takes these as separate inputs before concatenating them into a final layer that attempts to predict every label from a single GO ontology/domain. We used binary cross entropy (BCE) as our loss function and the Adam optimizer with lr=0.0003 and a simple lr scheduler. We further used the IA weights as class_weight when calling model.fit(). We added dropout and batch normalization layers to our model as well.</p> <p>Our model was trained with KFold = 5 across 5 different random splits of the training data before a simple average of predictions was made for each ontology/domain and the results concatenated. </p> <p>One interesting thing we observed is that Ridge regression sometimes performed better on PLM embeddings compared to a neural network for some ontologies, which was unexpected and might indicate underfitting for the neural network. </p> <h2><strong>What didn't work</strong></h2> <p>ProtBERT embeddings performed poorly and were not utilized at all. <br> Data from STRING did not prove helpful in the final model. 
<br> We attempted to use a separate transformer layer that takes in label embeddings created with Anc2Vec (<a href="https://academic.oup.com/bib/article/23/2/bbac003/6523148" target="_blank">https://academic.oup.com/bib/article/23/2/bbac003/6523148</a>) but it did not improve model performance.<br> We attempted to use a hierarchy-aware loss function (<a href="https://papers.nips.cc/paper/2020/file/6dd4e10e3296fa63738371ec0d5df818-Paper.pdf" target="_blank">https://papers.nips.cc/paper/2020/file/6dd4e10e3296fa63738371ec0d5df818-Paper.pdf</a>) but it did not perform better than BCE.</p> <h2><strong>Scores</strong></h2> <p>Using merely the PLM embeddings along with taxonomic information we obtained a public leaderboard (PL) score of about 0.55. Including text information increases the PL score to ~0.58.<br> Ensembling with a public ensemble notebook (<a href="https://www.kaggle.com/code/adaluodao/merge-datasets" target="_blank">https://www.kaggle.com/code/adaluodao/merge-datasets</a>) increases the PL score to ~0.62.</p> <h2><strong>EDIT</strong></h2> <p>4th place overall! <br> Code used to generate our models can be found here:<br> <a href="https://github.com/zongmingchua/cafa5" target="_blank">https://github.com/zongmingchua/cafa5</a> <br> Final ensembling is shown here:<br> <a href="https://www.kaggle.com/code/zmcxjt/merge-datasets" target="_blank">https://www.kaggle.com/code/zmcxjt/merge-datasets</a></p> <blockquote> <p>Were you submitting all 1500+800+800 for each protein or did further pre-selection?</p> </blockquote> <p>In order to minimize our submission size, we actually only selected the top 5% of predictions based on the probability assigned by our model. The final submission file ends up in the range of 500 MB in size. </p> <p>Due to the rarity and low IA weight of many GO terms, most proteins will have far fewer than 1500 GO terms predicted for them.</p> <blockquote> <p>Did you experiment with other schemes to weigh positive labels? (maybe IA*frequency that you mentioned above)?</p> </blockquote> <p>Unfortunately we did not! But I think the effects would be subtle. </p> <blockquote> <p>I've seen people arguing against combining batch norm and dropout citing this paper but your stellar results suggest that potential drop in performance is limited after all!</p> </blockquote> <p>Due to the large amount of input data being simultaneously fed in, the model quite rapidly begins to overfit, which was not something we ever managed to completely overcome. We attempted many ways to regularize but were not too successful in doing so.</p>
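<p>To make the label selection described above concrete, here is a small sketch of ranking GO terms by IA*frequency within each ontology and keeping the top N. The data and column names are hypothetical toy values:</p> <pre><code>import pandas as pd

terms = pd.DataFrame({
    "term": ["GO:1", "GO:2", "GO:3", "GO:4"],
    "aspect": ["BPO", "BPO", "CCO", "MFO"],
    "frequency": [5000, 120, 800, 300],  # counts in the training annotations
    "ia": [0.1, 2.5, 0.8, 1.2],          # information accretion weights
})
terms["score"] = terms["ia"] * terms["frequency"]

top_n = {"BPO": 1500, "CCO": 800, "MFO": 800}
selected = pd.concat(
    g.nlargest(top_n[aspect], "score") for aspect, g in terms.groupby("aspect")
)
print(selected["term"].tolist())
</code></pre>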
CAFA 5 Protein Function Prediction
Private 2nd/Public 5th solution: Py-Boost and GCN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your feedback. Yes, applying the graph transformations was a little bit tricky, and I spent a lot of time learning it better and writing efficient implementations. So I am not surprised it could be difficult to understand, but unfortunately I have no idea how to explain it more clearly.</p> <p>I used only taxon frequencies to select. I made one-hot features only for the taxons that have a frequency above the threshold (I don't remember exactly, I just saved the IDs at the beginning after selection, but it is around 80) in both the train and test sets. </p> <p>Hi everyone!</p> <p>First, thanks to the Kaggle and CAFA teams for the competition. The task we were solving was both important and challenging, and even though there were some frustrating moments, such as the low number of proteins on the LB and the unobvious evaluation code, we enjoyed solving this problem a lot.</p> <p>Here is a brief overview of our public 5th place solution:</p> <h3>1) Sequence embedding</h3> <p>We tried to use the following embedding list: T5, esm2-large, ankh-large. Finally, most of the models used only T5, but some of them used a concat of T5+ESM. In addition, we concatenated one-hot taxon features to the embedding. We selected only taxons that are well enough represented in both train and test features, around 30 in total; the other taxons were merged into a single group.</p> <h3>2) Base models</h3> <h5>Validation</h5> <p>We were not able to create a validation scheme better than a simple 5-fold CV. We ran some experiments on the topic, but other CV schemes led to models with lower LB scores.</p> <h5>Py-boost</h5> <p>The best performing models on both CV and LB are from the gradient boosting family. We used my own GBDT implementation called <code>py-boost</code>. I made it a few years ago especially to deal with extreme multi-output datasets, since that was my main research area at the time. It works on GPU only and is able to train multi-label models tens or even hundreds of times faster than popular well-known implementations. You can check <code>py-boost</code> on <a href="https://github.com/sb-ai-lab/Py-Boost" target="_blank">github</a> or read our NeurIPS <a href="https://arxiv.org/pdf/2211.12858.pdf" target="_blank">paper</a> where we explain all the strategies to speed up multi-output training.</p> <p>Locally I was able to fit a 4.5k-output (3000/1000/500) <code>py-boost</code> on a single V100 32GB GPU, and it takes about 1.5 hours for a single fold.</p> <h5>Logistic Regression</h5> <p>We also trained a simple 13k-output Logistic Regression. It shows much lower performance than GBDT on the popular terms, but is able to perform on rare outputs.</p> <h5>Neural networks</h5> <p>In the ensemble of models, we used a slightly modified version of the public notebook <a href="https://www.kaggle.com/code/alexandervc/pytorch-keras-etc-3-blend-cafa-metric-etc#Optimizer-%22Sophia%22-sometimes-better-than-Adam" target="_blank">Pytorch,Keras,Etc 3 Blend, CAFA metric, etc</a>. 
The only difference was the best cross-validated combination of hyperparameters, averaged many times.</p> <h3>3) Alternative modelling approach - predicting the conditional probabilities</h3> <p>This approach was discovered at the beginning of the competition; that's why it makes some wrong assumptions about the data. But somehow it became useful for us on both CV and LB. The main advantage of this approach is utilising the OBO graph at the inference phase, which helps to make a prediction even for terms that were not used in training. Here are the main points:</p> <ul> <li>We assume that a term can exist for the protein only <strong>if at least one of its parents exists</strong>. This is wrong. In reality, if a term exists, all its parents exist too because of the propagation rules.</li> <li>We reformulate the classic multi-label scheme, where the target matrix of shape (n_protein, n_terms) consists of 0 and 1, into a new scheme. Now targets can be 0, 1 and NaN. A term for a protein will have a NaN value in the matrix if there is no parent term with value 1. </li> <li>During the model training phase, NaN cells in the target matrix are masked and ignored.</li> <li>Now, our model outputs <strong>the conditional probability of a term given that at least one of its parents exists</strong>. At the inference phase, we need to transform it back to the raw probabilities.</li> <li>The transformation is made in the order defined by the graph. When we process a term, all its parents are already processed and have raw probabilities. All terms are included in the scheme, even if they are not used in training. For the terms that were not used for training, we used the prior mean.</li> <li>While processing a term, we make another wrong assumption, that the parents' probabilities for the term are independent. But if we assume that, according to [1], we can calculate the raw probability for a term as<br> <code>p_term_raw = p_term_cond * (1 - (1 - p_parent_0_raw) * (1 - p_parent_1_raw) * ... * (1 - p_parent_N_raw))</code>. <br> Remember that while we process a term, all its parents already have raw probabilities calculated (a small sketch of this transformation is given at the end of this post).</li> </ul> <p>That approach scores better on both CV and LB, but not by much. Its main advantage is that the models are very different from the classic multi-label approach, and combining them all together boosts our score a lot. We applied this technique to the GBDTs and LogRegs, so finally we had 2 GBDTs, 2 LogRegs and a single blended NN as the base models.</p> <h3>4) Stacking with GCN</h3> <p>We used a graph convolution network to aggregate all the predictions. It is trained for the node classification task, where each node is a term and each protein is a graph (but all the proteins have the same adjacency matrix). As the node features we used the base models' predictions together with a node embedding trained from scratch. We also added the GO annotations features described in the next section.</p> <p>One interesting feature we discovered by just making a mistake: at the inference stage we had wrong model ids but still got a good score, so we made a kind of test-time augmentation by averaging the predictions with shuffled models.</p> <p>Another important thing I should mention is about the metric. As far as I understand, we are evaluated only on protein/ontology pairs that are experimentally found. So, once we predict a term from an ontology that does not exist, we will not get any penalty at all! That means that we actually need to estimate <strong>the conditional probability of a term given that its ontology exists</strong>. 
So the correct way to fit a final stacker model for ontology X is to truncate the sample and take only the proteins that contain the terms from ontology X. It gives a small boost to the score and speeds up the computations. </p> <h3>5) GO annotations</h3> <p>We discovered the GO annotations dataset provided at this <a href="http://ftp.ebi.ac.uk/pub/databases/GO/goa/UNIPROT/goa_uniprot_all.gaf.gz" target="_blank">link</a>. It provides not only the labelling but also the evidence codes of each term. We separated the codes that Kaggle supposes to be experimental (hope we understood it correctly) from the electronic codes. So we can use the electronic labelling as features to predict the experimental labelling given by Kaggle. From our analysis we discovered that about 30% of electronic labels become experimental, so using it as a model feature performs better than just adding it as is. Experimental labelling was added as is for about 500 proteins we were able to find in this dataset. We also added the raw labelling that MT provided if we could not observe it in our dataset. The last step almost didn't change our score, so the sources are almost the same. </p> <h3>6) Postprocessing</h3> <p>We also used the OBO graph for post-processing. The problem with ML model predictions of protein terms is that they are inconsistent. Following the propagation rule that is applied to the target (I checked it in the CAFA-Evaluator <a href="https://github.com/BioComputingUP/CAFA-evaluator" target="_blank">repo</a>), if a term exists, all its parents are assumed to exist too. So a consistent model will never predict a probability for a parent lower than the term's probability. But our models don't care about that at all, so we can manually fix this situation. Our final term prediction is <strong>the average of the term probability, the maximum propagated children probability, and the minimum propagated parents probability</strong>. That trick boosts the score just a little on the LB, but we hope it makes the model a little bit more stable.</p> <h3>P.S.</h3> <p>We wish good luck to all the competitors since, of course, like many of you we expect a big shake-up this time. I hope all our work will not be wasted :). Looking forward to seeing the final results.</p> <p><strong>Update</strong> All the solution code is open source now and available at <a href="https://github.com/btbpanda/CAFA5-protein-function-prediction-2nd-place" target="_blank">Github</a></p>
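<p>As promised, here is a minimal sketch of the conditional-to-raw transformation described in point 3 (toy graph, hypothetical variable names). Terms are processed in topological order so that every parent already has its raw probability when a term is reached:</p> <pre><code># parent lists per GO term; root terms keep their conditional score as raw
parents = {"GO:root": [], "GO:a": ["GO:root"], "GO:b": ["GO:root"],
           "GO:c": ["GO:a", "GO:b"]}
p_cond = {"GO:root": 0.9, "GO:a": 0.6, "GO:b": 0.4, "GO:c": 0.8}

p_raw = {}
for term in ["GO:root", "GO:a", "GO:b", "GO:c"]:  # topological order
    ps = parents[term]
    if not ps:
        p_raw[term] = p_cond[term]
    else:
        # assuming independent parents: P(at least one parent) = 1 - prod(1 - p)
        none = 1.0
        for p in ps:
            none *= 1.0 - p_raw[p]
        p_raw[term] = p_cond[term] * (1.0 - none)

print(p_raw)  # GO:c -&gt; 0.8 * (1 - 0.46 * 0.64) = 0.56448
</code></pre>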
CONNECTOMICS
Deep CNN for Time Series Correlation Measurement
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CONNECTOMICS <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Dear Julian,</p> <p>Thank you very much for your comment. I have drawn a figure to explain it better. In this figure you can see the details of the subsequent layers and their dimensions. Numbers in bold font are the CNN parameters, and the other ones are inferred from them.</p> <p>Since in a CNN we use feature maps, I wrote the dimensions of a filter applied to one feature map; however, the feature maps are fully connected, i.e. a single value in a particular feature map is computed based on NUM_FEATURE_MAPS_PREVIOUS_LAYER x FILTER_SHAPE weights.</p> <p>In the figure there are also some extra visualisations, e.g. the small green square represents a single value in a single feature map, calculated by applying a filter of 18 x 2 x 5 weights (which are this feature map's parameters to be learnt during training) to all of the feature maps in the previous layer.</p> <p>The total number of parameters in the convolutional layers to be learnt is therefore:<br>2x18x1x2x5 = 360 <br>2x40x18x2x5 = 14400<br>2x15x40x1x1 = 1200</p> <p>Total: 15960 parameters (the first '2x' stands for an edge weight and bias), excluding the last layers', but the time spent learning the parameters in those ordinary layers is negligible.</p> <p>Below is a link to the picture. Yes, there is 1 input of 3 x 330 (2D) dimensions.</p> <p>https://drive.google.com/file/d/0B98iXdxQRL0TRF9xbW1fUE8tSm8</p> <p>Regarding your question, when the activity row was also added adjacent to Fragment_cell1, the result was almost the same. A much higher accuracy gain was achieved by increasing the training set size by 33% than by enlarging the input itself (assuming limited memory). It seems that the CNN was already able to learn properly, since that row was already given to it. Regarding activation functions, firstly I used only tanh as a default, and then tested other ones. Since positive indications of correlation should be additive, it turned out that this improved the result; it is said that ReLU is &quot;argued to be more biologically plausible&quot;, and furthermore I don't want to let any accidental negative correlation pattern decrease the correlation confidence.</p> <p>Best regards,</p> <p>Lukasz</p> <p>Hello,</p> <p>here is a brief description of my solution:</p> <p>The solution is a deep convolutional neural network (4th place, 93.96% AUC score), and is a novel application of CNNs to time series correlation measurement. It takes as input two fragments of activity recordings (derivative) and detects patterns indicating correlations between them. One significant improvement was developed by providing one additional row to these two - the overall brain network activity increase in each timeframe, allowing the CNN to learn the level of noise and the different behavior depending on the network activity level. Moreover, the recordings are firstly frame-filtered based on an optimal threshold of minimum activity increase in a given time frame, keeping fragments of high activity and decreasing the input size to around 1% of the initial input length.</p> <p>Training uses the same number of positive and negative examples, 1.2 million in total. 
The code was developed in Python with Theano and is designed for running on a GPU. The initial ideas of the network structure were based on the LeNet-5 network for digit classification (by Y. LeCun). The network behavior has been inspected deeply, including exporting activation values of Theano hidden layer units and classifying them with SVM (libSVM, libLinear), allowing me to adapt the network architecture specifically to the task of correlation detection.</p> <p>Network architecture:</p> <pre>Input [3x330]
   = [Fragment_cell1,
      Fragment_cell2,
      Network_activity]</pre> <p>Please note that a length of 330 covers around 20% of the total simulation time before frame-filtering.</p> <pre>Convolutional, 18 units (= feature maps for convolutional layers), tanh, filter shape = [2x5]
Convolutional, 40 units, ReLU, filter shape = [2x5]
Convolutional, 15 units, ReLU, filter shape = [1x1], max pooling [1x10]
Fully Connected, 100 units, tanh
Softmax</pre> <p>Best regards,<br>Lukasz Romaszko</p>
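<p>For readers who want to follow the shapes, here is a minimal PyTorch re-creation of the architecture as listed above. The layer sizes follow the post; the original was written in Theano, and the training details are not reproduced:</p> <pre><code>import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 18, kernel_size=(2, 5)), nn.Tanh(),   # output (18, 2, 326)
    nn.Conv2d(18, 40, kernel_size=(2, 5)), nn.ReLU(),  # output (40, 1, 322)
    nn.Conv2d(40, 15, kernel_size=(1, 1)), nn.ReLU(),  # output (15, 1, 322)
    nn.MaxPool2d(kernel_size=(1, 10)),                 # output (15, 1, 32)
    nn.Flatten(),
    nn.Linear(15 * 32, 100), nn.Tanh(),
    nn.Linear(100, 2),  # softmax applied by the loss at training time
)

# one input: [fragment_cell1, fragment_cell2, network_activity], each of length 330
x = torch.randn(8, 1, 3, 330)
probs = torch.softmax(model(x), dim=1)  # (8, 2): connected vs. not connected
</code></pre>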
CONNECTOMICS
Fast Matlab code to get a score of .93985
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CONNECTOMICS <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Oops, thanks. Let me know if adding this to the path doesn't work:</p> <pre>function y = sigm(x)
y = 1./(1+exp(-x));
end</pre> <p>This runs in 13 minutes total on my laptop; most of that time is just reading in the data and writing out the submission. Thanks to the winning team for making me take a harder look at the signal processing prior to inverse covariance estimation.</p>
CONNECTOMICS
Network Deconvolution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CONNECTOMICS <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Neurotheorist - Thank you for bringing up this discussion. I think that it is certainly worth mentioning. I'm definitely not in a position to take either side of the discussion, but here are my 2 cents. As I understand the criticism, the main point of Prof. Pachter was that there was no (or almost no) mention of the alpha (fraction of edges of the observed dependency matrix to be kept in the deconvolution process) and beta (scaling parameter) parameters in the paper. As for this competition and our needs here at Kaggle, it is really an empirical question of what works and what does not (I'm not taking sides on what was or was not written in the Nature article). Tunable parameters are of course a factor which you need to deal with in a lot of algorithms (to name a few – GB, RF, SVM, NN and many more). In that respect network deconvolution (ND) is no different. In the end – I can only report what we have found in our analysis. ND improved almost every measurement we took (almost – not all) depending on the beta parameter (alpha was set to 1, as any value below it decreased the performance). I'm attaching a top 100 feature importance list which was produced by scikit-learn RF, taken from one of our initial analyses (this was not our final set, many features were dropped and others added, but it gives a good intuition on the matter; ND = network deconvolution). In the end, I think Kaggle should not be chauvinistic against any method, as it is the perfect venue to test them – if it works for you, use it. I'm not sure that ND will work for the winning solutions, as many factors can influence the outcome, but seeing that it only takes about 10 seconds for 1 ND iteration (several iterations are needed to tune the beta parameter), the gain may be substantial for a very small price.</p> <p>One of the best procedures which we've used to distinguish direct dependencies in the networks was the recently published Network Deconvolution method (http://www.nature.com/nbt/journal/v31/n8/full/nbt.2635.html)</p> <p>A matlab version was published by the authors: http://compbio.mit.edu/nd/code.html</p> <p>Furthermore, for the purpose of the competition I've implemented a python version as well and published it on github (with the kind authorization of the authors): https://github.com/gidonro/Network-Deconvolution.</p> <p>This algorithm improved almost every adjacency matrix which we've used. If not already used by the winning solutions, it may improve the results very easily.</p> <p>We'll describe the full method which we've used in our submission in a separate post.</p>
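<p>For intuition, here is a sketch of the network deconvolution closed form from the paper: if the observed matrix is modeled as the direct matrix plus all of its indirect powers, G_obs = G_dir + G_dir^2 + ..., then the eigenvectors are shared and each eigenvalue transforms as lam_dir = lam_obs / (1 + lam_obs). The alpha edge filtering is omitted here, and the simple rescaling below is only a crude stand-in for the beta parameter discussed above:</p> <pre><code>import numpy as np

def network_deconvolution(g_obs, scale=0.9):
    g = (g_obs + g_obs.T) / 2.0              # symmetrize the dependency matrix
    np.fill_diagonal(g, 0.0)
    lam, v = np.linalg.eigh(g)
    lam = lam * (scale / np.abs(lam).max())  # crude stand-in for beta scaling
    lam_dir = lam / (1.0 + lam)              # invert the geometric series
    return v @ np.diag(lam_dir) @ v.T

g_obs = np.random.rand(50, 50)               # toy observed adjacency matrix
g_dir = network_deconvolution(g_obs)         # direct-dependency estimate
</code></pre>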
CONNECTOMICS
Our solution is available :)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CONNECTOMICS <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi George,</p> <p>You're right! We determined our parameters by coordinate ascent on AUC (mainly on the normal-1 and normal-4 datasets), especially for the power factors in the weighting part.</p> <p>Do you need more information? We'll be glad to help you.</p> <p>Our solution is available here: <a href="https://github.com/asutera/kaggle-connectomics">https://github.com/asutera/kaggle-connectomics</a>.</p> <p>We basically preprocess the data and then use partial correlation coefficients to estimate the strength of a link between two neurons.</p>
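<p>For readers new to the idea: the partial correlation between neurons i and j (the correlation left after controlling for all other neurons) can be read off the inverse of the correlation matrix. A minimal sketch, assuming preprocessing has already been applied (their repository filters the signals first):</p> <pre><code>import numpy as np

def partial_correlation(signals):
    """signals: (timesteps, neurons) array. Returns (neurons, neurons) link scores."""
    corr = np.corrcoef(signals, rowvar=False)
    prec = np.linalg.pinv(corr)               # precision (inverse correlation) matrix
    d = np.sqrt(np.outer(np.diag(prec), np.diag(prec)))
    pcorr = -prec / d                         # standard sign convention
    np.fill_diagonal(pcorr, 0.0)
    return pcorr
</code></pre>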
CONNECTOMICS
Solution team IM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CONNECTOMICS <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello,</p> <p>I have attached a zip file with my solution (ildefonsV2.zip):</p> <p>Follow these steps to create the solution:</p> <p>Requirements:<br>------------<br>*OS: Windows 7<br>*R and matlab installed<br>*R packages: R.matlab, gbm, randomForest<br>*matlab open source packages: oopsi, ND (included in the zip)</p> <p>Steps:<br>-----<br>0)uncompress ildefons.zip. This will create a folder &quot;ildefons&quot;.<br>*You must update all R files included in this zip according to this path.<br>E.g.:<br>###path<br>mypath&lt;-&quot;C:/Users/ildefons/vrije/ildefons&quot;<br>*The file &quot;makeFeatures.R&quot; has a second place to update:<br>evaluate(matlab, &quot;addpath(genpath('C:/Users/ildefons/vrije/ildefons'))&quot;)</p> <p>1)Copy the sampleSubmission, fluorescence and connection files to the same folder that contains the scripts and this file (README.txt): &quot;YOURPATH/ildefons&quot;<br>*files to copy: &quot;sampleSubmission&quot;, &quot;fluorescence_normal-1&quot;, &quot;fluorescence_normal-2&quot;, &quot;fluorescence_valid&quot;, &quot;fluorescence_test&quot;, &quot;network_normal-1&quot;, &quot;network_normal-2&quot;</p> <p>2)Execute &quot;makeMat.R&quot;. This R script will generate .mat files with fluorescence data from the normal-1, normal-2, valid and test files</p> <p>3)Execute &quot;makeoo.m&quot;. This matlab script will read each .mat file generated in step 2 and return for each case a csv file with 1000 spike trains.<br>*Pre-conditions: make sure that the folder and sub-folders of &quot;oopsi-master&quot; are in your matlab path<br>*input: .mat files computed in step 2)<br>*output: 4 csv files: &quot;diff2n1a40b15csv.csv&quot;,&quot;diff2n2a40b15csv.csv&quot;,&quot;diff2valida40b15csv.csv&quot;,&quot;diff2testa40b15csv.csv&quot;<br>*Note 1: this script runs in about 4 hours</p> <p>4)Execute &quot;makeFeatures.R&quot;. This R script builds unnormalized features. Each feature corresponds to a correlation between 2 spike trains as computed in step 3. Before applying the correlation operator we filter out spikes below a threshold1 and we filter out rows that have more valid spikes than threshold2. The correlation matrix for each combination of (threshold1, threshold2) is then transformed using the ND algorithm. We store both the original correlation and ND. We do that for many combinations of (threshold1, threshold2).<br>*input: spike train files computed in step 3).<br>*output: 4 RData files: &quot;featuresn1plus22.RData&quot;,&quot;featuresn2plus22.RData&quot;,&quot;featuresvalidplus22.RData&quot;,&quot;featurestestplus22.RData&quot;<br>*Note 1: after running line 9 (matlab$startServer() ), please wait 1 or 2 minutes until the matlab server is running.<br>*Note 2: this step requires a fast computer with &gt;32Gbyte RAM<br>*Note 3: this is the most time consuming script (~ 24 to 48 h)</p> <p>5)Execute &quot;normalizeFeatures.R&quot;. This R script Z-normalizes all features.<br>*input: feature files computed in step 4)<br>*output: 4 RData files: &quot;n1norm.RData&quot;,&quot;n2norm.RData&quot;,&quot;validnorm.RData&quot;,&quot;testnorm.RData&quot;</p> <p>6)Execute &quot;fitModels.R&quot;. 
This R script fits 4 supervised models: random forest with normal-1 data, gbm with normal-1 data, random forest with normal-2 data and gbm with normal-2 data.<br>*input: &quot;n1norm.RData&quot;,&quot;n2norm.RData&quot;<br>*output: &quot;mymodels.RData&quot;</p> <p>7)Execute &quot;createSolution.R&quot;. This R script builds the solution file.<br>*input: &quot;mymodels.RData&quot;,&quot;validnorm.RData&quot;,&quot;testnorm.RData&quot;<br>*output: &quot;submissionref.csv&quot;</p> <p>Cheers,</p> <p>Ildefons</p>
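<p>To make step 4 concrete, here is a rough Python transcription of the feature construction. The real pipeline is the R/Matlab code above; <code>network_deconvolution</code> refers to the ND implementation linked in the Network Deconvolution write-up, and all names here are illustrative assumptions:</p> <pre><code>import numpy as np

def make_features(spikes, thr1_list, thr2_list):
    """spikes: (timesteps, neurons) spike trains inferred by oopsi in step 3."""
    features = []
    for thr1 in thr1_list:              # drop spikes below threshold1
        for thr2 in thr2_list:          # drop frames with too many coactive neurons
            s = np.where(spikes >= thr1, spikes, 0.0)
            keep = (s > 0).sum(axis=1) &lt;= thr2
            corr = np.corrcoef(s[keep], rowvar=False)
            features.append(corr)                         # raw correlation feature
            features.append(network_deconvolution(corr))  # ND-transformed feature
    return features
</code></pre>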
CONNECTOMICS
We just grew (really) lots of trees....
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CONNECTOMICS <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Selfish Gene;44108]</p> <p>I also encountered a similar problem of feature normalization across different nets with many of my features. What improved things for me (after trying whitening and min-max scaling) was making a linear piece-wise normalization, such that several chosen percentiles of each feature were transformed to the same numeric value (and in between these percentiles, linear mapping).</p> <p>This allowed me to move on from a linear regressor (which is a little more immune to these kinds of problems) to more complex learners like RF with a significant improvement. But it wasn't perfect, of course; I had to use very large leaf nodes for the trees to &quot;absorb&quot; some of that inconsistency across networks.</p> <p>[/quote]</p> <p>In our case leaves had at least 400 samples each, which helped with this phenomenon. We did not try to remap values (which seems like a nice idea) but rather added (as stated before) Minkowski-based features, which seemed to give RF an ability to filter out some of this network-related inconsistency. In fact most of our contribution was to find many complementary topological features, which brought us from ~91% (&quot;pure&quot; RF on corr/igg/ige) to nearly 94%. We were tempted to use Oblique Forest instead, as it can build a linear model in each node and so (as Selfish Gene stated) deal better with these problems. However, computational complexity, as well as the lack of a good (efficient) implementation, held us back.</p> <p>We used basic MinMaxScaling per feature, which was just fine (we did test alternative schemes, but with worse results), although we did perform numerous feature prunings in order to remove the ones which were useless for the process, as well as forced regularization of the trees by weighting samples, limiting the minimum leaf size and the maximum number of features included in the split decision. The combination of those elements led to reasonable results (for forests consisting of hundreds of trees). The &quot;ratios&quot; features are somewhat self-normalizing, as dividing the particular feature by the maximum closure always results in a number in [0,1]. Although we did play a bit with Minkowski-like scaling of the denominator, so we used not only f[i,j]/sum( f[i,k]f[k,j] ) but also f[i,j]/sum( (f[i,k]f[k,j])^(3/2) )^(1/3)<br><br>We were estimating our generalization capabilities using leave-one-out at the network level (training on networks 1-3 and testing on 4, and so on), and those estimations were very close (up to 0.2% error) to the ones on the validation set.</p> <p>Our solution (93.8%, which is around 0.3% from the winning team) was conceptually quite simple. We focused on finding a nice feature-based representation of the data, while leaving the whole classification process to the (nicely tuned) random forest. We used correlation, igg and ige (with multiple thresholds for burst filtering) as base connection indicators (each pair of neurons was one data point). 
The most significant increases in our scores came from topological features, including (but not limited to):</p> <ul> <li>closure features - given feature f[i,j] we also computed max( sqrt(f[i,k]f[k,j]) ), which measures the potential of some other neuron k being responsible for the &quot;correlation&quot; between the ith and jth</li> <li>relation-based - i.e. f[i,j]/max(f[:,j]), measuring how strong the relation is in comparison with the strongest one</li> <li>combinations of the above, like f[i,j] / sum( f[i,k]f[k,j] ), which more or less measures the ratio of the indicator strength as compared to the Markov closure of length 1 (assuming f[i,j] is some kind of transition probability estimate)</li> </ul> <p>We tested dozens of such features, most of which we documented on the github (which we will publish soon).</p>
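<p>A hedged sketch of those three feature families on a base score matrix <code>f</code> (the function and variable names are mine, not from their repository):</p> <pre><code>import numpy as np

def topological_features(f):
    """f: (n, n) base connection indicator (e.g. thresholded correlation)."""
    n = f.shape[0]
    # Closure: strongest two-hop path i -> k -> j, geometric mean per hop.
    closure = np.empty_like(f)
    for i in range(n):
        closure[i] = np.sqrt(np.maximum(f[i][:, None] * f, 0.0)).max(axis=0)
    # Relation: strength relative to the strongest edge into the same target.
    relation = f / (f.max(axis=0, keepdims=True) + 1e-12)
    # Ratio to the length-1 "Markov closure": f[i,j] / sum_k f[i,k] * f[k,j].
    ratio = f / ((f @ f) + 1e-12)
    return closure, relation, ratio
</code></pre>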
COVID19 Global Forecasting (Week 4)
4th Place Neural Network Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><pre><code>def get_custom_loss(q):
    def custom_loss(y_true, y_pred):
        return pinball_loss(y_true, y_pred, q)
    return custom_loss
</code></pre> <pre><code>for loss, lr, epoch in [(get_custom_loss([0.15, 0.50, 0.85]), 0.0001, epochs[0]),
                        (get_custom_loss([0.05, 0.50, 0.95]), 0.00005, epochs[1])]:
    model.compile(loss=loss, optimizer=SWA(Nadam(lr=lr), average_period=2))
    model.fit(X_hist_train, get_output(train_df), sample_weight=get_weight(train_df),
              shuffle=True, batch_size=128, epochs=epoch, verbose=0)
</code></pre> <p>The competition metric is pinball loss with quantiles [0.05, 0.50, 0.95]. Instead of directly optimizing for that, I first optimized for a narrower uncertainty ([0.15, 0.50, 0.85]) to avoid a very cautious local minimum.</p> <p>I think the models we build are too late to be useful in this pandemic. They can hopefully (but hopefully not) be used in another pandemic. Therefore I wanted to have a re-usable Machine Learning solution. So far, it performs okay but it is early to talk about it. Anyway, I wanted to explain it before I forget the details. I have 2 notebooks, one for training and one for inference: <a href="https://www.kaggle.com/aerdem4/covid-19-w5-training">https://www.kaggle.com/aerdem4/covid-19-w5-training</a> <a href="https://www.kaggle.com/aerdem4/covid-19-w5-pipeline">https://www.kaggle.com/aerdem4/covid-19-w5-pipeline</a></p> <p>It is a CNN model trained on the last 2 weeks of data. In the previous covid-19 competitions, my NN had diverged over time. I have learned from my mistakes and didn't want to do recursive predictions for this one. I have trained a different model for each n days ahead. Here is the model architecture:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F471945%2Fa274824b8ac4631b021d4bcbbcc7aac2%2Fmodel.PNG?generation=1590745456988152&amp;alt=media" alt=""></p> <p>Some tricks that worked:</p> <ul> <li>Cumulative sum over relu as the last layer guarantees that I have non-zero predictions and that quantiles are always in order.</li> <li>Training directly with pinball loss. In the initial epochs, I trained for the 0.15, 0.50 and 0.85 quantiles. Then the late epochs are trained with the actual quantiles (0.05, 0.50, 0.95). Without this, the model starts being cautious very early and gets stuck in a local minimum.</li> <li>Bagging 5 NN models for each day to reduce randomness.</li> <li>Smoothing predictions for each day n: <code>y(n) = 0.2*y(n-1) + 0.6*y(n) + 0.2*y(n+1)</code></li> <li>Rounding predictions to integers. This probably helps a little with zero cases.</li> </ul>
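<p>The <code>pinball_loss</code> used above is not shown in the post; a plausible Keras/TensorFlow definition (an assumption - the author's exact version may differ) is:</p> <pre><code>import tensorflow as tf

def pinball_loss(y_true, y_pred, quantiles):
    """y_pred[..., k] is the prediction for quantiles[k], e.g. [0.05, 0.5, 0.95]."""
    q = tf.constant(quantiles, dtype=y_pred.dtype)
    err = y_true - y_pred                 # y_true broadcasts over the quantile axis
    return tf.reduce_mean(tf.maximum(q * err, (q - 1.0) * err))
</code></pre>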
COVID19 Global Forecasting (Week 4)
9th place solution (a lot of LGBMs blended)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My approach was quite straightforward: I trained a lot of very simple LGBM models, blended their predictions, and applied an additional ~linear transformation to optimize the final metric.</p> <p><a href="https://www.kaggle.com/ilialar/w5-solution-2">Notebook with my final solution</a></p> <p><strong>Overview</strong></p> <ul> <li><p>Each model takes as input the numbers of new cases and fatalities during the last N days, divided by the maximum value of cases/fatalities for this region up to the last day in the input, and predicts the number of cases or fatalities on the Kth day after the last input day (again divided by the max value up to date).</p></li> <li><p>Each model is trained using all available data: all regions and all possible N+K day frames. But different days had different weights - the later the higher. The weight for each particular day is proportional to <code>(0.9)**(-n)</code>, which means that the last days in the train set had much greater significance during training.</p></li> <li><p>Each model is just <code>lightgbm.LGBMRegressor()</code> with default parameters. I trained a lot of different models iterating over N (<code>range(5, 40, 5)</code>) and K (<code>range(1,32)</code>).</p></li> <li><p>Then I used all models to make all possible predictions for each day in the test or validation sets (I used the last 30 days for validation) and computed the min, max, and average prediction for each day.</p></li> <li><p>It turned out these predictions were quite pessimistic - predicted values were very high. That's why I applied an additional linear transformation and deviation factor and computed the final predictions using the formulas below (coefficients are optimized on validation).</p></li> </ul> <pre><code>pred_0.5  = a[0] * mean + a[1] * min + a[2] * max
pred_0.05 = pred_0.5 + (b[0] * mean + b[1] * min + b[2] * max) * dev(lamb_min)
pred_0.95 = pred_0.5 + (c[0] * mean + c[1] * min + c[2] * max) * dev(lamb_max)
</code></pre> <p>Where <code>a</code>, <code>b</code>, and <code>c</code> are just linear coefficients, and <code>dev(lamb) = np.array([lamb * n for n in range(num_days_to_predict)])</code> is used to make predictions for later days less (or more) confident.</p> <p>What I have not done but probably should have (I joined quite late and just did not have enough time):</p> <ul> <li>Optimize the hyperparameters of LGBM.</li> <li>Smooth the predictions (I see that I have a lot of outliers on some days) by (for example) using the 5% and 95% percentiles of predicted values instead of min and max.</li> <li>Use other information as inputs, or somehow utilize information about connections between different regions.</li> </ul>
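<p>A condensed sketch of the training loop described in the overview (variable names like <code>region_series</code> are hypothetical, and the exact framing/weighting is an assumption):</p> <pre><code>import numpy as np
import lightgbm as lgb

def train_models(region_series, n_values=range(5, 40, 5), k_values=range(1, 32)):
    """region_series: dict of region -> 1-D array of max-normalized new cases."""
    models = {}
    for n in n_values:                 # input window length
        for k in k_values:             # forecast horizon in days
            X, y, w = [], [], []
            for series in region_series.values():
                for t in range(n, len(series) - k + 1):
                    X.append(series[t - n:t])         # last N normalized values
                    y.append(series[t + k - 1])       # value K days ahead
                    w.append(0.9 ** -(t - n))         # later frames weigh more
            models[(n, k)] = lgb.LGBMRegressor().fit(
                np.array(X), np.array(y), sample_weight=np.array(w))
    return models
</code></pre>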
COVID19 Global Forecasting (Week 4)
Linear Regression Is All you Need (9th solution week 4)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. Note that I no longer rule discussion ranking ;)</p> <p>I said I'll describe my solution if it ended up in the top 10, so here it is.</p> <h1>Acknowledgements</h1> <p>First of all, I'd like to thank Kaggle for this challenging series of forecasting competitions. I say challenging for two reasons. First, this is about people being sick and people dying. It is hard to abstract from that dire reality. Second, one week sprints are too short for me. I need more time to immerse in a given machine learning problem. In this case it took me 4 weeks and 2 days to find a model I was happy with.</p> <p>I also want to thank my teammates for the first 3 weeks, Ahmet and Giba. They preferred to go solo last week but I would have been happy to stick together as a team. We used very different models (NN for Ahmet, xgb/lgb for Giba, lr for me) with the hope that together they would work well. It did actually in week 2, but we deselected it at the last minute because of some cv issue. It would have landed us at rank 8 unless I'm mistaken. Week 3 was a bit disappointing for us.</p> <p>I also warmly thank those who curated the additional datasets I used, especially Vopani. Using these made a difference.</p> <p>Last but not least I want to congratulate those who did consistently well in these competitions, and especially those who did well in week 4. <a href="/david1013">@david1013</a> seems to have great models, even when not using an extra day of data, if I read his recent posts correctly. It would be interesting to know Lockdown's and Kaz's model scores when not using an extra day as well. Paulo Pinto's performance in week 4 is awesome. I can't name everyone, but let me also mention the PDD team, who did well in several weeks.</p> <h1>Modeling Goal</h1> <p>I decided early on not to approach this the way I usually build forecasting models. The reason is we are modeling a physical phenomenon where people get infected, then infect others for some time, then either recover or die. I therefore studied <a href="https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology">compartmental models used in epidemiology</a>, SIR and the like. These models rely on two time series: cases and recoveries/deaths. If we have accurate values for both, then we can fit these models and get reasonably accurate predictions.</p> <p>Issue is we don't have these series.</p> <p>For cases we have a proxy, confirmed cases. This is a proxy in many ways:</p> <ul> <li>It depends on the testing policy of each geography. Some test a lot, and confirmed cases are close to all cases.</li> <li>A large fraction of sick people are asymptomatic, hence are easily missed by testing.</li> <li>Testing does not happen when people get infected or contagious; it often happens with a delay.</li> </ul> <p>For all these reasons the confirmed case numbers we get are a distorted view of actual cases.</p> <p>For fatalities the numbers aren't accurate either:</p> <ul> <li>In some geos we only get deaths at hospitals; in other geos the count includes fatalities from nursing homes.</li> <li>We don't have recoveries data. The latter can be fixed by grabbing recovery data from other online sources. 
This has been done by some top competitors; I wish I had done it.</li> </ul> <p>Despite all these caveats, I assumed that we still have some form of SIR model at play with the two series we have at hand: fatalities depend on cases detected a while ago. That led to my first model.</p> <h1>Fatalities Prediction Via Linear Regression</h1> <p>If we assume that fatalities are a proportion of cases from, say, 8 days ago, then the proportion can be computed from log scaled data. See examples below for some countries. The green curve is the difference between confirmed cases (blue) and fatalities 8 days later (orange).</p> <p>Italy:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fd840d692feb332ba0ba753ab56577def%2Fitaly.png?generation=1585856510539884&amp;alt=media" alt=""></p> <p>France:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F148776fdca0ba1cc6dde63dbc7c720b9%2Ffrance.png?generation=1585856534028494&amp;alt=media" alt=""></p> <p>Spain: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F4a4e692a327677c254dd0a6a402caff1%2Fspain.png?generation=1585856554679533&amp;alt=media" alt=""></p> <p>Then running a linear regression with the log of past cases as input and the log of fatalities as output should capture this dependency. I published a simplified version in <a href="https://www.kaggle.com/cpmpml/fatalities-prediction-via-linear-regression">a notebook</a>. The above pictures are taken from it. (A small sketch of this regression appears at the end of this post.)</p> <p>The fact that the proportion depends on the geography can be captured via additional input features. In practice I found that the increase in log fatalities is a linear function of log cases from between 15 and 5 days ago. These limits were tuned via cross validation. They match what researchers say about Covid.</p> <h1>Cross Validation</h1> <p>We used time based cross validation for our models. We selected 3 to 5 folds, depending on the week's competition. Each fold uses data until a given date for training models, and data after that for validation. One fold was defined by the actual test data: validation starts at the first day of test data, even though train and test data overlap.</p> <p>Models in our team were trained to predict the next day. To predict over a longer period we append the prediction to the train data, then apply the model on it shifted by one day to get the prediction for the second day. We repeat for all remaining days in validation or test. Other teams trained one model per prediction day. I didn't think of it, but it is definitely doable given how fast training my model is.</p> <h1>Predicting cases</h1> <p>For cases I used an autoregressive model, i.e. linear regression using past values as input. That model was not that bad, but it turned out to be pessimistic. The flattening of curves we observed wasn't captured correctly. I spent the last 2 weeks thinking about how to model this. I tried to revisit the epidemiology models, but couldn't find a way to reliably fit them for all geos. I didn't want to add an arbitrary decay, or fit a curve either. I therefore turned back to data exploration. 
And the day before the deadline I found that the log of daily cases was very interesting; see some plots below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fdf16b94f837e4ed920730f7375df613a%2Fweek4.png?generation=1587030089243556&amp;alt=media" alt=""></p> <p>It means that daily cases decrease linearly in the log space, i.e. daily cases are proportional to exp(a t) with a negative a. Summing them means the cumulative number of cases is of the form N - exp(a t), where N is a starting population, a a constant, and t the number of days. I explain this derivation <a href="https://www.kaggle.com/c/covid19-global-forecasting-week-5/discussion/143791">here</a>.</p> <p>The model is therefore fitting a linear model on the log of daily cases over time.</p> <p>This model works well in geos where the number of cases is high, but fails miserably in geos where the epidemic is just starting. I therefore blended it with the previous one, with weights depending on the number of cases a while ago (12 days if I remember well). This model improved CV scores tremendously compared to the previous one. It was modeling curve flattening rather well, at last.</p> <p>I then tried the same with fatalities: they indeed decrease linearly in the log space, but the data is sparse: I could fit a linear regression reliably only for geos with a rather large number of cases. Fortunately for people, and unfortunately for my model, there were not that many geos like this. Then I realized I could use the same trick as in my very first model: the log of fatalities depends linearly on past log cases. I could therefore fit a linear regression with past log cases as input and the log of fatalities as output. I used the same past period, i.e. cases between 15 and 5 days ago more or less (I don't remember the exact figures). My fatalities model is therefore a blend of the original linear model from past cases, a linear regression fit on past log fatalities, and a linear regression fit on past log cases.</p> <p>That's what I submitted.</p> <h1>Post Deadline</h1> <p>I am rather pleased to see that a simple blend of linear models was competitive. I wasn't sure I could handle differences in geos via adaptive weights when blending, but it seems it was good enough.</p> <p>After one day of rest after the deadline I realized I could have done a better job. If fatalities depend on the slope of the log of cases, and if that slope is constant, then why estimate it on days from 15 to 5 days before, and not on days from, say, 10 days ago to the current day? Using more recent data is always better, so why not here? I implemented this small change and used late submissions. This improved model's score was similar to the one I submitted in time, until last week when it started to outperform it. Its final private LB score is 0.41840. Bottom line: it took me 4 weeks and 2 days to get the right model...</p> <h1>Week 5</h1> <p>I contemplated entering week 5 but didn't, for 3 reasons. One reason is I was bothered by the extra day of data from worldometer. I saw people who used it in week 4 ask for it to be forbidden. It would have been a smart move IMHO. Another reason, more important, is that a one week sprint with changes in the competition setting was too short for me. The last reason may be even more important. I think my model works better when there are high levels of cases and fatalities. 
The inclusion of smaller-grained geos like US counties probably means that my model would not be a great fit.</p> <p>I admire those who entered week 5 and I'm looking forward to their writeups when the competition completes!</p> <p>One week was too short for me.</p> <p>Thanks!</p> <p>Thanks, and congrats on the consistent performance across weeks.</p> <p>Simple models are often overlooked in Kaggle competitions ;)</p>
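<p>For concreteness, here is the sketch promised above of the two linear fits (scikit-learn stands in for whatever was actually used; the 15/5-day lag limits are the ones quoted in the post):</p> <pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

def fit_fatalities_from_cases(log_cases, log_fatalities, lag_min=5, lag_max=15):
    """Regress log fatalities on log cases from lag_max..lag_min days earlier."""
    X = [log_cases[t - lag_max:t - lag_min + 1]
         for t in range(lag_max, len(log_fatalities))]
    y = log_fatalities[lag_max:]
    return LinearRegression().fit(np.array(X), np.array(y))

def fit_daily_decay(daily_cases):
    """Fit the log-linear decay of daily cases: log(daily) ~ b + a * t, a &lt; 0."""
    t = np.arange(len(daily_cases)).reshape(-1, 1)
    return LinearRegression().fit(t, np.log(np.maximum(daily_cases, 1)))
</code></pre>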
COVID19 Global Forecasting (Week 4)
My Solution for week 5
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Wow Vitalii, thanks a lot!! Congratulations on your performance as well!</p> <p>You're welcome!</p> <p>Thanks!</p> <p>You are welcome!</p> <p>Hi! The one you can get if you click on my position in the leaderboard ;)</p> <p>Hi everybody, this week I did not have time to adapt my previous approach, so I simply used a public baseline and then added some code for the quantiles. Basically I used the Poisson distribution to estimate the 0.05 and 0.95 quantiles.</p>
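<p>A minimal sketch of that quantile post-processing, assuming the baseline's point forecast is used as the Poisson mean (my assumption of the exact setup):</p> <pre><code>import numpy as np
from scipy.stats import poisson

def poisson_quantiles(point_forecast):
    """point_forecast: array of predicted daily counts, used as the 0.5 quantile."""
    mu = np.maximum(np.asarray(point_forecast, dtype=float), 1e-6)
    return poisson.ppf(0.05, mu), point_forecast, poisson.ppf(0.95, mu)
</code></pre>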
COVID19 Global Forecasting (Week 4)
PDD Writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my teammates @dott1718 and @christofhenkel for collaborating throughout the four competitions. As even the last week's competition is already close to half-time in terms of scoring, we want to give a short overview of our solutions. We are really happy with our results across the four weeks, specifically as more optimistic models (predicting fewer cases and fatalities) performed better and our most pessimistic models in week 2 performed the worst.</p> <p><strong>Week 1</strong></p> <p>Everyone went into this competition with a lot of fear and uncertainty about covid-19, and I think this is also reflected in the models that were produced. Very little data was available and most countries were in early periods of severe exponential growth. We quite quickly figured that utilizing any ML model did not seem appropriate here due to the lack of historical data. We thus crafted the basis for our main model that we utilized throughout all four weeks. For predicting confirmed cases, the model uses a constant growth rate that is reduced by a decay that is accelerating over time. For fatalities, we multiply predicted cases from a given time lag with a fatality rate for a given country, and also add a constant growth. (A rough sketch of this model appears at the end of this post.)</p> <p>This model allowed us some flexibility also in the choice of (hyper-)parameters. We ended up deciding on one optimistic and one pessimistic submission in week 1, where the optimistic one has lower growth rates and higher acceleration of the decay of this growth rate, and the pessimistic one has it the other way around. For countries where we already have "sufficient" data above a certain threshold, we learn the growth over a certain time period, and for others we employ default growth rates.</p> <p>Optimistic sub: <a href="https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-09?scriptVersionId=30833479">https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-09?scriptVersionId=30833479</a> Pessimistic sub: <a href="https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-03-per-5?scriptVersionId=30833633">https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-03-per-5?scriptVersionId=30833633</a></p> <p>The optimistic sub predicted globally 2.6 mio cases and 120k fatalities after 30 days, as of 22nd of April. The pessimistic one was way more doomsday and predicted 24 mio cases and 430k fatalities. The optimistic one ended 2nd on the private LB and scored 1.07716, while the pessimistic one still scored 1.29940 and would have ranked 5th. This also gives you a good idea of the problems of this metric, because even if you are off that far globally from the actual numbers, if you get some low-case countries right, the metric improves drastically.</p> <p>Now looking back, I have to say that our global estimation here was really, really precise. I just checked worldometer data, and on the 23rd of April the actual confirmed cases were <strong>2,722,000</strong> and what we predicted was <strong>2,650,252</strong>. I would not have thought that we could be so precise there, even if individual countries can be off. 
</p> <p><strong>Week 2</strong></p> <p>Even more exponential short-term growth was observed in the preceding week. We mostly re-utilized our models from week 1, adapted them a bit, and regrettably overfitted them on short-term growth patterns, which made them quite pessimistic in retrospect. The second submission blends a power-law fit with exponential cutoff into the model, and it is actually our better sub here.</p> <p>Sub 1: <a href="https://www.kaggle.com/dott1718/cv19w2-2-sub?scriptVersionId=31270684">https://www.kaggle.com/dott1718/cv19w2-2-sub?scriptVersionId=31270684</a> Sub 2: <a href="https://www.kaggle.com/philippsinger/cv19w2-pl4-sub?scriptVersionId=31271930">https://www.kaggle.com/philippsinger/cv19w2-pl4-sub?scriptVersionId=31271930</a></p> <p><strong>Week 3</strong></p> <p>We saw that machine learning solutions were slowly starting to shine, specifically for short-term predictions, and decided on the one hand to further tune our model and sub it, and as a second submission to blend in some of the public ML solutions. In detail, we used subs from @cpmpml (+team), @gaborfodor and @osciiart - so thanks to them! We heavily overweighted our model and the one from oscii in the final blend, so the weights were 8-8-1-1.</p> <p>Single model sub: <a href="https://www.kaggle.com/philippsinger/cv19w3-2-v2-play-2-v3fix-sub-last6dayopt?scriptVersionId=31654736">https://www.kaggle.com/philippsinger/cv19w3-2-v2-play-2-v3fix-sub-last6dayopt?scriptVersionId=31654736</a> Blend sub: <a href="https://www.kaggle.com/christofhenkel/cv19w3-v3fix-cpmp-oscii-belug-full-8118?scriptVersionId=31655753">https://www.kaggle.com/christofhenkel/cv19w3-v3fix-cpmp-oscii-belug-full-8118?scriptVersionId=31655753</a></p> <p>Our blend sub predicts 3,668,603 confirmed cases and 295,911 fatalities for the 7th of May, which is ~10 days in the future. I think globally this solution will also come quite close to reality. We are also currently ranked 3rd in the competition and trending upwards, even though the gap to 2nd might be too large.</p> <p><strong>Week 4</strong></p> <p>Again, we decided to improve our model and blend it with other solutions. The big change we made to our model was that fatality predictions no longer rely on the confirmed-case predictions, but rather use the individual fatality curves for future predictions. We also blend our model now with solutions from @osciiart, @rohanrao and @david1013. Our two subs this time have different weights for the blends, usually overweighting our model. We also fully rely on our model predictions for the Chinese regions.</p> <p>Sub 1: <a href="https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub1?scriptVersionId=32072839">https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub1?scriptVersionId=32072839</a> Sub 2: <a href="https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub2?scriptVersionId=32072928">https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub2?scriptVersionId=32072928</a></p> <p>Sub 1 predicts globally 3,898,447 cases and 278,848 fatalities on the 14th of May, while sub 2 predicts 3,802,472 and 224,623 respectively. In general it seems that public subs tend to agree way more on cases, and fatalities seem to drive most of the score differences. We are currently positioned 5th, but trending down a bit. 
It will be interesting to see how the scores evolve.</p> <p><strong>Learnings</strong></p> <p>I am writing this write-up while hearing the news that Austria (the country I live in) is removing many of the restrictions in place from May 1st onward. We have had one of the best curve flattenings globally, despite huge exponential growth in the beginning, and thankfully we quite overpredicted Austria from the start. The country implemented strong restrictions very early on and I believe they have helped a ton. This is something that we can learn from the most, I feel. Active cases (from worldometers) look like this for Austria:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F37166%2F198aec01a918b144c789672f5e97d980%2FSelection_041.png?generation=1588072057446301&amp;alt=media" alt=""></p> <p>I really hope that this trend will not reverse again; many other countries also look that promising and we can only hope for a bright future.</p> <p>Predicting these pandemics is very hard, mostly due to lack of data, very large uncertainty, differences in reporting, differences in public behavior, and so forth. Still, I believe the models produced in these competitions can be helpful in the future, even if it is only through a better understanding of how hard it is to predict. A lot of personal judgement flows into these models and predictions, making them also very subjective, so combining multiple models and opinions seems worthwhile to follow up on in the future.</p> <p>Thanks <a href="/rohanrao">@rohanrao</a> - totally agree that errors cancel each other out; getting the global estimates right is still based on a combination of luck, personal tuning, and modeling.</p> <p>Thanks for your contributions throughout the weeks; your model appears to be very strong and we borrowed parts of it for our blend :)</p> <p>Your week 4 model looks very good. It will very likely overtake us and some others soon.</p>
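<p>As promised above, a rough sketch of the week-1 model (a constant growth rate damped by an accelerating decay, plus a lagged fatality rate); all parameter values are placeholders, not theirs:</p> <pre><code>import numpy as np

def project_cases(last_cum, growth=1.15, decay=0.995, accel=0.999, days=30):
    """Cumulative cases: the growth rate shrinks by a decay that itself accelerates."""
    out, g, d, cum = [], growth, decay, last_cum
    for _ in range(days):
        cum *= g
        out.append(cum)
        g = 1.0 + (g - 1.0) * d    # damp the growth rate...
        d *= accel                 # ...at an accelerating pace (accel &lt; 1)
    return np.array(out)

def project_fatalities(cases, fatality_rate=0.02, lag=8):
    """Fatalities follow cases from `lag` days earlier times a per-country rate."""
    shifted = np.concatenate([np.full(lag, cases[0]), cases[:-lag]])
    return fatality_rate * shifted
</code></pre>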
COVID19 Global Forecasting (Week 4)
Some ML, A lot of judgement and luck
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Intro</h1> <p>The title may not seem very inspiring when you try to solve something so serious, but it is what worked semi-OK (for the first 2 weeks at least) in this competition series (given a large horizon). I think predictions of that length (e.g. 30 days ahead) require a lot of guessing and luck if you are not an expert epidemiologist (and quite possibly even if you are one too).</p> <p>There are so many things which are unknown when you try to predict that far ahead, like:</p> <p>Will a vaccine come out? (OK, this one might not really be the case here - I think we are still early for that.) Or an efficient drug? Are people really following the social distancing rules, or are they going to? What rules will the governments put in place? How strict will the governments be at enforcing them? How will the infection rate differ based on how people socialise in different countries/areas? Etc.</p> <p>Needless to say, the data is not perfect.</p> <p>Short term predictions (maybe within 3-5 days or so) could be modelled reasonably well. I hope you don't mind me <a href="https://www.h2o.ai/blog/modelling-currently-infected-cases-of-covid-19-using-h2o-driverless-ai/">sharing a blog</a> that summarises some ideas on how this could be done for short horizons on the same underlying data for active cases - many of the tricks mentioned there were applied here as well, particularly in the <strong>pre-processing of the data</strong> and <strong>post-processing</strong> of the predictions.</p> <p>I still believe that an ML practitioner in tandem with an experienced epidemiologist can work better than any individual on her/his own, particularly if you try to make predictions at scale for many countries/counties/cities at the same time.</p> <p>The inference kernel (with commented code for training) <a href="https://www.kaggle.com/kazanova/script-with-commented-train-counts-for-zeros?scriptVersionId=32084418">for week 4 is here</a></p> <p>I have the same version with <a href="https://www.kaggle.com/kazanova/fork-of-script-with-commented-train-counts-for-zer?scriptVersionId=32113592">training included here</a>:</p> <p>(It scores a tiny bit differently from my uploaded models, 0.02549 versus 0.02551, although I used the same seeds - maybe because I ran on a Windows laptop, or a different lightgbm version - not sure.)</p> <h1>The ML part</h1> <p>The Power Growth model has worked quite well to map epidemics in the past, and I think many people have made good implementations here.</p> <p>I tried a simple implementation myself - the problem was that adjusting the growth was challenging without expert knowledge (at least for me) - so I tried to model the growth directly. Instead of predicting counts, or log counts, what I tried to predict was:</p> <p>"with what value do I need to multiply the previous cumulative value every day". For weeks 2 and 3, I built 60 separate models: one model for each +x day, for both confirmed cases and fatalities. 
</p> <p>I used <a href="https://lightgbm.readthedocs.io/en/latest/">lightgbm</a> to build the models.</p> <p>The features were:</p> <ul> <li>How many days ago cumulative count X was reached, where X for confirmed cases was <code>[1,5,10,20,50,100,250,500,1000]</code> and for fatalities <code>[1,2,5,10,20,50]</code>.</li> <li>Up to 10 lags of 3-size-window average rates, for both confirmed cases and fatalities.</li> <li>The actual confirmed and fatality raw counts for day 0.</li> </ul> <p>The target is the average 3-day rate for each one of the +x days. So instead of modelling the rate <code>future1/current</code>, I took the average of <code>future1/current</code>, <code>future2/future1</code> and <code>future3/future2</code> when predicting the target for day +1. That is because the raw rate has many spikes; the average smooths the target and (I think) controls for overfitting.</p> <p>For week 2, I only validated through observation - no cv. For week 3 I validated based on the week 2 results, and for week 4 on both weeks 2 and 3. A <strong>fundamental FLAW</strong> of both the week-2 and week-3 models is that they ignore the 0s. The model requires an initial value of &gt;=1 to add the growth, so I expect the models down the stretch to do much worse (unless I am lucky - both me but most importantly the world - and we don't have new deaths). The rules I used to initialise values for the zero cases were poor (to say the least).</p> <p>In week 4, I added a model that tries to predict the 0 cases by modelling the difference in log counts between days. The model setup and features are the same; the only difference is that instead of only rates, the model also uses differences of log counts for the same windows. In that model I also added USA county level data - I saw a tiny improvement from adding it (probably because it is heavily weighted towards 0, as the cumulative values are generally smaller). Other than that, I found no other external resource useful (for longer term predictions). I tried adding geographical data and recoveries with no success.</p> <p>Sadly these models are not very powerful - but maybe not too bad either. I would have been much further down without <strong>post processing</strong> and I would be drifting down very heavily.</p> <p>The problem with this approach is that it completely ignores the metric, and a small error in the rate prediction can make a series go berserk. I saw quite a few series that, for example, started with 20 confirmed cases and my model was predicting they would reach more than 30k (which was obviously a bit too dangerous to accept). Another problem is that the models are heavily influenced by what happened in China - it is the area that has the most future points available (to compute the target). In China the treatment of the outbreak was handled very differently than in other areas, and generally the models expected a much quicker plateauing (and different initial growth rates) than what is actually happening in many other parts of the world. This is where <strong>post-processing</strong> helped.</p> <h1>Post Processing</h1> <p><strong>Before I elaborate on this - I heavily applaud and congratulate people that have modelled this without much judgement or post-processing. They have definitely added much more than this solution ever will</strong>. I found it very challenging to get a highly competitive score without post processing (for the first 2 weeks at least).</p> <p>Back to the post-processing: this is the most important part of my solution. 
The ML model gave a curve to use as a basis for making decisions. The model was not too bad on cases with stable growth rates and counts over 5,000 (approx). I created certain rules after observing most of the series individually. <strong>All the decisions I made in the post processing rules were based on judgement</strong> - nothing coded or safely reusable as logic without observing the series. To manage time, I ignored cases with larger counts (which is very bad from an epidemiological point of view), given RMSLE (a metric I generally don't like for this problem).</p> <p>The reasoning behind the rules:</p> <ul> <li><p>"first_10_then_1_": This rule was mainly used for Chinese provinces that still had some minor increments. It applies a rule (e.g. decay) for the first 10 days and then stops the growth. I anticipated that even if there are some new cases, China is now very efficient at containing the outbreak with its surveillance measures and health app system.</p></li> <li><p>"_last_12_linear_inter_": I am an optimistic person. If for some reason the model was not picking up that the rate at some point needs to start decreasing, I forced a linear decay in the last 12 days. I have no idea how this will play out, because we have not reached that point yet in any of the competitions. For example, if the rate at day 19 is 1.12, on day 20 it will become 1.11, then 1.10, etc. I do not have a great scientific explanation for why I do this, other than that I compared some of the curves against some from the Chinese provinces at similar count levels and saw that 18 days was around the average time rates were still going up. Maybe not a great assumption, given it was based on a single geographical area - but this is all I had at that point.</p></li> <li><p>"decay_x": This rule is the most common one. I apply various decays to the predicted rates. A lot of it is based on expectations and on comparing series with what happened in other countries/states when the counts were at similar levels, preferably close in geography. I also used my knowledge of what rules/measures had been applied in the countries when making these judgements.</p></li> <li><p>"combination of all the above": you can apply any decay on top of the first 2 rules.</p></li> <li><p>"accelerate": Being an optimistic person, I generally avoided using this. It is the opposite of decay.</p></li> <li><p>"stay_same": rate=1 - mostly applied to Chinese provinces, low count countries, Diamond Princess.</p></li> <li><p>"normal": Don't change the rate - trust the model as is.</p></li> <li><p>"fatality": Rule applies only to fatalities - it excludes confirmed. For the week 2 model, I completely ignored fatalities when making judgements. From week 3 onward, I started making separate judgements for fatalities.</p></li> </ul> <p>Now that I have a sample of these decisions, I am hoping to find a pattern that can be applied without judgement when the competition is over - e.g. evaluate the judgements and find what would have been the correct decay.</p> <p>For the last week, I created a simple file that has the predictions of the new model without most of the rules applied versus the predictions of the previous week's model, and saw how over/under I was. I used this to aid my decisions. 
I have a screenshot attached.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F111640%2F27591e50f9509eba9c71e2813c745fad%2FComparing_predictions.png?generation=1587056981562287&amp;alt=media" alt=""></p> <h1>Luck/guessing</h1> <p>I have taken many guesses in this competition. For example:</p> <ul> <li>China won't generally have new fatalities or many new confirmed cases, especially in the smaller provinces. Same with Diamond Princess.</li> <li>Tibet will remain with that single confirmed case. If this goes to 1000, maybe I am out of the competition.</li> <li>For weeks 2 and 3: geos with generally 0 fatalities will remain at 0 unless confirmed cases exceed 400.</li> <li>News and dashboards for recent information and trends (from what people have posted on the forums).</li> <li>Countries that enter later are more prepared/cautious about the virus and are better equipped to deal with it, since they have an idea of what measures might be more effective and/or how serious this can be. I have applied stronger decays to countries that enter now.</li> </ul> <p>Happy to answer any questions.</p> <p>The ML part took me more time for week 2 - maybe 2 days, as I had to build the setup. Post processing always started the moment the new data came in - so around 20 hours.</p> <p>For weeks 3 and 4, ML was 3 hours each (because it is the same approach) and post processing around 20 hours each</p>
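<p>To make the ML part concrete, here is a sketch of the smoothed rate target and the threshold features (names and alignment details are my assumptions, not the author's code):</p> <pre><code>import numpy as np

def smoothed_rate_target(cum, horizon):
    """Average 3-day multiplicative rate, aligned to `horizon` days past day 0."""
    rates = cum[1:] / np.maximum(cum[:-1], 1.0)                  # day-over-day factors
    smooth = np.convolve(rates, np.ones(3) / 3.0, mode="valid")  # 3-day average
    return smooth[horizon - 1] if horizon - 1 &lt; len(smooth) else None

def days_since_thresholds(cum, thresholds=(1, 5, 10, 20, 50, 100, 250, 500, 1000)):
    """For day 0 (the last day), how many days ago cumulative count X was reached."""
    t_last = len(cum) - 1
    return [t_last - int(np.argmax(cum >= x)) if cum[-1] >= x else -1
            for x in thresholds]
</code></pre>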
COVID19 Global Forecasting (Week 4)
W5 LB scorecard & analysis update 0611
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Update 0607 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fdf26c0fdb482000dd5927d4ffe96d5d4%2F67_1.jpg?generation=1591570668725292&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fd155f25dd468ef67c4489591bf91beb5%2F67_2.png?generation=1591570668631761&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F1f356d83c55a3d4ccb66907d48f6623a%2F67_3.png?generation=1591570668660355&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb6b2f6bf3aab3372dfb97386bfdcd787%2Fnedladdning%20(38" alt="">.png?generation=1591570712872617&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7d2f972ae3dc022c7e5ccbeae2156ac3%2Fnedladdning%20(39" alt="">.png?generation=1591570711698441&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2debff51b0e8c6dae0b9f19da98a8446%2Fnedladdning%20(40" alt="">.png?generation=1591570711094114&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fac2a4e9edafdda5b2f72bc01e8710ab8%2Fnedladdning%20(42" alt="">.png?generation=1591570753800488&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4a4a832de3eec4b56651ffdb48fb3d93%2Fnedladdning%20(41" alt="">.png?generation=1591570856410573&amp;alt=media)</p> <p>Update 0608 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F34821042ba2e7cd6efd4542d22533aa9%2F68_1.jpg?generation=1591659224336278&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F745a16dd6d90eb7a7c8ac9a1c24867df%2F68_2.png?generation=1591659222389372&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F14feaf2fcc8a2a77a199429d3cf1d32f%2F68_3.png?generation=1591659225481371&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4af6454cd4c37c0e6fbb638c7e391f01%2Fnedladdning%20(43" alt="">.png?generation=1591659267032290&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fcf34da06fd548119af33c40e097f5f4e%2Fnedladdning%20(44" alt="">.png?generation=1591659265455708&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0ad7217a95cf89f2ce2672aaaf4bed85%2Fnedladdning%20(45" alt="">.png?generation=1591659266694443&amp;alt=media) Score and LB Top 20 <img 
src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb3e360e168aa87840c7065248c4a333c%2Fnedladdning%20(46" alt="">.png?generation=1591659298022027&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc76038bf514ef68e052639fce29e92ff%2Fnedladdning%20(47" alt="">.png?generation=1591659322345161&amp;alt=media)</p> <p>Update 0606 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6c58dc4c645412d13ea6d51808c185ed%2F66_1.jpg?generation=1591463701269878&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ff8e72f7edfe82bc44c61dc1ae178d75f%2F66_2.png?generation=1591463701762333&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6067ea90193b5eda41cb5d012e9acb6a%2F66_3.png?generation=1591463701629022&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F177d8c954a3b155f42b4bcb7e8eb7051%2Fnedladdning%20(36" alt="">.png?generation=1591464311422864&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0827e0e44d3d015a9a2b3d7cdb4e29e8%2Fnedladdning%20(33" alt="">.png?generation=1591463760243147&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fca689f7b29f962def3113ce32cbc3239%2Fnedladdning%20(34" alt="">.png?generation=1591463761005217&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F60949a47efc13fe2b800a8fcf2b8c42b%2Fnedladdning%20(35" alt="">.png?generation=1591463808578363&amp;alt=media)</p> <p>That was the final and last update! Well done everyone with an important work, especially <a href="/gaborfodor">@gaborfodor</a> who created the #1 model! :)</p> <p>Now we hope that this horrible virus will disappear quickly and cures are created immediately. 
The thoughts go to everyone affected.</p> <p>Update 0611 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F75df848bc38c12d1f15abf5611f24ef2%2F611_1.jpg?generation=1591897088370287&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fbed4633a3ee32c0d0f3a0a1afa5361e5%2F611_2.png?generation=1591897088353323&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ffd038e65ee269059e85a355a87e34299%2F611_3.png?generation=1591897089015973&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F84fef648b05fc87ccd08b5742a13d3b4%2Fnedladdning%20(58" alt="">.png?generation=1591897120991013&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F1a2711bd38a59c03cf174cff6cf74eff%2Fnedladdning%20(59" alt="">.png?generation=1591897121767768&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F97c5719431646ae069ebed4c3d723af3%2Fnedladdning%20(60" alt="">.png?generation=1591897120972891&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2b40b725fa431241d6d48ae896166e73%2Fnedladdning%20(61" alt="">.png?generation=1591897121662173&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fecc8bc90c616abc489ec78cda0366a8c%2Fnedladdning%20(62" alt="">.png?generation=1591897122409789&amp;alt=media)</p> <p>Update 0609 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5a3e399db7e1ab1447934c027601894a%2F69_1.jpg?generation=1591728321567328&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2b0174581c00c4f88f7dbf61f18c98a7%2F69_2.png?generation=1591728321325162&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3ee3bbebc3d46c3d9c9e1a4f89f89eb3%2F69_3.png?generation=1591728321461290&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F336d2250a6360a92cb61466bd8915c4b%2Fnedladdning%20(48" alt="">.png?generation=1591728373043015&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5c0edf5049be5fc46954021e4c71929d%2Fnedladdning%20(49" alt="">.png?generation=1591728372916590&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3260763057608b0855a3daf8a48dfe40%2Fnedladdning%20(50" alt="">.png?generation=1591728375088400&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4d522b0751ba85ed33374442c4bfeb4d%2Fnedladdning%20(51" alt="">.png?generation=1591728457203778&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb21099acc2ca544a499daee80ad26c37%2Fnedladdning%20(52" 
alt="">.png?generation=1591728484912030&amp;alt=media)</p> <p>Update 0610 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F90f4181db0303f0760564ae71676112b%2F610_1.jpg?generation=1591827478438941&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F82a6fc5e7a4e8bf4114eff867f5414ae%2F610_2.png?generation=1591827479359922&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7e140636af35c1d5c05a7a61cefc5973%2F610_3.png?generation=1591827478155263&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ff1789e7134aae1cbb9abe170a3fc8db8%2Fnedladdning%20(53" alt="">.png?generation=1591827776863571&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fde8e074b4619ade8808ca0141146672a%2Fnedladdning%20(54" alt="">.png?generation=1591827775838966&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F55330ebae4922f6474e331fc02ba85f5%2Fnedladdning%20(55" alt="">.png?generation=1591827775822026&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2c8d60168bbd0123b2cb102acd3e7143%2Fnedladdning%20(56" alt="">.png?generation=1591827817311542&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7e1fd86f3d79a43594c20cf42013b7da%2Fnedladdning%20(57" alt="">.png?generation=1591827835180455&amp;alt=media)</p> <p>Update 0520</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F36db30c01bcac0370a1cc0b7c568d30f%2F0520_1.jpg?generation=1589995497971429&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F20a34f8f6ff63895934b604d06e9a1e0%2F0520_2.png?generation=1589995511511685&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7d442f04f7c11e88698bd1443704dd8a%2F0520_3.png?generation=1589995525062115&amp;alt=media" alt=""></p> <p>Additional scorecard - Best LB change Top 31-88 - 6 day period 6d LB change Top 31-60 &amp; Top 31-60 6d ave. % score diff. 
Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc <code>TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dScore) / 5)) / 2) * PredDayAhead</code></p>
<p>[scorecard chart]</p>
<p>Additional scorecard II - Visualizing the score gap and Dist to the top gap [charts]</p>
<p>Score and LB Top 15 [chart]</p>
<p>Thanks <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>! Hope we will see lots of teams again, more teams, better development, more help to those in need.</p>
<p>I see your point, and I think you saw mine; I meant whether we will see a W6, and I hope we again see lots of teams contributing to the model development, an important contribution.</p>
<p>Just positive feedback on your models' performance, good work with them :)</p>
<p>Update 0521</p> <p>[scorecard charts]</p>
<p>Additional scorecard - Best LB change Top 31-88 - 7 day period: 7d LB change Top 31-60 &amp; Top 31-60 7d ave. % score diff. Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc <code>TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dScore) / 5)) / 2) * PredDayAhead</code></p>
<p>[scorecard chart]</p>
<p>Additional charts - Visualizing the score gap and Dist to the top gap [charts]</p>
<p>Score and LB Top 15 [chart]</p>
<p>Thanks! :) It is amazing how well the top teams are performing, but not quite surprising: david1013, sasrdw, aerdem4, philippsinger, dott1718, kazanova, christofhenkel, all the top grandmasters, besides cpmpml whom we miss.</p>
<p>After 4 days we can see that it's really even between the models in the top 5, and in the top 10 @bernardklinger has good momentum, followed by <a href="/neg0000273">@neg0000273</a>, great work!</p>
<p>Yes, it's hard to predict and follow the short trends at tops/bottoms; averages work better.</p>
<p>Update 0518</p> <p>[scorecard charts]</p>
<p>Additional scorecard - Best LB change Top 31-88 - 4 day period: 4d LB change Top 31-60 &amp; Top 31-60 4d ave. % score diff. Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc <code>TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev4dScore) / 4)) / 2) * PredDayAhead</code></p>
<p>[scorecard chart]</p>
<p>Amazing how the top 4 models are in sync with each other, with almost no difference in scores between the days, good work! Great work again with your model <a href="/moradnejad">@moradnejad</a>, this week maybe even better :)</p>
<p>Look at the "Score gap top 50" line chart above: some models in the bottom 50 reduce their values quickly; must study what algos they use.</p>
<p>Update 0519</p> <p>[scorecard charts]</p>
<p>Additional scorecard - Best LB change Top 31-88 - 5 day period: 5d LB change Top 31-60 &amp; Top 31-60 5d ave. % score diff. Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc <code>TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dScore) / 5)) / 2) * PredDayAhead</code></p>
<p>[scorecard chart]</p>
<p>Additional scorecard II - Visualizing the score gap and Dist to the top gap [charts]</p>
<p>Score and LB Top 15 [chart]</p>
<p>Additional charts II - Score and LB Top 15 [chart]</p>
<p>Additional charts [charts]</p>
<p>Update 0523</p>
src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F1f4f28797f446e4dcb78533993cc62aa%2F0523_2.png?generation=1590258044456014&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb8e78ed837705257d10350ed8b967b2a%2F0523_3.png?generation=1590258057651853&amp;alt=media" alt=""></p> <p>Additional charts - Visualizing the scores and Dist to the top</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F07addb5f2d6766bc281c4da937d0a392%2Fnedladdning%20(20" alt="">.png?generation=1590258089536013&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0516ca9e944d62e5cfae9c9cf9acef91%2Fnedladdning%20(21" alt="">.png?generation=1590258106365435&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fe4ce538788f6df0068d20d80f80fa105%2Fnedladdning%20(22" alt="">.png?generation=1590258120804038&amp;alt=media)</p> <p>-Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F43e8bda7380835382a3071b3557fc637%2Fnedladdning%20(23" alt="">.png?generation=1590258145933555&amp;alt=media)</p> <p>Update 0522 - reduced number of charts due to 38 teams missing in the top 50-88 in yesterday's LB</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4db6a653f15210560984daa3fe8c1c33%2F0522_1.jpg?generation=1590232473516496&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F478f67008cdb17bd0cdb8a8258f6eef9%2F0522_2.png?generation=1590232485337598&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3b696cad9cfe003083aebe5d34bc18fb%2F0522_3.png?generation=1590232499086793&amp;alt=media" alt=""></p> <p>Additional charts - Visualizing the score gap and Dist to the top gap <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7b7a36d5c3dd1a215a5132eedb37ece0%2Fnedladdning%20(16" alt="">.png?generation=1590232636721717&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F647e47f82904e4ae044e708db6e3fef0%2Fnedladdning%20(17" alt="">.png?generation=1590232650764779&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7ad7e4259cb24a50b13376243f535380%2Fnedladdning%20(18" alt="">.png?generation=1590232664751648&amp;alt=media)</p> <p>-Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6b9f19dd5f7355125891ef5633f577e8%2Fnedladdning%20(19" alt="">.png?generation=1590232697768089&amp;alt=media)</p> <p>Update 0524</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5d210864f615f0f78a71420919259138%2F0524_1.jpg?generation=1590347478315723&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fca59eda95e9732219a1fff4dc3e521e0%2F0524_2.png?generation=1590347493613671&amp;alt=media" alt=""></p> <p><img 
src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Feaea8cc5968240b9077c65cc2db6a44e%2F0524_3.png?generation=1590347507583192&amp;alt=media" alt=""></p> <p>Additional charts - Visualizing the scores and Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc48a947190d4143c770aaa49c2f2b1e8%2Fnedladdning%20(24" alt="">.png?generation=1590347536456116&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4f19f87cc3280292d7fed600481032a6%2Fnedladdning%20(25" alt="">.png?generation=1590347550165971&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc88ce4f221097c60ba15d509ca30ccfb%2Fnedladdning%20(26" alt="">.png?generation=1590347564927746&amp;alt=media)</p> <p>-Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4719a9d27c85465acc8e87b25ebee6e1%2Fnedladdning%20(27" alt="">.png?generation=1590347632886065&amp;alt=media)</p> <p>Additional scorecard - Best LB change Top 31-88 - 10 day period 10d LB change Top 31-60 &amp; Top 31-60 10d ave. % score diff. Prediction Top20, 1, 3 &amp; 5 day ahead predictions with calc TodayScore+(((TodayScore-YesterdayScore)+((TodayScore-prev5dscore)/5))/2)PredDayAhead</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6283f594ea961c992ab40548ac618db3%2F0524_4.jpg?generation=1590347693590632&amp;alt=media" alt=""></p> <p>There are 38 teams missing in the top 50-88 in todays LB so I wait with the update until I know the status.</p> <p><a href="/rohanrao">@rohanrao</a> thanks, I'll continue as long there is an interest for it :) No not many what I can see, they with larger teams look to ensemble each other's models. Otherwise most weighted average ensembles of the same model / ML.</p> <p>Update 0516</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fab3b45200263603c3c4ad2d191f2f855%2F0516_1.jpg?generation=1589645588605428&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F49b76dd42e1c6df23651ed445fb32e1d%2F0516_2.jpg?generation=1589645604774306&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F12a893513ea8d3160868519547bc7f9c%2F0516_3.png?generation=1589645625336555&amp;alt=media" alt=""></p> <p>Additional scorecard - Best LB change Top 31-88 - 2 day period 2d LB change Top 31-60 &amp; Top 31-60 2d ave. % score diff. 
Prediction Top20, 1, 3 &amp; 5 day ahead predictions with calc ((TodayScore+(((TodayScore-YesterdayScore)+((TodayScore-prev2dscore)/2))/2)PredDayAhead)+TodayScore(((1dAveScoreDiff+2dAveScoreDiff)/2)+1)^PredDayAhead)/2</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F754e2b51dc9ee4d7cb36a016f29b4b8b%2F0516_4.jpg?generation=1589645793785322&amp;alt=media" alt=""></p> <p>my model seems to have some momentum :) also <a href="/bernhardklinger">@bernhardklinger</a> and <a href="/rohanrao">@rohanrao</a> after 2 days.</p> <p><a href="/titericz">@titericz</a> perfect thanks, your notebooks I wasn't 100%, I just saw a private dataset in the beginning and imports of from sklearn.ensemble import GradientBoostingRegressor from sklearn.ensemble import GradientBoostingRegressor from lightgbm import LGBMRegressor import xgboost as xgb. Thought the dataset could have had pretrained models. You will see the update tomorrow :)</p> <p>Update 0517</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0bddc94ad756b73102769c6b31fe0bac%2F0517_1.jpg?generation=1589743688557080&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F02c5d9f6fd118a7a6522c4a7520c3809%2F0517_2.png?generation=1589743701436371&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc5c62bf4b4bb151a381ffc2a41d937e5%2F0517_3.png?generation=1589743715975040&amp;alt=media" alt=""></p> <p>Additional scorecard - Best LB change Top 31-88 - 3 day period 3d LB change Top 31-60 &amp; Top 31-60 3d ave. % score diff. Prediction Top20, 1, 3 &amp; 5 day ahead predictions with calc TodayScore+(((TodayScore-YesterdayScore)+((TodayScore-prev3dscore)/3))/2)PredDayAhead</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F659473220e376d70cb6c493f256025e7%2F0517_4.jpg?generation=1589743750645926&amp;alt=media" alt=""></p> <p>Thanks :) I think it's a good way to keep track of all movements between the models from top to bottom, identify patterns. Each team can also compare more easy and do thier own analysis based on the idea/architecture of the model.</p> <p>Please upvote if you think this is usefull to follow, then I know if it's good to continue the updates and not creating lots of spam :) And for you who click the downvote, this is not a state-of-the-art scorecard, please contribute with a new scorecard, you probably do better than me at this, I just making a good try :)</p> <p>An idea to the scorecard is also to get more use of the analysis and see some patterns that helps to create better models and predictions to those in need e.g. identify models there are better in short, medium and long term, use them thereafter and combine them.<br> Explanation to the columns:<br> Score diff - is the % difference between 2 LB scores(1d = 1 day difference)<br> Dist. to the top - distance to the leader.<br> Models - which technique(LGB, math, Fastai etc) that are used in the models, in top 20, can be interesting to have and follow. 
Because a team has two submission, I just add what I see, and complements ahead to the team's total.<br> Diff from last day - the difference between the distance to the leader, from yesterday, approach the leader or lose momentum.<br> KPI "1d(day) best moverโ€ฆ" - Total move(score difference) the group did against the leader and the individual contribution. Track the individual movement within the group against the leader, see the performance in that timeline regardless what speed has as a whole.<br> Prediction Top20, 1, 3 &amp; 5 day ahead predictions with calc - <br> TodayScore+(((TodayScore-YesterdayScore)+((TodayScore-prev5dscore)/5))/2)PredDayAhead</p> <p>One can calculate in many different ways to see that comparison, this was a quick way to do it.</p> <p>Also check notebooks below for more analysis.<br> <a href="https://www.kaggle.com/gaborfodor/w5-top-submissions" target="_blank">https://www.kaggle.com/gaborfodor/w5-top-submissions</a> <br> <a href="https://www.kaggle.com/david1013/covid-19-week-5-daily-predictions-top-teams/" target="_blank">https://www.kaggle.com/david1013/covid-19-week-5-daily-predictions-top-teams/</a></p> <p>Thanks! :) I'll take that in count. I think I will try a mix of prev scorecard pred calc and with some modifications, to use in this scorecard's pred columns, for 1, 3 and 5 days pred ahead. ((TodayScore+(((TodayScore-YesterdayScore)+((TodayScore-prev5dscore)/5))/2)<em>PredDayAhead)+TodayScore</em>(((1dAveScoreDiff+5dAveScoreDiff)/2)+1)^PredDayAhead)/2</p> <p>I've added which technique(LGB, math, Fastai etc) that are used in the models, in top 20, can be interesting to have and follow. Because a team has two submission, I just add what I see, and complements ahead to the team's total.</p> <p>Update 0515</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F8179a7818c6f91444794c2a3e5620aa5%2F0515_1.jpg?generation=1589567937497147&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fd62e50280e4710d3b0ec4eeab93a51ba%2F0515_2.jpg?generation=1589567953423085&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc93c0bc4c271c858627868393692f296%2F0515_3.png?generation=1589568365057861&amp;alt=media" alt=""></p> <p>Additional scorecard - Best LB change Top 31-88 - 1 day period 22d LB change Top 31-60 &amp; Top 31-60 1d ave. % score diff. 
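<p>For reference, here is a small Python transcription of the two day-ahead prediction formulas quoted in these updates (a sketch of the scorecard calcs above; <code>^</code> in the scorecard text denotes exponentiation, and all inputs are plain floats):</p>
<pre><code>def pred_day_ahead(today, yesterday, prev_nd, n, days_ahead):
    # Average of the 1-day trend and the n-day average trend, extrapolated linearly.
    trend = ((today - yesterday) + (today - prev_nd) / n) / 2
    return today + trend * days_ahead

def pred_day_ahead_blend(today, yesterday, prev_nd, n,
                         ave_diff_1d, ave_diff_nd, days_ahead):
    # Blend of the linear extrapolation with a compounded average % score diff.
    linear = pred_day_ahead(today, yesterday, prev_nd, n, days_ahead)
    compounded = today * (((ave_diff_1d + ave_diff_nd) / 2) + 1) ** days_ahead
    return (linear + compounded) / 2
</code></pre>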
<p>Update 0515</p> <p>[scorecard charts]</p>
<p>Additional scorecard - Best LB change Top 31-88 - 1 day period: 1d LB change Top 31-60 &amp; Top 31-60 1d ave. % score diff. Prediction Top20: 1 day ahead predictions, using the actual score value change added to the new number, times the number of days ahead.</p>
<p>[scorecard chart]</p>
<p>Update 0601</p> <p>[scorecard charts]</p> <p>Additional charts - Visualizing the scores &amp; Dist to the top [charts] Score and LB Top 20 [chart]</p>
<p>Update 0530</p> <p>[scorecard charts]</p> <p>Additional charts - Visualizing the scores &amp; Dist to the top [charts] Score and LB Top 20 [chart]</p>
<p>Update 0531</p> <p>[scorecard charts]</p> <p>Additional charts - Visualizing the scores &amp; Dist to the top [charts] Score and LB Top 20 [chart]</p>
<p>Update 0604</p> <p>[scorecard charts]</p> <p>Additional charts - Visualizing the scores &amp; Dist to the top [charts] Score and LB Top 20 [chart]</p>
<p>Update 0605</p> <p>[scorecard charts]</p> <p>Additional charts - Visualizing the scores &amp; Dist to the top [charts] Score and LB Top 20 [chart]</p>
<p>Update 0602</p> <p>Additional charts - Visualizing the scores &amp; Dist to the top [charts] Score and LB Top 20 [chart]</p>
<p>It will come soon :)</p>
<p>Update 0603</p> <p>[scorecard charts]</p> <p>Additional charts - Visualizing the scores &amp; Dist to the top [charts] Score and LB Top 20 [chart]</p>
<p>Update 0526</p> <p>[scorecard charts]</p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top [charts]</p>
<p>Score and LB Top 20 [chart]</p>
<p>Update 0525</p> <p>[scorecard charts]</p>
<p>Additional charts - Visualizing the scores and Dist to the top [charts]</p>
<p>Score and LB Top 20 [chart]</p>
<p>Additional scorecard - Best LB change Top 31-88 - 11 day period: 11d LB change Top 31-60 &amp; Top 31-60 11d ave. % score diff.
Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc <code>TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dScore) / 5)) / 2) * PredDayAhead</code></p>
<p>[scorecard chart]</p>
<p>Update 0529</p> <p>[scorecard charts]</p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top [charts] Score and LB Top 20 [chart]</p>
<p>Halfway into w5 we can see some patterns. I think my model could predict with more accuracy with an NN in the first days. The model also works as one of several ensemble models to balance the predicted trends with the RF average. So far in W5 it looks like an ensemble of the top 3 models from teams DiDoKaPs / disinfectant injection / Lockdown / Northquay would be the best model to use, maybe with increasing and decreasing ensemble weights with DiDoKaPs, and also adding Bernhard Klinger for the long-term models.</p>
<p>Update 0527</p> <p>[scorecard charts]</p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top [charts]</p>
<p>Score and LB Top 20 [chart]</p>
<p>Update 0528</p>
<p>Halfway into w5 we can see some patterns. So far in W5 it looks like an ensemble of the top 3 models from teams DiDoKaPs / disinfectant injection / Lockdown / Northquay would be the model to use, maybe with increasing and decreasing ensemble weights with DiDoKaPs, and also adding Bernhard Klinger and Beluga as the long-term models. I think my own model could predict with more accuracy using an NN in the first days. The model may also work as one of several ensemble models to balance the predicted trends with the RF average.</p>
<p>[scorecard charts]</p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top [charts] Score and LB Top 20 [chart]</p>
COVID19 Global Forecasting (Week 4)
Week 1 Winners
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The Week 1 competition concluded on Thursday and congratulations are in order for OsciiArt, PDD, Roger Li, and other top finishers. OsciiArt in particular put together a very nice notebook showing how to use lightgbm effectively with both provided and external data, and it has become a great benchmark for all four weeks. In general I recall that first week being quite hectic and it is impressive to look back and see the quality of models put together in such a short time frame.</p>
COVID19 Global Forecasting (Week 4)
~1st Place Solution LGBM with some adjustments
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Summary</h2> <p>LGBMs with quantile regression were trained on time-series and geo features for short-term predictions. Aggressive 1w average smoothing was used for long-term predictions. Due to the large number of small locations, the top 30 countries/states had to be adjusted manually.</p> <strong>Notebooks</strong> <p><a href="https://www.kaggle.com/gaborfodor/c19w5-create-submission">https://www.kaggle.com/gaborfodor/c19w5-create-submission</a> <a href="https://www.kaggle.com/gaborfodor/c19w5-check-submission">https://www.kaggle.com/gaborfodor/c19w5-check-submission</a> <a href="https://www.kaggle.com/gaborfodor/c19w5-train-lgbs">https://www.kaggle.com/gaborfodor/c19w5-train-lgbs</a></p> <h2>Feature Extraction</h2> <ul> <li>Population</li> <li>Latitude, Longitude</li> <li>Day of week</li> <li>Share of total cases for each day of week</li> <li>Rolling mean/std for 1w, 2w, 3w</li> <li>Cumulative totals</li> <li>Confirmed - Fatality rate</li> <li>Trend from the last 2-3 weeks</li> <li>Features normalized by population</li> <li>Nearby features based on the closest 5-10-20 locations <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F1a011ee06339075039995790bd86b046%2FScreenshot%202020-05-31%20at%2010.32.51.png?generation=1591101401062776&amp;alt=media" alt=""></li> </ul> <p>Features were rescaled and rounded to 1-2 decimals to decrease overfitting.</p> <h3>External data</h3> <p>I started by searching public US county-level demographic data (age, income, population density, covid19 lockdown info etc.). I found a few useful sources, though I did not have time to clean and merge them. The only external data I used was the geo-encoded lat-lon coordinates for each location.</p> <h2>Modeling</h2> <p>For each target/quantile/forecast lag, a separate model was trained with location-based 5-fold CV and early stopping based on pinball loss. Models were only trained to predict the next 1-14 days.</p> <ul> <li>Trained a bunch of LGBMs with random parameters to blend</li> <li>Sample weighting based on location weights and time decay</li> </ul> <h2>Post processing</h2> <ul> <li>Clipped negative predictions at 0</li> <li>Made sure the 0.05 (0.95) quantile predictions are not higher (lower) than the median</li> <li>Smoothed daily predictions (<code>Y[t] * 0.66 + Y[t-1] * 0.33</code>)</li> <li>For the US country total, used the state-level rollups for the median</li> <li>Manually inspected and adjusted the top 30 countries</li> <li>Flat long-term predictions based on the last predicted weekly average</li> <li>A small daily decay was added to the 0.05 quantile and the median</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F1788c82f1c31a3e3900073134256424a%2FScreenshot%202020-06-02%20at%2014.39.01.png?generation=1591101619319599&amp;alt=media" alt=""></p>
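<p>To make the post-processing concrete, here is a minimal sketch of the first three steps, assuming the three quantile tracks for one location are 1-D numpy arrays ordered by day (the helper and array layout are illustrative, not taken from the linked notebooks):</p>
<pre><code>import numpy as np

def postprocess(q05, q50, q95):
    # Clip negative predictions at 0
    q05, q50, q95 = (np.clip(q, 0, None) for q in (q05, q50, q95))
    # Smooth daily medians: Y[t] * 0.66 + Y[t-1] * 0.33
    q50 = np.concatenate([q50[:1], q50[1:] * 0.66 + q50[:-1] * 0.33])
    # Make sure the 0.05 (0.95) quantile is not higher (lower) than the median
    q05 = np.minimum(q05, q50)
    q95 = np.maximum(q95, q50)
    return q05, q50, q95
</code></pre>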
<a href="https://www.kaggle.com/gaborfodor/w5-top-submissions">https://www.kaggle.com/gaborfodor/w5-top-submissions</a> has a section for the unexpected regions for each team.</p> <p>It will be close for sure <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F9d4c5fb091a0ea296af174b8601d6efc%2FScreenshot%202020-06-04%20at%2015.48.39.png?generation=1591278553258974&amp;alt=media" alt=""></p> <p>Thanks, I don't think I will be able to reach you, but I hope to get closer ;)</p>
COVID19 Global Forecasting (Week 4)
~5th Place Simple Linear QuantReg Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, unfortunately late subs are not enabled :(</p> <p>The task here is a time-series forecast predicting the Covid19 daily numbers of Cases and Fatalities for countries, and also for some US regions. But the competition also challenged us to predict the 5% and 95% quantiles of those numbers, and the metric chosen for that is the pinball loss, which isn't an easy one to optimize. The pinball loss pushes forecasts to keep the truth between the 5 and 95 quantile range; every observation outside those limits is extremely penalized, so models need to take care when predicting the lower and upper limits.</p> <p>For this challenge I built a simple linear model using the <a href="https://www.statsmodels.org/devel/generated/statsmodels.regression.quantile_regression.QuantReg.html">QuantReg</a> regressor from the statsmodels lib. QuantReg turned out to be very good at estimating the quantile forecasts out of the box.</p> <p>To train the model I used data from 2020-04-01 to 2020-04-26, and to validate, from 2020-04-27 to 2020-05-10. Note that I used only 2 weeks for validation while the Private LB will be calculated over 30 days, so I expect some drift in my forecasts for the last 15 days of private data.</p> <p>Two independent models were trained, one for daily Cases and one for Fatalities. Both models use similar features built from some lags and rolling-window statistics. Given the quantile-band forecasts, it is logical that a good feature could be a rolling window of the standard deviation. I added rolling std() windows of sizes 7, 14, 21 and 28, but due to a typo in my code only windows 7 and 28 are used in the final submission. I tried other kinds of rolling-window functions, but everything else decreased my validation scores. All my predictions in the validation period and on the Private dataset are made recursively (the prediction for day t becomes the lag(1) of day t+1).</p> <p>Also, to avoid issues with the 5 and 95 quantiles close to the end of the 30-day period, I added a linear margin every day in order to always increase the 95 quantile and decrease the 5 quantile forecast.</p> <p>The validation score for daily Cases is 0.2619 and for Fatalities 0.2341, ~0.248 on average over the two-week validation period. Right now, about 15 days of Private data are already scored and my pinball score is 0.2301, pretty close to my calculated scores.</p> <p>Here is an example of my forecasts for US Cases and Fatalities: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F54836%2F9f43bc1d7f00db208c3af8d362aafe3f%2FScreenshot%20from%202020-05-29%2015-50-52.png?generation=1590778310937696&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F54836%2F4accb58cccc80b5beb662b047397a22e%2FScreenshot%20from%202020-05-29%2015-31-44.png?generation=1590777149092734&amp;alt=media" alt=""></p> <p>Link to my notebook <a href="https://www.kaggle.com/titericz/test-1?scriptVersionId=33803268">here</a>.</p>
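<p>The linked notebook has the full pipeline; as a rough illustration of the core idea, fitting one QuantReg model per quantile on lag and rolling-std features could look like the sketch below (the lag feature and helper names are illustrative assumptions, except the 7/28-day std windows mentioned above):</p>
<pre><code>import pandas as pd
from statsmodels.regression.quantile_regression import QuantReg

def make_features(daily: pd.Series) -> pd.DataFrame:
    df = pd.DataFrame({"y": daily})
    df["lag1"] = df["y"].shift(1)  # becomes the recursive input at prediction time
    for w in (7, 28):              # the two std windows used in the final submission
        df[f"std{w}"] = df["y"].shift(1).rolling(w).std()
    df["const"] = 1.0
    return df.dropna()

def fit_quantile_models(df: pd.DataFrame, quantiles=(0.05, 0.50, 0.95)):
    X = df.drop(columns="y")
    # One independent quantile regression per target quantile
    return {q: QuantReg(df["y"], X).fit(q=q) for q in quantiles}
</code></pre>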
COVID19 Global Forecasting (Week 5)
4th Place Neural Network Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><pre><code>def get_custom_loss(q):
    def custom_loss(y_true, y_pred):
        return pinball_loss(y_true, y_pred, q)
    return custom_loss
</code></pre> <pre><code>for loss, lr, epoch in [(get_custom_loss([0.15, 0.50, 0.85]), 0.0001, epochs[0]),
                        (get_custom_loss([0.05, 0.50, 0.95]), 0.00005, epochs[1])]:
    model.compile(loss=loss, optimizer=SWA(Nadam(lr=lr), average_period=2))
    model.fit(X_hist_train, get_output(train_df), sample_weight=get_weight(train_df),
              shuffle=True, batch_size=128, epochs=epoch, verbose=0)
</code></pre> <p>The competition metric is pinball loss with quantiles [0.05, 0.50, 0.95]. Instead of directly optimizing for that, I first optimized for a narrower uncertainty band ([0.15, 0.50, 0.85]) to avoid a very cautious local minimum.</p> <p>I think the models we build are too late to be useful in this pandemic. They can hopefully (but hopefully not) be used in another pandemic. Therefore I wanted to have a re-usable Machine Learning solution. So far it performs okay, but it is early to talk about that. Anyway, I wanted to explain it before I forget the details. I have 2 notebooks, one for training and one for inference: <a href="https://www.kaggle.com/aerdem4/covid-19-w5-training">https://www.kaggle.com/aerdem4/covid-19-w5-training</a> <a href="https://www.kaggle.com/aerdem4/covid-19-w5-pipeline">https://www.kaggle.com/aerdem4/covid-19-w5-pipeline</a></p> <p>It is a CNN model trained on the last 2 weeks of data. In the previous covid-19 competitions, my NN had diverged over time. I learned from my mistakes and didn't want to do recursive predictions this time, so I trained a different model for each number of days ahead. Here is the model architecture:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F471945%2Fa274824b8ac4631b021d4bcbbcc7aac2%2Fmodel.PNG?generation=1590745456988152&amp;alt=media" alt=""></p> <p>Some tricks that worked:</p> <ul> <li>A cumulative sum over relu as the last layer guarantees non-negative predictions and quantiles that are always in order.</li> <li>Training directly with pinball loss. In the initial epochs, I trained for the 0.15, 0.50 and 0.85 quantiles; the late epochs are trained with the actual quantiles (0.05, 0.50, 0.95). Without this, the model starts being cautious very early and gets stuck in a local minimum.</li> <li>Bagging 5 NN models for each day to reduce randomness.</li> <li>Smoothing predictions for each day n: <code>y(n) = 0.2*y(n-1) + 0.6*y(n) + 0.2*y(n+1)</code></li> <li>Rounding predictions to integers. This probably helps a little with zero cases.</li> </ul>
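<p>The snippets above call a <code>pinball_loss</code> whose implementation is not shown in the write-up. A standard Keras-compatible definition consistent with that call signature (a reconstruction, not the author's exact code) would be:</p>
<pre><code>import tensorflow as tf

def pinball_loss(y_true, y_pred, q):
    # One output column per quantile in q; y_true broadcasts across them.
    q = tf.constant(q, dtype=y_pred.dtype)
    error = y_true - y_pred
    # Penalize under-prediction by q and over-prediction by (1 - q).
    return tf.reduce_mean(tf.maximum(q * error, (q - 1.0) * error))
</code></pre>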
COVID19 Global Forecasting (Week 5)
9th place solution (a lot of LGBMs blended)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My approach was quite straightforward: I trained a lot of very simple LGBM models, blended their predictions, and applied an additional ~linear transformation to optimize the final metric.</p> <p><a href="https://www.kaggle.com/ilialar/w5-solution-2">Notebook with my final solution</a></p> <p><strong>Overview</strong></p> <ul> <li><p>Each model takes as input the numbers of new cases and fatalities during the last N days, divided by the maximum value of cases/fatalities for the region up to the last input day, and predicts the number of cases or fatalities on the Kth day after the last input day (again divided by the max value up to date).</p></li> <li><p>Each model is trained using all available data: all regions and all possible N+K day frames. But different days had different weights, the later the higher: the weight for each particular day is proportional to <code>(0.9)**(-n)</code>, which means that the last days in the train set had much greater significance during training.</p></li> <li><p>Each model is just <code>lightgbm.LGBMRegressor()</code> with default parameters. I trained a lot of different models, iterating over N (<code>range(5, 40, 5)</code>) and K (<code>range(1, 32)</code>).</p></li> <li><p>Then I used all models to make all possible predictions for each day in the test or validation sets (I used the last 30 days for validation) and computed the min, max, and average prediction for each day.</p></li> <li><p>It turned out these predictions were quite pessimistic: predicted values were very high. That's why I applied an additional linear transformation and deviation factor and computed the final predictions using the formulas below (coefficients are optimized on validation).</p></li> </ul> <pre><code>pred_0.5  = a[0] * mean + a[1] * min + a[2] * max
pred_0.05 = pred_0.5 + (b[0] * mean + b[1] * min + b[2] * max) * dev(lamb_min)
pred_0.95 = pred_0.5 + (c[0] * mean + c[1] * min + c[2] * max) * dev(lamb_max)
</code></pre> <p>Where <code>a</code>, <code>b</code>, and <code>c</code> are just linear coefficients, and <code>dev(lamb) = np.array([lamb * n for n in range(num_days_to_predict)])</code> is used to make predictions for later days less (or more) confident.</p> <p>What I have not done but probably should have (I joined quite late and just did not have enough time):</p> <ul> <li>Optimize the hyperparameters of LGBM</li> <li>Smooth predictions (I see a lot of outliers on some days), for example by using the 5% and 95% percentiles of predicted values instead of min and max</li> <li>Use other information as inputs, or somehow utilize information about connections between different regions</li> </ul>
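<p>A minimal sketch of the N-days-in / K-days-ahead setup described above, for a single region (the write-up pools frames across all regions; the helper names and <code>daily_cases</code> array here are illustrative):</p>
<pre><code>import numpy as np
import lightgbm as lgb

def make_frames(series, N, K):
    # Build (last-N-days, Kth-day-ahead) pairs, scaled by the running maximum.
    X, y = [], []
    for t in range(N, len(series) - K + 1):
        scale = max(series[:t].max(), 1.0)   # max value up to the last input day
        X.append(series[t - N:t] / scale)
        y.append(series[t + K - 1] / scale)
    return np.array(X), np.array(y)

# One default-parameter regressor per (N, K) pair, as in the write-up.
models = {}
for N in range(5, 40, 5):
    for K in range(1, 32):
        X, y = make_frames(daily_cases, N, K)
        w = 0.9 ** -np.arange(len(y))        # later frames weighted higher
        models[(N, K)] = lgb.LGBMRegressor().fit(X, y, sample_weight=w)
</code></pre>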
COVID19 Global Forecasting (Week 5)
Linear Regression Is All you Need (9th solution week 4)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. Note that I no longer rule the discussion ranking ;)</p> <p>I said I'd describe my solution if it ended up in the top 10, so here it is.</p> <h1>Acknowledgements</h1> <p>First of all, I'd like to thank Kaggle for this challenging series of forecasting competitions. I say challenging for two reasons. First, this is about people being sick and people dying. It is hard to abstract from that dire reality. Second, one-week sprints are too short for me. I need more time to immerse in a given machine learning problem. In this case it took me 4 weeks and 2 days to find a model I was happy with.</p> <p>I also want to thank my teammates for the first 3 weeks, Ahmet and Giba. They preferred to go solo the last week, but I would have been happy to stick together as a team. We used very different models (NN for Ahmet, xgb/lgb for Giba, lr for me) with the hope that together they would work well. They actually did in week 2, but we deselected that blend at the last minute because of some cv issue. It would have landed us at rank 8 unless I'm mistaken. Week 3 was a bit disappointing for us.</p> <p>I also warmly thank those who curated the additional datasets I used, especially Vopani. Using these made a difference.</p> <p>Last but not least I want to congratulate those who did consistently well in these competitions, and especially those who did well in week 4. <a href="/david1013">@david1013</a> seems to have great models, even when not using an extra day of data, if I read his recent posts correctly. It would be interesting to know the Lockdown and Kaz model scores when not using an extra day as well. Paulo Pinto's performance in week 4 is awesome. I can't name everyone, but let me mention the PDD team who did well in several weeks too.</p> <h1>Modeling Goal</h1> <p>I decided early on not to approach this the way I usually build forecasting models. The reason is we are modeling a physical phenomenon where people get infected, then infect others for some time, then either recover or die. I therefore studied <a href="https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology">compartmental models used in epidemiology</a>, SIR and the like. These models rely on two time series: cases and recoveries/deaths. If we have accurate values for both then we can fit these models and get reasonably accurate predictions.</p> <p>The issue is we don't have these series.</p> <p>For cases we have a proxy, confirmed cases. This is a proxy in many ways:</p> <ul> <li>It depends on the testing policy of each geography. Some test a lot, and confirmed cases are close to all cases.</li> <li>A large fraction of sick people are asymptomatic, hence are easily missed by testing.</li> <li>Testing does not happen when people get infected or contagious; it often happens with a delay.</li> </ul> <p>For all these reasons the confirmed case numbers we get are a distorted view of actual cases.</p> <p>For fatalities the numbers aren't accurate either:</p> <ul> <li>In some geos we only get deaths recorded at hospitals; in other geos the numbers include fatalities from nursing homes.</li> <li>We don't have recoveries data. The latter can be fixed by grabbing recovery data from other online sources.
This has been done by some top competitors; I wish I had done it.</li> </ul> <p>Despite all these caveats, I assumed that we still have some form of SIR model at play with the two series we have at hand: fatalities depend on cases detected a while ago. That led to my first model.</p> <h1>Fatalities Prediction Via Linear Regression</h1> <p>If we assume that fatalities are a proportion of cases from, say, 8 days ago, then the proportion can be computed from log-scaled data. See examples below for some countries. The green curve is the difference between confirmed cases (blue) and fatalities 8 days later (orange).</p> <p>Italy:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fd840d692feb332ba0ba753ab56577def%2Fitaly.png?generation=1585856510539884&amp;alt=media" alt=""></p> <p>France:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F148776fdca0ba1cc6dde63dbc7c720b9%2Ffrance.png?generation=1585856534028494&amp;alt=media" alt=""></p> <p>Spain:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F4a4e692a327677c254dd0a6a402caff1%2Fspain.png?generation=1585856554679533&amp;alt=media" alt=""></p> <p>Then running a linear regression with the log of past cases as input and the log of fatalities as output should capture this dependency. I published a simplified version in <a href="https://www.kaggle.com/cpmpml/fatalities-prediction-via-linear-regression">a notebook</a>. The above pictures are taken from it.</p> <p>The fact that the proportion depends on the geography can be captured via additional input features. In practice I found that the increase in log fatalities is well modeled by a linear regression on cases from between 15 and 5 days ago. These limits were tuned via cross validation. They match what researchers say about Covid.</p> <h1>Cross Validation</h1> <p>We used time-based cross validation for our models. We selected 3 to 5 folds, depending on the week's competition. Each fold uses data up to a given date for training models, and data after it for validation. One fold was defined by the actual test data: validation starts at the first day of test data, even though train and test data overlap.</p> <p>Models in our team were trained to predict the next day. To predict over a longer period we append the prediction to the train data, then apply the model to it shifted by one day to get the prediction for the second day. We repeat for all remaining days in validation or test. Other teams trained one model per prediction day. I didn't think of it, but it is definitely doable given how fast my model trains.</p> <h1>Predicting cases</h1> <p>For cases I used an auto-regressive model, i.e. linear regression using past values as input. That model was not that bad, but it turned out to be pessimistic. The flattening of curves we observed wasn't captured correctly. I spent the last 2 weeks thinking about how to model this. I tried to revisit the epidemiology models, but couldn't find a way to reliably fit them for all geos. I didn't want to add an arbitrary decay, or fit a curve either. I therefore turned back to data exploration.
And the day before the deadline I found that the log of the decay in daily cases was very interesting - see some plots below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fdf16b94f837e4ed920730f7375df613a%2Fweek4.png?generation=1587030089243556&amp;alt=media" alt=""></p> <p>It means that daily cases decrease linearly in log space, i.e. the number of daily cases is of the form N - exp(a t), where N is a starting population, a a constant, and t the number of days. I explain this derivation <a href="https://www.kaggle.com/c/covid19-global-forecasting-week-5/discussion/143791">here</a>.</p> <p>The model is therefore fitting a linear model on the log of cases over time.</p> <p>This model works well in geos where the number of cases is high, but fails miserably in geos where the epidemic is just starting. I therefore blended it with the previous one, with weights depending on the number of cases a while ago - 12 days, if I remember well. This model improved CV scores tremendously compared to the previous one. It was modeling curve flattening rather well, at last.</p> <p>I then tried the same with fatalities: they indeed decrease linearly in log space, but the data is sparse: I could fit a linear regression reliably only for geos with a rather large number of cases. Fortunately for people, and unfortunately for my model, there were not that many geos like this. Then I realized I could use the same trick as in my very first model: the log of fatalities depends linearly on past log cases. I could therefore fit a linear regression with past log cases as input and log fatalities as output. I used the same past period, i.e. cases between 15 and 5 days ago more or less (I don't remember the exact figures). My fatalities model is therefore a blend of the original linear model from past cases, a linear regression fit on past log fatalities, and a linear regression fit on past log cases.</p> <p>That's what I submitted.</p> <h1>Post Deadline</h1> <p>I am rather pleased to see that a simple blend of linear models was competitive. I wasn't sure I could handle differences between geos via adaptive weights when blending, but it seems it was good enough.</p> <p>After one day of rest after the deadline I realized I could have done a better job. If fatalities depend on the slope of the log of cases, and if that slope is constant, then why estimate it on days from 15 to 5 days before, and not on days from, say, 10 days ago to the current day? Using more recent data is always better, so why not here? I implemented this small change and used a late submission. This improved model's score was similar to the one I submitted in time, until last week when it started to outperform it. Its final private LB score is 0.41840. Bottom line: it took me 4 weeks and 2 days to get the right model...</p> <h1>Week 5</h1> <p>I contemplated entering week 5 but didn't, for 3 reasons. One reason is I was bothered by the extra day of data from worldometer. I saw people who used it in week 4 ask for it to be forbidden; it would have been a smart move IMHO. Another reason, more important, is that a one-week sprint with changes in the competition setting was too short for me. The last reason may be even more important: I think my model works better when there are high levels of cases and fatalities.
The inclusion of smaller-grained geos like US counties probably means that my model would not be a great fit.</p> <p>I admire those who entered week 5, and I'm looking forward to their writeups when the competition completes!</p> <p>One week was too short for me.</p> <p>Thanks!</p> <p>Thanks, and congrats on the consistent performance across weeks.</p> <p>Simple models are often overlooked in Kaggle competitions ;)</p>
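<p>Returning to the modeling idea above: the core trick - regressing log fatalities on a window of lagged log cases - fits in a few lines. Below is a minimal sketch on synthetic data (the 15-to-5-day lag window follows the write-up; the data and everything else are illustrative):</p> <pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
cases = rng.random(120).cumsum() * 100        # fake cumulative cases for one geo
fatalities = 0.05 * np.roll(cases, 8)         # fake: ~5% of cases, with an 8-day delay
fatalities[:8] = 0

LAG_MIN, LAG_MAX = 5, 15                      # use log cases from 15 to 5 days ago
X, y = [], []
for t in range(LAG_MAX, len(cases)):
    X.append(np.log1p(cases[t - LAG_MAX:t - LAG_MIN + 1]))
    y.append(np.log1p(fatalities[t]))

model = LinearRegression().fit(np.array(X), np.array(y))
pred = model.predict(np.array(X[-1:]))        # predicted log fatalities for the last day
print(np.expm1(pred))                         # back to the count scale
</code></pre>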
COVID19 Global Forecasting (Week 5)
My Solution for week 5
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Wow Vitalii, thanks a lot!! Congratulations on your performance as well!</p> <p>You're welcome!</p> <p>Thanks!</p> <p>You are welcome!</p> <p>Hi! The one you can get if you click on my position in the leaderboard 😉</p> <p>Hi everybody, this week I did not have time to adapt my previous approach, so I simply used a public baseline and then added some code for the quantiles. Basically I used the Poisson distribution to estimate the 0.05 and 0.95 quantiles.</p>
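<p>As a hedged illustration of that quantile idea - assuming the point forecast is used as the Poisson mean, which the write-up does not spell out:</p> <pre><code>from scipy.stats import poisson

point_forecast = 120.0                  # hypothetical predicted daily count, used as the mean
q05 = poisson.ppf(0.05, mu=point_forecast)
q95 = poisson.ppf(0.95, mu=point_forecast)
print(q05, point_forecast, q95)         # lower quantile, point forecast, upper quantile
</code></pre> <p>For counts this large the Poisson interval is quite narrow (roughly the mean plus/minus 1.64 times its square root), which is one simple way to turn a single baseline forecast into the three required quantiles.</p>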
COVID19 Global Forecasting (Week 5)
PDD Writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my teammates @dott1718 and @christofhenkel for collaborating throughout the four competitions. As even the last week's competition is already close to half-time in terms of scoring, we want to give a short overview of our solutions. We are really happy with our results across the four weeks, specifically as more optimistic models (predicting fewer cases and fatalities) performed better and our most pessimistic models in week 2 performed the worst.</p> <p><strong>Week 1</strong></p> <p>Everyone went into this competition with a lot of fear and uncertainty about covid-19, and I think this also reflects in the models that were produced. Very little data was available and most countries were in early periods of severe exponential growth. We quite quickly figured that utilizing any ML model did not seem appropriate here due to the lack of historical data. We thus crafted the basis for our main model that we utilized throughout all four weeks. For predicting confirmed cases, the model uses a constant growth rate that is reduced by a decay that is accelerating over time. For fatalities, we multiply predicted cases at a given time lag by a fatality rate for the given country, and also add a constant growth.</p> <p>This model allowed us some flexibility in the choice of (hyper-)parameters. We ended up deciding on one optimistic and one pessimistic submission in week 1, where the optimistic one has lower growth rates and higher acceleration of the decay of this growth rate, and the pessimistic one has it the other way around. For countries where we already have "sufficient" data above a certain threshold, we learn the growth over a certain time period, and for others we employ default growth rates.</p> <p>Optimistic sub: <a href="https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-09?scriptVersionId=30833479">https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-09?scriptVersionId=30833479</a><br> Pessimistic sub: <a href="https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-03-per-5?scriptVersionId=30833633">https://www.kaggle.com/dott1718/cv19-by-growth-rate-v5-03-per-5?scriptVersionId=30833633</a></p> <p>The optimistic sub predicted globally 2.6 mio cases and 120k fatalities after 30 days, as of 22nd of April. The pessimistic one was way more doomsday and predicted 24 mio cases and 430k fatalities. The optimistic one ended 2nd on the private LB and scored 1.07716, while the pessimistic one still scored 1.29940 and would have been 5th. This also gives you a good idea of the problems of this metric, because even if you are off that far globally from the actual numbers, if you get some low-case countries right, the metric improves drastically.</p> <p>Now looking back, I have to say that our global estimation here was really, really precise. I just checked worldometer data, and on 23rd of April the actual confirmed cases were <strong>2,722,000</strong> and what we predicted was <strong>2,650,252</strong>. I would not have thought that we could be so precise there, even if individual countries can be off.
</p> <p><strong>Week 2</strong></p> <p>Even more exponential short-term growth was observed in the preceding week. We mostly re-utilized our models from week 1, adapted them a bit, and unfortunately overfitted them on short-term growth patterns, which made them quite pessimistic in retrospect. The second submission blends a power-law fit with exponential cutoff into the model, and it is actually our better sub here.</p> <p>Sub 1: <a href="https://www.kaggle.com/dott1718/cv19w2-2-sub?scriptVersionId=31270684">https://www.kaggle.com/dott1718/cv19w2-2-sub?scriptVersionId=31270684</a><br> Sub 2: <a href="https://www.kaggle.com/philippsinger/cv19w2-pl4-sub?scriptVersionId=31271930">https://www.kaggle.com/philippsinger/cv19w2-pl4-sub?scriptVersionId=31271930</a></p> <p><strong>Week 3</strong></p> <p>We saw that machine learning solutions were slowly starting to shine, specifically for short-term predictions, and decided on the one hand to further tune our model and submit it, and on the other to blend in some of the public ML solutions as a second submission. In detail, we used subs from @cpmpml (+team), @gaborfodor and @osciiart - so thanks to them! We heavily overweighted our model and the one from oscii in the final blend, so the weights were 8-8-1-1.</p> <p>Single model sub: <a href="https://www.kaggle.com/philippsinger/cv19w3-2-v2-play-2-v3fix-sub-last6dayopt?scriptVersionId=31654736">https://www.kaggle.com/philippsinger/cv19w3-2-v2-play-2-v3fix-sub-last6dayopt?scriptVersionId=31654736</a><br> Blend sub: <a href="https://www.kaggle.com/christofhenkel/cv19w3-v3fix-cpmp-oscii-belug-full-8118?scriptVersionId=31655753">https://www.kaggle.com/christofhenkel/cv19w3-v3fix-cpmp-oscii-belug-full-8118?scriptVersionId=31655753</a></p> <p>Our blend sub predicts 3,668,603 confirmed cases and 295,911 fatalities for 7th of May, which is ~10 days in the future. I think globally this solution will also come quite close to reality. We are also currently ranked 3rd in the competition and trending upwards, even though the gap to 2nd might be too large.</p> <p><strong>Week 4</strong></p> <p>Again, we decided to improve our model and blend it with other solutions. The big change we made to our model was that fatality predictions no longer rely on predictions of confirmed cases, but rather also take the individual fatality curves into account for future predictions. We also blend our model now with solutions from @osciiart, @rohanrao and @david1013. Our two subs this time have different weights for the blends, usually overweighting our model. We also fully rely on our model predictions for the Chinese regions.</p> <p>Sub 1: <a href="https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub1?scriptVersionId=32072839">https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub1?scriptVersionId=32072839</a><br> Sub 2: <a href="https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub2?scriptVersionId=32072928">https://www.kaggle.com/christofhenkel/cv19w4-full-pdd-oscii-david-vopa-sub2?scriptVersionId=32072928</a></p> <p>Sub 1 predicts globally 3,898,447 cases and 278,848 fatalities on 14th of May, while sub 2 predicts 3,802,472 and 224,623 respectively. In general it seems that public subs tend to agree way more on cases, and fatalities seem to drive most of the score differences. We are currently positioned 5th, but trending down a bit.
It will be interesting to see how the scores evolve.</p> <p><strong>Learnings</strong></p> <p>I am writing this write-up while hearing the news that Austria (the country I live in) is removing many of the restrictions in place from May 1st onward. We have had one of the best curve flattenings globally, despite huge exponential growth in the beginning, and thankfully we quite overpredicted Austria from the start. The country implemented strong restrictions very early on, and I believe they have helped a ton. This is the thing we can learn from the most, I feel. Active cases (from worldometers) look like this for Austria:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F37166%2F198aec01a918b144c789672f5e97d980%2FSelection_041.png?generation=1588072057446301&amp;alt=media" alt=""></p> <p>I really hope that this trend will not reverse again; many other countries also look similarly promising, and we can only hope for a bright future.</p> <p>Predicting these pandemics is very hard, mostly due to lack of data, very large uncertainty, differences in reporting, differences in public behavior, and so forth. Still, I believe the models produced in these competitions can be helpful in the future, even if only through a better understanding of how hard the problem is. A lot of personal judgement flows into these models and predictions, making them also very subjective, and thus combining multiple models and opinions seems worthwhile to follow up on in the future.</p> <p>Thanks <a href="/rohanrao">@rohanrao</a> - totally agree that errors cancel each other out; getting the global estimates right is still based on a combination of luck, personal tuning, and modeling.</p> <p>Thanks for your contributions throughout the weeks; your model appears to be very strong and we borrowed parts of it for our blend :)</p> <p>Your week 4 model looks very good. It will very likely overtake us and some others soon.</p>
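<p>As an aside for readers, the week-1 growth model described above - a constant growth rate damped by a decay that itself accelerates over time - can be sketched as follows (parameter values are illustrative, not the team's):</p> <pre><code>def project_cases(last_value, days=30, growth=1.25, decay=0.98, decay_accel=0.995):
    """Project cumulative cases forward with a decaying growth rate."""
    preds, value, g, d = [], last_value, growth, decay
    for _ in range(days):
        value *= g              # apply the current growth rate
        g = 1 + (g - 1) * d     # shrink the excess growth toward zero...
        d *= decay_accel        # ...at an accelerating pace
        preds.append(value)
    return preds

print([round(v) for v in project_cases(1000)[:5]])
</code></pre> <p>Under this parameterization, lower <code>growth</code> and lower <code>decay</code>/<code>decay_accel</code> give the "optimistic" variant; the pessimistic one goes the other way.</p>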
COVID19 Global Forecasting (Week 5)
Some ML, A lot of judgement and luck
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Intro</h1> <p>The title may not seem very inspiring when you try to solve something so serious, but it is what worked semi-OK (for the first 2 weeks at least) in this competition series (given a large horizon). I think predictions of that length (e.g. 30 days ahead) require a lot of guessing and luck if you are not an expert epidemiologist (and quite possibly even if you are one).</p> <p>There are so many things which are unknown when you try to predict that far ahead, like: Will a vaccine come out? (OK, this one might not really be the case here - I think we are still early for that.) Or an efficient drug? Are people really following the social distancing rules, or are they going to? What rules will the governments put in place? How strict will the governments be at enforcing them? How will the infection rate differ based on how people socialise in different countries/areas? Etc.</p> <p>Needless to say, the data is not perfect.</p> <p>Short-term predictions (maybe within 3-5 days or so) could be modelled reasonably well. I hope you don't mind me <a href="https://www.h2o.ai/blog/modelling-currently-infected-cases-of-covid-19-using-h2o-driverless-ai/">sharing a blog</a> that summarises some ideas on how this could be done for short horizons on the same underlying data for active cases - many of the tricks mentioned there were applied here as well, particularly in the <strong>pre-processing of the data</strong> and <strong>post-processing</strong> of the predictions.</p> <p>I still believe that an ML practitioner in tandem with an experienced epidemiologist can work better than any individual on her/his own, particularly if you try to make predictions at scale for many countries/counties/cities at the same time.</p> <p>The inference kernel (with commented code for training) <a href="https://www.kaggle.com/kazanova/script-with-commented-train-counts-for-zeros?scriptVersionId=32084418">for week 4 is here</a></p> <p>I have the same version with <a href="https://www.kaggle.com/kazanova/fork-of-script-with-commented-train-counts-for-zer?scriptVersionId=32113592">training included here</a>:</p> <p>(It scores a tiny bit differently than my uploaded models, 0.02549 versus 0.02551, although I have used the same seeds - maybe because I ran on a Windows laptop, or a different lightgbm version - not sure.)</p> <h1>The ML part</h1> <p>The power growth model has worked quite well to map epidemics in the past, and I think many people have made good implementations here.</p> <p>I tried a simple implementation myself - the problem was that adjusting the growth was challenging without expert knowledge (at least for me) - so I tried to model growth specifically. Instead of predicting counts, or log counts, what I tried to predict was: "with what value do I need to multiply the previous cumulative value every day". For weeks 2 and 3, I built 60 separate models: one model for each +x day, for both confirmed cases and fatalities.
</p> <p>I used <a href="https://lightgbm.readthedocs.io/en/latest/">lightgbm</a> to build the models.</p> <p>The features were:</p> <ul> <li>How many days ago the cumulative count X was reached, where X for confirmed cases was <code>[1,5,10,20,50,100,250,500,1000]</code> and for fatalities <code>[1,2,5,10,20,50]</code></li> <li>Up to 10 lags of 3-day-window average rates for both confirmed cases and fatalities</li> <li>The actual confirmed and fatality raw counts for day 0</li> </ul> <p>The target is the average 3-day rate for each one of the +x days. So instead of modelling the rate <code>future1/current</code>, I took the average of <code>future1/current</code>, <code>future2/future1</code> and <code>future3/future2</code> when predicting the target for day+1. The raw rate has many spikes, so the average smooths the target and (I think) controls for overfitting.</p> <p>For week 2, I only validated through observation - no cv. For week 3 I validated based on the week 2 results, and for week 4 on both weeks 2 and 3. A <strong>fundamental FLAW</strong> of both the week 2 and week 3 models is that they ignore the 0s. The model requires an initial value of &gt;=1 to apply the growth, so I expect those models to do much worse down the stretch (unless I am lucky - both me, but most importantly the world, and we don't have new deaths). The rules I used to initialise values for the zero cases were poor (to say the least).</p> <p>In week 4, I added a model that tries to predict the 0 cases by modelling the difference in log counts between days. The model setup and features are the same; the only difference is that instead of only rates, the model also uses differences of log counts for the same windows. In that model I have also added USA county-level data - I saw a tiny improvement from adding that (probably because it is heavily weighted towards 0, as the cumulative values are generally smaller). Other than that, I found no other external resource useful (for longer-term predictions). I tried adding geographical data and recoveries with no success.</p> <p>Sadly these models are not very powerful - but maybe not too bad either. I would have been much further down without <strong>post-processing</strong>, and I would be drifting down very heavily.</p> <p>The problem with this approach is that it completely ignores the metric, and a small error in the rate prediction can make a series go berserk. I saw quite a few series that, for example, started with 20 confirmed cases and my model was predicting they would reach more than 30k (which was obviously a bit too dangerous to accept). Another problem is that the models are heavily influenced by what happened in China - it is the area that has the most future points available (to compute the target). In China the outbreak was handled very differently from other areas, and generally the models expected a much quicker plateauing (and different initial growth rates) than what is actually happening in many other parts of the world. This is where <strong>post-processing</strong> helped.</p> <h1>Post Processing</h1> <p><strong>Before I elaborate on this - I heavily applaud and congratulate people that have modelled this without much judgement or post-processing. They have definitely added much more than this solution ever will</strong>. Without post-processing, I found it very challenging to get a highly competitive score (for the first 2 weeks at least).</p> <p>Back to the post-processing: this is the most important part of my solution.
The ML model gave a curve to use as a basis for making decisions. The model was not too bad on cases with stable growth rates and counts of over 5,000 (approx). I created certain rules after observing most of the series individually. <strong>All the decisions I made in the post-processing rules were based on judgement</strong> - nothing coded or reusable as logic safely without observing the series. To manage time, I ignored cases with larger counts (which is very bad from an epidemiological point of view) given RMSLE (a metric I generally don't like for this problem).</p> <p>The reasoning behind the rules:</p> <ul> <li><p>"first_10_then_1_": This rule was mainly used in Chinese provinces for cases that still had some minor increments. It applies the rule (e.g. decay) in the first 10 days and then stops the growth. I anticipated that even if there are some new cases, China is now very efficient at containing the outbreak with the surveillance measures and the health app system.</p></li> <li><p>"_last_12_linear_inter_": I am an optimistic person. If for some reason the model was not picking up that the rate at some point needs to start decreasing, I forced a linear decay in the last 12 days. I have no idea how this will play out, because we have not reached that point yet in any of the competitions. For example, if the rate at day 19 is 1.12, on day 20 it will become 1.11, then 1.10, etc. I do not have a great scientific explanation for why I do this, other than that I compared some of the curves against others from the Chinese provinces at similar count levels and saw that 18 days was around the average period for which rates were still going up. Maybe not a great assumption given it was based on a single geographical area - but this is all I had at that point.</p></li> <li><p>"decay_x": This rule is the most common one. I apply various decays to the predicted rates. A lot of it is based on expectations and on comparing series with what happened to other countries/states when the counts were at similar levels, preferably close in geography. I also used my knowledge of what rules/measures had been applied to the countries when making these judgements.</p></li> <li><p>"combination of all the above": you can apply any decay on the first 2 rules.</p></li> <li><p>"accelerate": Being an optimistic person, I have generally avoided using this. It is the opposite of decay.</p></li> <li><p>"stay_same": rate=1 - mostly applied to Chinese provinces, low-count countries, Diamond Princess.</p></li> <li><p>"normal": Don't change the rate - trust the model as is.</p></li> <li><p>"fatality": Rule applies only to fatalities - it excludes confirmed. For the week 2 model, I completely ignored fatalities when making judgements. From week 3 onward, I started making special judgements for fatalities.</p></li> </ul> <p>Now that I have a sample of these decisions, when the competition is over I am hoping to find a pattern that can be applied without judgement, e.g. evaluate the judgements and find what the correct decay would have been.</p> <p>For the last week, I created a simple file that has the predictions of the new model without most of the rules applied versus the predictions of the previous week's model, and saw how over/under I was. I used this to aid my decisions.
I have a screenshot attached.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F111640%2F27591e50f9509eba9c71e2813c745fad%2FComparing_predictions.png?generation=1587056981562287&amp;alt=media" alt=""></p> <h1>Luck/guessing</h1> <p>I have taken many guesses in this competition. For example:</p> <ul> <li>China won't generally have new fatalities or many new confirmed cases, especially in the smaller provinces. Same with Diamond Princess.</li> <li>Tibet will remain with that single confirmed case. If this goes to 1000, maybe I am out of the competition.</li> <li>For weeks 2 and 3: geos with 0 fatalities will generally remain at 0 unless confirmed cases exceed 400.</li> <li>News and dashboards for recent information and trends (from what people have posted on the forums).</li> <li>Countries that enter later are more prepared/cautious about the virus and are better equipped to deal with it, since they have an idea of what measures might be more effective and/or how serious this can be. I have applied stronger decays to countries that enter now.</li> </ul> <p>Happy to answer any questions.</p> <p>The ML part took me more time for week 2 - maybe 2 days, as I had to build the setup. Post-processing always started the moment the new data came in, so around 20 hours.</p> <p>For weeks 3 and 4, ML was 3 hours each (because it is the same approach) and post-processing around 20 hours each.</p>
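<p>To make the rate formulation and the decay post-processing concrete, here is a hedged sketch (illustrative numbers and names, not the author's code). Predicted daily rates are applied multiplicatively to the last cumulative value, and a "decay_x"-style rule shrinks each rate's excess over 1.0:</p> <pre><code>def apply_rates(last_value, rates, decay=1.0):
    """Roll cumulative predictions forward from predicted daily rates.

    rates: predicted multipliers for day+1, day+2, ...
    decay &lt; 1 pulls each rate toward 1.0, mimicking the 'decay_x' rule.
    """
    preds, value = [], last_value
    for i, r in enumerate(rates):
        r = 1 + (r - 1) * decay ** (i + 1)   # shrink the growth over the horizon
        value *= r
        preds.append(value)
    return preds

rates = [1.12, 1.11, 1.10, 1.09, 1.08]       # hypothetical model outputs
print([round(v) for v in apply_rates(1000, rates, decay=0.9)])
</code></pre>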
COVID19 Global Forecasting (Week 5)
W5 LB scorecard & analysis update 0611
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Update 0607 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fdf26c0fdb482000dd5927d4ffe96d5d4%2F67_1.jpg?generation=1591570668725292&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fd155f25dd468ef67c4489591bf91beb5%2F67_2.png?generation=1591570668631761&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F1f356d83c55a3d4ccb66907d48f6623a%2F67_3.png?generation=1591570668660355&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb6b2f6bf3aab3372dfb97386bfdcd787%2Fnedladdning%20(38" alt="">.png?generation=1591570712872617&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7d2f972ae3dc022c7e5ccbeae2156ac3%2Fnedladdning%20(39" alt="">.png?generation=1591570711698441&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2debff51b0e8c6dae0b9f19da98a8446%2Fnedladdning%20(40" alt="">.png?generation=1591570711094114&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fac2a4e9edafdda5b2f72bc01e8710ab8%2Fnedladdning%20(42" alt="">.png?generation=1591570753800488&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4a4a832de3eec4b56651ffdb48fb3d93%2Fnedladdning%20(41" alt="">.png?generation=1591570856410573&amp;alt=media)</p> <p>Update 0608 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F34821042ba2e7cd6efd4542d22533aa9%2F68_1.jpg?generation=1591659224336278&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F745a16dd6d90eb7a7c8ac9a1c24867df%2F68_2.png?generation=1591659222389372&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F14feaf2fcc8a2a77a199429d3cf1d32f%2F68_3.png?generation=1591659225481371&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4af6454cd4c37c0e6fbb638c7e391f01%2Fnedladdning%20(43" alt="">.png?generation=1591659267032290&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fcf34da06fd548119af33c40e097f5f4e%2Fnedladdning%20(44" alt="">.png?generation=1591659265455708&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0ad7217a95cf89f2ce2672aaaf4bed85%2Fnedladdning%20(45" alt="">.png?generation=1591659266694443&amp;alt=media) Score and LB Top 20 <img 
src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb3e360e168aa87840c7065248c4a333c%2Fnedladdning%20(46" alt="">.png?generation=1591659298022027&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc76038bf514ef68e052639fce29e92ff%2Fnedladdning%20(47" alt="">.png?generation=1591659322345161&amp;alt=media)</p> <p>Update 0606 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6c58dc4c645412d13ea6d51808c185ed%2F66_1.jpg?generation=1591463701269878&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ff8e72f7edfe82bc44c61dc1ae178d75f%2F66_2.png?generation=1591463701762333&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6067ea90193b5eda41cb5d012e9acb6a%2F66_3.png?generation=1591463701629022&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F177d8c954a3b155f42b4bcb7e8eb7051%2Fnedladdning%20(36" alt="">.png?generation=1591464311422864&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0827e0e44d3d015a9a2b3d7cdb4e29e8%2Fnedladdning%20(33" alt="">.png?generation=1591463760243147&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fca689f7b29f962def3113ce32cbc3239%2Fnedladdning%20(34" alt="">.png?generation=1591463761005217&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F60949a47efc13fe2b800a8fcf2b8c42b%2Fnedladdning%20(35" alt="">.png?generation=1591463808578363&amp;alt=media)</p> <p>That was the final and last update! Well done everyone with an important work, especially <a href="/gaborfodor">@gaborfodor</a> who created the #1 model! :)</p> <p>Now we hope that this horrible virus will disappear quickly and cures are created immediately. 
The thoughts go to everyone affected.</p> <p>Update 0611 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F75df848bc38c12d1f15abf5611f24ef2%2F611_1.jpg?generation=1591897088370287&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fbed4633a3ee32c0d0f3a0a1afa5361e5%2F611_2.png?generation=1591897088353323&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ffd038e65ee269059e85a355a87e34299%2F611_3.png?generation=1591897089015973&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F84fef648b05fc87ccd08b5742a13d3b4%2Fnedladdning%20(58" alt="">.png?generation=1591897120991013&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F1a2711bd38a59c03cf174cff6cf74eff%2Fnedladdning%20(59" alt="">.png?generation=1591897121767768&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F97c5719431646ae069ebed4c3d723af3%2Fnedladdning%20(60" alt="">.png?generation=1591897120972891&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2b40b725fa431241d6d48ae896166e73%2Fnedladdning%20(61" alt="">.png?generation=1591897121662173&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fecc8bc90c616abc489ec78cda0366a8c%2Fnedladdning%20(62" alt="">.png?generation=1591897122409789&amp;alt=media)</p> <p>Update 0609 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5a3e399db7e1ab1447934c027601894a%2F69_1.jpg?generation=1591728321567328&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2b0174581c00c4f88f7dbf61f18c98a7%2F69_2.png?generation=1591728321325162&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3ee3bbebc3d46c3d9c9e1a4f89f89eb3%2F69_3.png?generation=1591728321461290&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F336d2250a6360a92cb61466bd8915c4b%2Fnedladdning%20(48" alt="">.png?generation=1591728373043015&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5c0edf5049be5fc46954021e4c71929d%2Fnedladdning%20(49" alt="">.png?generation=1591728372916590&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3260763057608b0855a3daf8a48dfe40%2Fnedladdning%20(50" alt="">.png?generation=1591728375088400&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4d522b0751ba85ed33374442c4bfeb4d%2Fnedladdning%20(51" alt="">.png?generation=1591728457203778&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb21099acc2ca544a499daee80ad26c37%2Fnedladdning%20(52" 
alt="">.png?generation=1591728484912030&amp;alt=media)</p> <p>Update 0610 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F90f4181db0303f0760564ae71676112b%2F610_1.jpg?generation=1591827478438941&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F82a6fc5e7a4e8bf4114eff867f5414ae%2F610_2.png?generation=1591827479359922&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7e140636af35c1d5c05a7a61cefc5973%2F610_3.png?generation=1591827478155263&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ff1789e7134aae1cbb9abe170a3fc8db8%2Fnedladdning%20(53" alt="">.png?generation=1591827776863571&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fde8e074b4619ade8808ca0141146672a%2Fnedladdning%20(54" alt="">.png?generation=1591827775838966&amp;alt=media) <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F55330ebae4922f6474e331fc02ba85f5%2Fnedladdning%20(55" alt="">.png?generation=1591827775822026&amp;alt=media) Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2c8d60168bbd0123b2cb102acd3e7143%2Fnedladdning%20(56" alt="">.png?generation=1591827817311542&amp;alt=media) Score distribution all teams from start <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7e1fd86f3d79a43594c20cf42013b7da%2Fnedladdning%20(57" alt="">.png?generation=1591827835180455&amp;alt=media)</p> <p>Update 0520</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F36db30c01bcac0370a1cc0b7c568d30f%2F0520_1.jpg?generation=1589995497971429&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F20a34f8f6ff63895934b604d06e9a1e0%2F0520_2.png?generation=1589995511511685&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7d442f04f7c11e88698bd1443704dd8a%2F0520_3.png?generation=1589995525062115&amp;alt=media" alt=""></p> <p>Additional scorecard - Best LB change Top 31-88 - 6 day period 6d LB change Top 31-60 &amp; Top 31-60 6d ave. % score diff. 
Prediction Top20, 1, 3 &amp; 5 day ahead predictions with calc TodayScore+(((TodayScore-YesterdayScore)+((TodayScore-prev5dscore)/5))/2)PredDayAhead</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F58f8f94b54990c2dfcf9178a3ca30387%2F0520_4.jpg?generation=1589995560878618&amp;alt=media" alt=""></p> <p>Additional scorecard II - Visualizing the score gap and Dist to the top gap</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fe7c41c73935bbd4925db7332c31ab02d%2Fnedladdning%20(8" alt="">.png?generation=1589995597625688&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F8bd0a2911c913c1d5abbdc71d23010ce%2Fnedladdning%20(9" alt="">.png?generation=1589995611904992&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0424fcde0081fb49ff8ada09ae21235c%2Fnedladdning%20(10" alt="">.png?generation=1589995628159425&amp;alt=media)</p> <p>-Score and LB Top 15 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc41659ad044d68ef636d3cfddeb13113%2Fnedladdning%20(11" alt="">.png?generation=1589995655144446&amp;alt=media)</p> <p>Thanks <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>! Hope we will see lots of teams again, more teams, better development, more help to those in need.</p> <p>I see your point, I think you saw mine, ment if we will see a W6 and I hope we see lots of teams again to the model dev, important contribution</p> <p>Important and important just a positive feedback on your models performance, good work with them :)</p> <p>Update 0521</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2d650ca2cf5bec29838c1f1f965b531b%2F0521_1.jpg?generation=1590070623274099&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3dccb1fbaa18ce121ce37be03dd592d5%2F0521_2.png?generation=1590070637992639&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F01ae3bf9ef11a5a2add7fc8373700e10%2F0521_3.png?generation=1590070650792254&amp;alt=media" alt=""></p> <p>Additional scorecard - Best LB change Top 31-88 - 7 day period 7d LB change Top 31-60 &amp; Top 31-60 7d ave. % score diff. 
Prediction Top20, 1, 3 &amp; 5 day ahead predictions with calc TodayScore+(((TodayScore-YesterdayScore)+((TodayScore-prev5dscore)/5))/2)PredDayAhead</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0573f5bda00abe1e68c8c281ef38473e%2F0521_4.jpg?generation=1590070706010424&amp;alt=media" alt=""></p> <p>Additional charts - Visualizing the score gap and Dist to the top gap</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F206153adbf2895660c18a877eae0f4e8%2Fnedladdning%20(12" alt="">.png?generation=1590071096866296&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F8e03eefa83fa532857479a0de13e7700%2Fnedladdning%20(13" alt="">.png?generation=1590071114157418&amp;alt=media)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F459ad266fb81e0c80e5cc38313284f34%2Fnedladdning%20(14" alt="">.png?generation=1590071130310648&amp;alt=media)</p> <p>-Score and LB Top 15 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F66485649edddddbfb5f40d18b554db76%2Fnedladdning%20(15" alt="">.png?generation=1590071156855760&amp;alt=media)</p> <p>Thanks! :) But amazing how well the top teams are performing, but not quite surprising david1013 sasrdw aerdem4 philippsinger dott1718 kazanova christofhenkel , all the top grandmaster? Besides cpmpml whom we miss.</p> <p>After 4 days we can see that it's really even between the models in top 5 and in top 10 @bernardklinger has a good momentum followed by <a href="/neg0000273">@neg0000273</a>, great work!</p> <p>Yes it's hard to predict and follow the short trends in top/bottoms, average works better.</p> <p>Update 0518</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5579f0768153bfb782ee8e150e918305%2F0518_1.jpg?generation=1589839737617035&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F819b1eee1bb1f898c9432806ecc89472%2F0518_2.png?generation=1589839517348798&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F379443aaac68afc27e7eab6289864641%2F0518_3.png?generation=1589839534275443&amp;alt=media" alt=""></p> <p>Additional scorecard - Best LB change Top 31-88 - 4 day period 4d LB change Top 31-60 &amp; Top 31-60 4d ave. % score diff. Prediction Top20, 1, 3 &amp; 5 day ahead predictions with calc TodayScore+(((TodayScore-YesterdayScore)+((TodayScore-prev4dscore)/4))/2)PredDayAhead</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fbb086fdeeb238d762eedff2fe51ca764%2F0518_4.jpg?generation=1589839561810161&amp;alt=media" alt=""></p> <p>Amazing how the top 4 models are in sync with each other, and almost no difference in scores between the days, good work! 
Great work again with your model <a href="/moradnejad">@moradnejad</a>, this week maybe even better :)</p> <p>Look at the "Score gap top 50" line chart above; some models in the bottom 50 reduce their values quickly, so it's worth studying what algos they use.</p> <p>Update 0519</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F118286d62b966144e6c1df1f98f810e8%2F0519_1.jpg?generation=1589904310080109&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb3f60794d458a4b0cd80652d1799ec83%2F0519_2.png?generation=1589904323455958&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fe243a88ecab9f89db8cff60221bb56de%2F0519_3.png?generation=1589904336983765&amp;alt=media" alt=""></p>
<p>Additional scorecard - Best LB change Top 31-88 - 5 day period: 5d LB change Top 31-60 &amp; Top 31-60 5d ave. % score diff. Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dscore)/5))/2) * PredDayAhead</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F148b0f6981cc59a521521b2431d4c9d7%2F0519_4.jpg?generation=1589904361436285&amp;alt=media" alt=""></p>
<p>Additional scorecard II - Visualizing the score gap and Dist to the top gap <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3ff263ff309877c9c214dd5a9034d144%2Fnedladdning%20(4).png?generation=1589904552714636&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F732af8dd585c850188aadf9bc58978fe%2Fnedladdning%20(5).png?generation=1589904570030207&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F33e29d5197cb66e10f30e51daa8ae6b5%2Fnedladdning%20(7).png?generation=1589904682286367&amp;alt=media" alt=""></p>
<p>-Score and LB Top 15 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Faf355f709fecd0527301de7ec8179573%2Fnedladdning%20(6).png?generation=1589904586617118&amp;alt=media" alt=""></p>
<p>Additional charts II - Score and LB Top 15</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fe349fa28772dd401ab51b0ac75f128f2%2Fnedladdning%20(3).png?generation=1589897899410364&amp;alt=media" alt=""></p>
<p>Additional charts</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fd37afe851de30b43eb4dfbaf4b0b087f%2Fnedladdning%20(2).png?generation=1589884516042426&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ff51c54320d4856a8ed1d9c5a9d860062%2Fnedladdning%20(1).png?generation=1589884536367951&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F58d15f1b8b01c51661303350b670ee14%2Fnedladdning.png?generation=1589884548359851&amp;alt=media" alt=""></p>
<p>Update 0523</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F70663415d5c9e745e1e2b01aa096747b%2F0523_1.jpg?generation=1590258031451646&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F1f4f28797f446e4dcb78533993cc62aa%2F0523_2.png?generation=1590258044456014&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb8e78ed837705257d10350ed8b967b2a%2F0523_3.png?generation=1590258057651853&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores and Dist to the top</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F07addb5f2d6766bc281c4da937d0a392%2Fnedladdning%20(20).png?generation=1590258089536013&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0516ca9e944d62e5cfae9c9cf9acef91%2Fnedladdning%20(21).png?generation=1590258106365435&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fe4ce538788f6df0068d20d80f80fa105%2Fnedladdning%20(22).png?generation=1590258120804038&amp;alt=media" alt=""></p>
<p>-Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F43e8bda7380835382a3071b3557fc637%2Fnedladdning%20(23).png?generation=1590258145933555&amp;alt=media" alt=""></p>
<p>Update 0522 - reduced number of charts due to 38 teams missing in the top 50-88 in yesterday's LB</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4db6a653f15210560984daa3fe8c1c33%2F0522_1.jpg?generation=1590232473516496&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F478f67008cdb17bd0cdb8a8258f6eef9%2F0522_2.png?generation=1590232485337598&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3b696cad9cfe003083aebe5d34bc18fb%2F0522_3.png?generation=1590232499086793&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the score gap and Dist to the top gap <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7b7a36d5c3dd1a215a5132eedb37ece0%2Fnedladdning%20(16).png?generation=1590232636721717&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F647e47f82904e4ae044e708db6e3fef0%2Fnedladdning%20(17).png?generation=1590232650764779&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7ad7e4259cb24a50b13376243f535380%2Fnedladdning%20(18).png?generation=1590232664751648&amp;alt=media" alt=""></p>
<p>-Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6b9f19dd5f7355125891ef5633f577e8%2Fnedladdning%20(19).png?generation=1590232697768089&amp;alt=media" alt=""></p>
<p>Update 0524</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5d210864f615f0f78a71420919259138%2F0524_1.jpg?generation=1590347478315723&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fca59eda95e9732219a1fff4dc3e521e0%2F0524_2.png?generation=1590347493613671&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Feaea8cc5968240b9077c65cc2db6a44e%2F0524_3.png?generation=1590347507583192&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores and Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc48a947190d4143c770aaa49c2f2b1e8%2Fnedladdning%20(24).png?generation=1590347536456116&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4f19f87cc3280292d7fed600481032a6%2Fnedladdning%20(25).png?generation=1590347550165971&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc88ce4f221097c60ba15d509ca30ccfb%2Fnedladdning%20(26).png?generation=1590347564927746&amp;alt=media" alt=""></p>
<p>-Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4719a9d27c85465acc8e87b25ebee6e1%2Fnedladdning%20(27).png?generation=1590347632886065&amp;alt=media" alt=""></p>
<p>Additional scorecard - Best LB change Top 31-88 - 10 day period: 10d LB change Top 31-60 &amp; Top 31-60 10d ave. % score diff. Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dscore)/5))/2) * PredDayAhead</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6283f594ea961c992ab40548ac618db3%2F0524_4.jpg?generation=1590347693590632&amp;alt=media" alt=""></p>
<p>There are 38 teams missing in the top 50-88 in today's LB, so I'll wait with the update until I know the status.</p>
<p><a href="/rohanrao">@rohanrao</a> thanks, I'll continue as long as there is interest in it :) No, not many from what I can see; the larger teams look to ensemble each other's models. Otherwise mostly weighted-average ensembles of the same model / ML.</p>
<p>Update 0516</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fab3b45200263603c3c4ad2d191f2f855%2F0516_1.jpg?generation=1589645588605428&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F49b76dd42e1c6df23651ed445fb32e1d%2F0516_2.jpg?generation=1589645604774306&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F12a893513ea8d3160868519547bc7f9c%2F0516_3.png?generation=1589645625336555&amp;alt=media" alt=""></p>
<p>Additional scorecard - Best LB change Top 31-88 - 2 day period: 2d LB change Top 31-60 &amp; Top 31-60 2d ave. % score diff.
Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc ((TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev2dscore)/2))/2) * PredDayAhead) + TodayScore * (((1dAveScoreDiff + 2dAveScoreDiff)/2) + 1)^PredDayAhead)/2</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F754e2b51dc9ee4d7cb36a016f29b4b8b%2F0516_4.jpg?generation=1589645793785322&amp;alt=media" alt=""></p>
<p>My model seems to have some momentum :) also <a href="/bernhardklinger">@bernhardklinger</a> and <a href="/rohanrao">@rohanrao</a> after 2 days.</p>
<p><a href="/titericz">@titericz</a> perfect, thanks. I wasn't 100% sure about your notebooks; I just saw a private dataset at the beginning and imports of <code>from sklearn.ensemble import GradientBoostingRegressor</code>, <code>from lightgbm import LGBMRegressor</code> and <code>import xgboost as xgb</code>, so I thought the dataset could have held pretrained models. You will see the update tomorrow :)</p>
<p>Update 0517</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0bddc94ad756b73102769c6b31fe0bac%2F0517_1.jpg?generation=1589743688557080&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F02c5d9f6fd118a7a6522c4a7520c3809%2F0517_2.png?generation=1589743701436371&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc5c62bf4b4bb151a381ffc2a41d937e5%2F0517_3.png?generation=1589743715975040&amp;alt=media" alt=""></p>
<p>Additional scorecard - Best LB change Top 31-88 - 3 day period: 3d LB change Top 31-60 &amp; Top 31-60 3d ave. % score diff. Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev3dscore)/3))/2) * PredDayAhead</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F659473220e376d70cb6c493f256025e7%2F0517_4.jpg?generation=1589743750645926&amp;alt=media" alt=""></p>
<p>Thanks :) I think it's a good way to keep track of all movements between the models from top to bottom and identify patterns. Each team can also compare more easily and do their own analysis based on the idea/architecture of the model.</p>
<p>Please upvote if you think this is useful to follow, so I know whether it's worth continuing the updates without creating lots of spam :) And for those of you who click downvote: this is not a state-of-the-art scorecard, so please contribute a new one; you can probably do this better than me, I'm just making a good try :)</p>
<p>An idea for the scorecard is also to get more use out of the analysis and spot patterns that help create better models and predictions for those in need, e.g. identify models that are better in the short, medium and long term, then use and combine them accordingly.<br> Explanation of the columns:<br> Score diff - the % difference between 2 LB scores (1d = 1 day difference)<br> Dist. to the top - distance to the leader.<br> Models - which techniques (LGB, math, Fastai etc.) are used in the models in the top 20; this can be interesting to have and follow. Because a team has two submissions, I just add what I see and complement the team's total as I go.<br> Diff from last day - the change since yesterday in the distance to the leader: approaching the leader or losing momentum.<br> KPI "1d(day) best mover…" - the total move (score difference) the group made against the leader and each individual contribution. It tracks the individual movement within the group against the leader, to see performance in that timeline regardless of the group's speed as a whole.<br> Prediction Top20, 1, 3 &amp; 5 day ahead predictions with calc - <br> TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dscore)/5))/2) * PredDayAhead</p>
<p>One can calculate this comparison in many different ways; this was a quick way to do it.</p>
<p>Also check the notebooks below for more analysis.<br> <a href="https://www.kaggle.com/gaborfodor/w5-top-submissions" target="_blank">https://www.kaggle.com/gaborfodor/w5-top-submissions</a> <br> <a href="https://www.kaggle.com/david1013/covid-19-week-5-daily-predictions-top-teams/" target="_blank">https://www.kaggle.com/david1013/covid-19-week-5-daily-predictions-top-teams/</a></p>
<p>Thanks! :) I'll take that into account. I think I will try a mix of the previous scorecard's pred calc, with some modifications, for this scorecard's pred columns for 1, 3 and 5 days ahead: ((TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dscore)/5))/2) * PredDayAhead) + TodayScore * (((1dAveScoreDiff + 5dAveScoreDiff)/2) + 1)^PredDayAhead)/2</p>
<p>I've added which techniques (LGB, math, Fastai etc.) are used in the models in the top 20; this can be interesting to have and follow. Because a team has two submissions, I just add what I see and complement the team's total as I go.</p>
<p>Update 0515</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F8179a7818c6f91444794c2a3e5620aa5%2F0515_1.jpg?generation=1589567937497147&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fd62e50280e4710d3b0ec4eeab93a51ba%2F0515_2.jpg?generation=1589567953423085&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc93c0bc4c271c858627868393692f296%2F0515_3.png?generation=1589568365057861&amp;alt=media" alt=""></p>
<p>Additional scorecard - Best LB change Top 31-88 - 1 day period: 22d LB change Top 31-60 &amp; Top 31-60 1d ave. % score diff.
Prediction Top20 1 day ahead predictions, use actual score value change added to the new #, times days.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2f0524a6de944d2350d4232c016bc3e1%2F0515_4.jpg?generation=1589585947616265&amp;alt=media" alt=""></p>
<p>Update 0601</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fdee4b43ade4c2e703013c8938a00115e%2F601_1.jpg?generation=1591034610405576&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fecbe2a91b9dc71dfaec5485d668ce435%2F601_2.png?generation=1591034610006555&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5bdff27d9935e36ef9684c2d1b75798b%2F601_3.png?generation=1591034609762501&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F03f95cf4abf4987b76993a42850dfa16%2Fnedladdning%20(56).png?generation=1591034761254106&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ffd73fac0475ac14a12abebb7b30adba1%2Fnedladdning%20(57).png?generation=1591034757575379&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F8cd8159b59ee4656e7953a2261cb1d47%2Fnedladdning%20(58).png?generation=1591034759622582&amp;alt=media" alt=""> Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5dd98caecaa12076d643105bc9bebc93%2Fnedladdning%20(59).png?generation=1591034796853624&amp;alt=media" alt=""></p>
<p>Update 0530</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F45820d69a80b74874b7bde712ad22fb4%2F530_1.jpg?generation=1590909265209926&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F18dd852286233b440e9ee67802f35203%2F530_2.png?generation=1590909264637739&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F30015c2a1ee6a855aa5ec744086ef58a%2F530_3.png?generation=1590909265930537&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F594b0b03dd2997c5afa4b3307b7fc879%2Fnedladdning%20(48).png?generation=1590909315821566&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3c9e424c9177a57f554c449a785fb46c%2Fnedladdning%20(49).png?generation=1590909313209295&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fddbf4fe2d979887be127bad03ed3d775%2Fnedladdning%20(50).png?generation=1590909315421603&amp;alt=media" alt=""> Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Feb7ae4d8e7efafa958c96809e6d168d4%2Fnedladdning%20(51).png?generation=1590909313769494&amp;alt=media" alt=""></p>
<p>Update 0531</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6e52957accb0f4ddaa5ec9100ceea89c%2F531_1.jpg?generation=1590953613062425&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ffe3619a468a951380db54a2f615cecb5%2F531_2.png?generation=1590953239744354&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ff63dd3c00369ed9f027d05a9351adcea%2F531_3.png?generation=1590953239952443&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F41a075ba364eed2de54e47a850424462%2Fnedladdning%20(52).png?generation=1590953275475090&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fa780a68fc24c39568afaeb79e6caa339%2Fnedladdning%20(53).png?generation=1590953276248377&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3f12cd169eac124b422343ad8ceda5a5%2Fnedladdning%20(54).png?generation=1590953275777142&amp;alt=media" alt=""> Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc1fae9364068c8a4bfffaaf15ee4b5c0%2Fnedladdning%20(55).png?generation=1590953275909618&amp;alt=media" alt=""></p>
<p>Update 0604 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F1988c0cd57f941b457a84e3c44047d9e%2F64_1.jpg?generation=1591279303480293&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6fb57facad69fd0f0b7043e64f90d721%2F64_2.png?generation=1591279303458333&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F340a7ce961b9f45483a0581c6b55106f%2F64_3.png?generation=1591279303204760&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fdd8331334acbc5fb16584083f1a0cca0%2Fnedladdning%20(68).png?generation=1591279348929407&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F80ef27c19762cd0523781b7e98f3dad5%2Fnedladdning%20(69).png?generation=1591279349624742&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F57d81957779246910927b819542d2b1f%2Fnedladdning%20(70).png?generation=1591279349144642&amp;alt=media" alt=""> Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5415dda0d18772ce2cffe64774c1285e%2Fnedladdning%20(71).png?generation=1591279349469668&amp;alt=media" alt=""></p>
<p>Update 0605</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6aba1ceae04d52c99c1cd7a3e57df75e%2F65_1.jpg?generation=1591377980446317&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F06cc431c861c4c2d914f86efc8e2b9fa%2F65_2.png?generation=1591377982008419&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2e33fea0555026d5f6cf3ad835387c12%2F65_3.png?generation=1591377980157752&amp;alt=media" alt=""> Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2f80bbe7de5adbc53ab5a6c3f966ac4d%2Fnedladdning%20(28).png?generation=1591378269792155&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7e66b036de2c9334679593cd30d6db15%2Fnedladdning%20(29).png?generation=1591378269265438&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2dc09c939fac727d1b55bd0ea5f85869%2Fnedladdning%20(30).png?generation=1591378271765835&amp;alt=media" alt=""> Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F490eb9f8dd0c448b732b66ab6386224d%2Fnedladdning%20(31).png?generation=1591378307733273&amp;alt=media" alt=""></p>
<p>Update 0602 Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F706b6d0ec6a5cefde8c2b98066a26b36%2Fnedladdning%20(60).png?generation=1591111418284109&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F86166600d9f07d8c1eaf143f5e078a92%2Fnedladdning%20(61).png?generation=1591111418692065&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fd8def157fddb04acd3d7cb6382e9962e%2Fnedladdning%20(62).png?generation=1591111420200103&amp;alt=media" alt=""> Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F649ecebb0da80d97c1b42d28bda50927%2Fnedladdning%20(63).png?generation=1591111418137060&amp;alt=media" alt=""></p>
<p>It will come soon :)</p>
<p>Update 0603 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb41aaef73ec804bb8197169350c16915%2F63_1.jpg?generation=1591209853544136&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2954405c8f9b7ef3464b3b7e44ca0695%2F63_2.png?generation=1591209855794134&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F9bb9090e58f50b7788950adc844cef63%2F63_3.png?generation=1591209855862073&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fad3138a04e977fe99ad25379ef249896%2Fnedladdning%20(64).png?generation=1591203009300088&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F42bb6a4189893d5ad5b977ec1701b0ab%2Fnedladdning%20(65).png?generation=1591203009726542&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F45a97a91aeab667105b161aa9f35a135%2Fnedladdning%20(66).png?generation=1591203009074868&amp;alt=media" alt=""> Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F29fbececc063d0f476eba3647fcf1a89%2Fnedladdning%20(67).png?generation=1591203009488129&amp;alt=media" alt=""></p>
<p>Update 0526</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F06c8ae039d38aa4cffd386b14f99f7e9%2F526_1.jpg?generation=1590511148341097&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F725031f2571db2514c7d7992431689c4%2F526_2.png?generation=1590511163926317&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F55a09fd12d3df4ad8d0946220ebe58e0%2F526_3.png?generation=1590511177933053&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F53de3360b1e35096e72da6cd054013be%2Fnedladdning%20(32).png?generation=1590511320541250&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F21c1477eee160798f0c89c1dd8381706%2Fnedladdning%20(33).png?generation=1590511337419125&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F04b7dd2be91f68e9287791662998bfa6%2Fnedladdning%20(34).png?generation=1590511352099860&amp;alt=media" alt=""></p>
<p>Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F57db7e9182629ab82ef9c07ba1c521a9%2Fnedladdning%20(35).png?generation=1590511366180353&amp;alt=media" alt=""></p>
<p>Update 0525</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F9450b320f5fd2d9684049a982618eb96%2F525_1.jpg?generation=1590435512251646&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fd1aa6a1cafa6dded7689a0c16764e412%2F525_2.png?generation=1590435528126325&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4abe893dd10e65d5a1c9ab4d47cc706d%2F525_3.png?generation=1590435544023317&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores and Dist to the top</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F7cdbb240ce3286e852b93cd378604842%2Fnedladdning%20(28).png?generation=1590435822310254&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F6e9f2102386187e8832b9b052c92699b%2Fnedladdning%20(29).png?generation=1590435836052528&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fa8844d9191edd9ca89ab525add2ab675%2Fnedladdning%20(30).png?generation=1590435854255671&amp;alt=media" alt=""></p>
<p>-Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fa8e65528b3e58ea1fe095da58d1674ab%2Fnedladdning%20(31).png?generation=1590435898607450&amp;alt=media" alt=""></p>
<p>Additional scorecard - Best LB change Top 31-88 - 11 day period: 11d LB change Top 31-60 &amp; Top 31-60 11d ave. % score diff.
Prediction Top20: 1, 3 &amp; 5 day ahead predictions with calc TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dscore)/5))/2) * PredDayAhead</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F000381621c9e42878d95781dc037e350%2F525_4.jpg?generation=1590435987082963&amp;alt=media" alt=""></p>
<p>Update 0529</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F52a65999d593e607ab170b31783e4991%2F529_1.jpg?generation=1590774426604795&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F28e9c77c1ee2ebf19e442e6b2d91c973%2F529_3.png?generation=1590774427412137&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fad7904737e713dd759e66501d17b8241%2F529_2.png?generation=1590774427355772&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F1b332a933b7e5de1d852a0c6b0d6b7c4%2Fnedladdning%20(44).png?generation=1590774460154868&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fb10adf26956acd06533b8f61ebc2273a%2Fnedladdning%20(45).png?generation=1590774460125443&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fbb85739ac3accfad797dc1492b432c81%2Fnedladdning%20(46).png?generation=1590774460457662&amp;alt=media" alt=""> Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F883af179f4ebfd4e02570a122c401ad7%2Fnedladdning%20(47).png?generation=1590774460161138&amp;alt=media" alt=""></p>
<p>Halfway into w5 we can see some patterns. I think my model could predict with more accuracy with an NN in the first days. The model could also serve as one of several ensemble models, to balance the predicted trends against the RF average. So far in W5 it looks like an ensemble of the top 3 models from teams DiDoKaPs / disinfectant injection / Lockdown / Northquay would be the best model to use, maybe with an increasing and decreasing ensemble weight on DiDoKaPs, and also adding Bernhard Klinger as the long-term model.</p>
<p>Update 0527</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Fc43a822af53d08235fb350e2d84e5642%2F527_1.jpg?generation=1590602927264613&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F3cb48b76a0f178945eff5a79415886d6%2F527_2.png?generation=1590602943333768&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F858af213fbe3c737f885add511663965%2F527_3.png?generation=1590602091760044&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F1fd2b5e9c16e4d04e53d8d10cddaabaa%2Fnedladdning%20(36).png?generation=1590602119748761&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F5890d15cd1360d20ecf405f3ab2ba750%2Fnedladdning%20(37).png?generation=1590602132315735&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F0041e80675046ab528b92796c945036c%2Fnedladdning%20(38).png?generation=1590602146614575&amp;alt=media" alt=""></p>
<p>Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F9a3f7b40cf89838e90ac2a05c9a72be1%2Fnedladdning%20(39).png?generation=1590602186808334&amp;alt=media" alt=""></p>
<p>Update 0528</p>
<p>Halfway into w5 we can see some patterns. So far in W5 it looks like an ensemble of the top 3 models from teams DiDoKaPs / disinfectant injection / Lockdown / Northquay would be the model to use, maybe with an increasing and decreasing ensemble weight on DiDoKaPs, and also adding Bernhard Klinger and Beluga as long-term models. I think my own model could predict with more accuracy using an NN in the first days. The model may also serve as one of several ensemble models, to balance the predicted trends against the RF average.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Febee6c3a32f9ecaa1a95d07a641a4cab%2F528_1.jpg?generation=1590686401792662&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F887154bd680f71b9b34732a180c61e89%2F528_2.png?generation=1590686437268926&amp;alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F40563ac8295ddcbed93440bbe6cc9926%2F528_3.png?generation=1590686459062886&amp;alt=media" alt=""></p>
<p>Additional charts - Visualizing the scores &amp; Dist to the top</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2Ffa2c2c4e0b75a0fbaa84a55c89d21d25%2Fnedladdning%20(40).png?generation=1590686488627164&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F4b3aa9675cd4b863a3b9c1fc24530627%2Fnedladdning%20(41).png?generation=1590686505247899&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F2ed3d07d272a13b8565353a8f0f2a325%2Fnedladdning%20(42).png?generation=1590686505659542&amp;alt=media" alt=""> Score and LB Top 20 <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3924325%2F8f9c27f10915172f14d503dbb596f100%2Fnedladdning%20(43).png?generation=1590686504696759&amp;alt=media" alt=""></p>
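<p>For readers who want to reproduce the prediction columns, here is a small illustrative Python helper for the day-ahead calc used above (the function and variable names are mine, not part of the scorecard itself):</p> <pre><code>def predict_ahead(today, yesterday, prev5d, days_ahead):
    """Momentum-style LB score extrapolation:
    TodayScore + (((TodayScore - YesterdayScore)
                   + ((TodayScore - prev5dscore) / 5)) / 2) * PredDayAhead
    """
    daily_trend = ((today - yesterday) + (today - prev5d) / 5) / 2
    return today + daily_trend * days_ahead

# Example: extrapolate 3 days ahead (lower pinball scores are better)
print(predict_ahead(0.231, 0.233, 0.240, days_ahead=3))
</code></pre>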
COVID19 Global Forecasting (Week 5)
Week 1 Winners
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The Week 1 competition concluded on Thursday and congratulations are in order for OsciiArt, PDD, Roger Li, and other top finishers. OsciiArt in particular put together a very nice notebook showing how to use lightgbm effectively with both provided and external data, and it has become a great benchmark for all four weeks. In general I recall that first week being quite hectic and it is impressive to look back and see the quality of models put together in such a short time frame.</p>
COVID19 Global Forecasting (Week 5)
~1st Place Solution LGBM with some adjustments
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Summary</h2> <p>LGBMs with quantile regression were trained on time-series and geo features for short-term predictions. Aggressive 1w average smoothing was used for long-term predictions. Due to the large number of small locations, the top 30 countries/states had to be adjusted manually.</p> <strong>Notebooks</strong> <p><a href="https://www.kaggle.com/gaborfodor/c19w5-create-submission">https://www.kaggle.com/gaborfodor/c19w5-create-submission</a> <a href="https://www.kaggle.com/gaborfodor/c19w5-check-submission">https://www.kaggle.com/gaborfodor/c19w5-check-submission</a> <a href="https://www.kaggle.com/gaborfodor/c19w5-train-lgbs">https://www.kaggle.com/gaborfodor/c19w5-train-lgbs</a></p> <h2>Feature Extraction</h2> <ul> <li>Population</li> <li>Latitude, Longitude</li> <li>Day of week</li> <li>Share of total cases for each day of week</li> <li>Rolling mean/std for 1w, 2w, 3w</li> <li>Cumulative totals</li> <li>Confirmed - Fatality rate</li> <li>Trend from last 2-3 weeks</li> <li>Features normalized by population</li> <li>Nearby features based on the closest 5-10-20 locations <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F1a011ee06339075039995790bd86b046%2FScreenshot%202020-05-31%20at%2010.32.51.png?generation=1591101401062776&amp;alt=media" alt=""></li> </ul> <p>Features were rescaled and rounded to 1-2 decimals to decrease overfitting.</p> <h3>External data</h3> <p>I started by searching for public US county-level demographic data (age, income, population density, COVID-19 lockdown info, etc.). I found a few useful sources, though I did not have time to clean and merge them. The only external data I used was the geocoded lat-lon coordinates for each location.</p> <h2>Modeling</h2> <p>For each target/quantile/forecast lag, a separate model was trained with location-based 5-fold CV and early stopping based on pinball loss. Models were only trained to predict the next 1-14 days (a minimal training sketch is included at the end of this post).</p> <ul> <li>Trained a bunch of LGBMs with random parameters to blend</li> <li>Sample weighting based on location weights and time decay</li> </ul> <h2>Post processing</h2> <ul> <li>Clipped negative predictions at 0</li> <li>Made sure the 0.05 (0.95) quantile predictions are not higher (lower) than the median</li> <li>Smoothed daily predictions (<code>Y[t] * 0.66 + Y[t-1] * 0.33</code>)</li> <li>For the US country total, used the state-level rollups for the median</li> <li>Manually inspected and adjusted the top 30 countries</li> <li>Flat long-term predictions based on the last predicted weekly average</li> <li>Small daily decay was added to the 0.05 quantile and the median</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F1788c82f1c31a3e3900073134256424a%2FScreenshot%202020-06-02%20at%2014.39.01.png?generation=1591101619319599&amp;alt=media" alt=""></p> <p>Not clear. I have the worst score for Brazil among the top submissions; excluding Brazil I would be first, and focusing only on the US, 3rd. I have good guesses for Russia, Mexico.
<a href="https://www.kaggle.com/gaborfodor/w5-top-submissions">https://www.kaggle.com/gaborfodor/w5-top-submissions</a> has a section for the unexpected regions for each team.</p> <p>It will be close for sure <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F9d4c5fb091a0ea296af174b8601d6efc%2FScreenshot%202020-06-04%20at%2015.48.39.png?generation=1591278553258974&amp;alt=media" alt=""></p> <p>Thanks, I don't think I will be able to reach you, but I hope to get closer ;)</p>
COVID19 Global Forecasting (Week 5)
~5th Place Simple Linear QuantReg Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, unfortunately late subs are not enabled :(</p> <p>The task here is a time-series forecast of Covid19 daily numbers of Cases and Fatalities for countries and also for some US regions. But they also challenge us to predict the 5% and 95% quantiles of those numbers, and the metric chosen for that is the pinball loss; optimizing it isn't an easy task. The pinball loss pushes you to keep the actual values inside the 5-95 quantile range: every observation falling outside those limits is extremely penalized, so models need to take care when predicting the lower and upper limits.</p> <p>For this challenge I built a simple linear model using the <a href="https://www.statsmodels.org/devel/generated/statsmodels.regression.quantile_regression.QuantReg.html">QuantReg</a> regressor from the statsmodels lib. QuantReg turned out to be very good at estimating the quantile forecasts out of the box.</p> <p>To train the model I used data from 2020-04-01 to 2020-04-26, and to validate, from 2020-04-27 to 2020-05-10. Note that I used only 2 weeks for validation while the Private LB will be calculated over 30 days, so I expect some drift in my forecasts for the last 15 days of private data.</p> <p>Two independent models were trained, one for daily Cases and one for Fatalities. Both use similar features built from lags and rolling-window statistics. Since we forecast quantile bands, it is logical that a good feature could be a rolling window of the standard deviation. I added rolling std() windows of size 7, 14, 21 and 28, but due to a typo in my code only windows 7 and 28 are used in the final submission. I tried other kinds of rolling-window functions, but everything else decreased my validation scores. All my predictions in the validation period and on the private dataset are made recursively (the prediction for day t becomes the lag(1) of day t+1).</p> <p>Also, to avoid issues with the 5 and 95 quantiles close to the end of the 30-day period, I added a linear margin every day, so the 95 quantile forecast always increases and the 5 quantile forecast always decreases.</p> <p>The validation score for daily Cases is 0.2619 and for Fatalities 0.2341, ~0.248 on average over the two-week validation period. Right now, about 15 days of private data are already scored and my pinball score is 0.2301, pretty close to my calculated scores.</p> <p>Here is an example of my forecasts for US Cases and Fatalities: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F54836%2F9f43bc1d7f00db208c3af8d362aafe3f%2FScreenshot%20from%202020-05-29%2015-50-52.png?generation=1590778310937696&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F54836%2F4accb58cccc80b5beb662b047397a22e%2FScreenshot%20from%202020-05-29%2015-31-44.png?generation=1590777149092734&amp;alt=media" alt=""></p> <p>Link to my notebook <a href="https://www.kaggle.com/titericz/test-1?scriptVersionId=33803268">here</a>.</p>
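<p>A minimal sketch of the QuantReg setup described above (the lag/rolling features and data are placeholders in the spirit of the write-up, not the actual notebook code):</p> <pre><code>import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_quantile_models(train, feature_cols, target_col="daily_cases"):
    """Fit one linear quantile regressor per quantile with statsmodels QuantReg."""
    X = sm.add_constant(train[feature_cols])
    y = train[target_col]
    return {q: sm.QuantReg(y, X).fit(q=q) for q in (0.05, 0.5, 0.95)}

# Placeholder lag / rolling-std features
train = pd.DataFrame({"daily_cases": np.random.poisson(100, 300).astype(float)})
train["lag1"] = train["daily_cases"].shift(1)
train["std7"] = train["daily_cases"].rolling(7).std()
train["std28"] = train["daily_cases"].rolling(28).std()
train = train.dropna()

models = fit_quantile_models(train, ["lag1", "std7", "std28"])
</code></pre> <p>At inference time the median prediction for day t would be fed back as the lag(1) of day t+1, matching the recursive scheme described above, with the daily linear margin widening the 5/95 band.</p>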
COVID19 Local US-CA Forecasting (Week 1)
Top 4%-6th place solution Notebook
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Local US-CA Forecasting (Week 1) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all, I am sharing this notebook, with which I was able to achieve 6th place (top 4%) after putting in a lot of effort. Please check &amp; upvote if you find it useful, and please share your observations for further improvement. <a href="https://www.kaggle.com/mdmahmudferdous/covid-19-us-ca-forecasting-top-4-notebook-7th">https://www.kaggle.com/mdmahmudferdous/covid-19-us-ca-forecasting-top-4-notebook-7th</a> Thanks.</p>
CVPR 2018 WAD Video Segmentation Challenge
21th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CVPR 2018 WAD Video Segmentation Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition is a tough one; with images of more than 4K resolution, training a decent model takes ages. One look at the competition and the immediate thought is Mask R-CNN, given the multi-class instance segmentation problem.</p> <p>Our solution is a simple ensemble of RESNET50 and RESNET101 Mask R-CNN models. Training didn't stop improving for either model, but we were forced to stop halfway after a few epochs, wasting a lot of time, since generating predictions also takes very long. The ensemble we used is a simple average of the masks and a weighted average of the scores based on proximity (an illustrative sketch follows at the end of this post).</p> <p>The ensemble didn't help as much as we expected, and we only had time to test one set of parameters. It looks like we should have used our time to train more epochs instead. All in all a great experience, especially given the lack of code from kernels to build on. Looking forward to the top teams sharing!</p> <p>Ensembling worked for me, just not that much. @Tommy, would love to hear your solution 🙂</p>
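<p>The write-up shares no code, but a simple average-of-masks ensemble like the one described might look like this purely illustrative sketch (the names and the weighting scheme are assumptions):</p> <pre><code>import numpy as np

def ensemble_masks(mask_probs, score_weights=None, threshold=0.5):
    """Average per-pixel mask probabilities from several Mask R-CNN models
    for one matched instance; optionally weight models by their scores."""
    weights = np.ones(len(mask_probs)) if score_weights is None else np.asarray(score_weights, float)
    stacked = np.stack(mask_probs, axis=0)               # (n_models, H, W)
    avg = np.tensordot(weights / weights.sum(), stacked, axes=1)
    return avg &gt; threshold                               # final binary mask
</code></pre>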
CVPR 2018 WAD Video Segmentation Challenge
2nd place solution (based on Detectron)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CVPR 2018 WAD Video Segmentation Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Higher resolution is always a better option. My solution is based on Mask R-CNN. Source code and model are available at the link below:</p> <p><a href="https://github.com/Computational-Camera/Kaggle-CVPR-2018-WAD-Video-Segmentation-Challenge-Solution">https://github.com/Computational-Camera/Kaggle-CVPR-2018-WAD-Video-Segmentation-Challenge-Solution</a></p> <p>I didn't. But I think using temporal information is definitely a promising way to improve the prediction results.</p>
CVPR 2018 WAD Video Segmentation Challenge
Mask RCNN based solution (Keras and pytorch)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CVPR 2018 WAD Video Segmentation Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We have a team from Nvidia?</p> <p>It is the 3rd place solution now. I am also looking forward to seeing their solutions.</p> <p><a href="https://github.com/wwoody827/cvpr-2018-autonomous-driving-autopilot-solution">https://github.com/wwoody827/cvpr-2018-autonomous-driving-autopilot-solution</a></p> <p>Here is my solution to this challenge, based on Mask R-CNN. I used both the Keras and the PyTorch implementations. I also tried different backbones for Mask R-CNN. Hope this is useful!</p>
Carvana Image Masking Challenge
1 st place solution - code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Carvana Image Masking Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ul> <li>Blog post: <a href="http://blog.kaggle.com/2017/12/22/carvana-image-masking-first-place-interview/">http://blog.kaggle.com/2017/12/22/carvana-image-masking-first-place-interview/</a></li> <li>Code: <a href="https://github.com/asanakoy/kaggle_carvana_segmentation">https://github.com/asanakoy/kaggle_carvana_segmentation</a></li> </ul> <p>P.S. Happiness for everybody, free, and let no one go away offended! (C) A. and B. Strugatsky.</p>
Carvana Image Masking Challenge
3rd place solution (U-Net + Dilated Conv)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Carvana Image Masking Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><pre><code>ReduceLROnPlateau(monitor='val_dice_coef', factor=0.2, patience=3, verbose=1, epsilon=1e-4, mode='max') </code></pre> <p>I used the above settings, but I think it is better to use the validation loss as the metric.</p> <p>Using test-data predictions from another model as if they were ground truth. In the training phase, you can use both (X_train, y_train) and (X_test, y_test) for training.</p> <p>Thank you! I'm looking forward to your experiment results. I'm very curious.</p> <p>Thanks! I tried many things to make my model more complicated (post-processing, various augmentation methods, weighted loss, etc.), but none of them helped improve the score. I realized that a simple method is good for a simple task like this competition :)</p> <p>Yes. I used normal K-fold CV.</p> <p>Thanks for hosting such an exciting competition! I was really enthusiastic about spending my time on this competition! And thanks for the helpful code by @Peter and the excellent ideas by @HengCher Keng. I learned a lot from them.</p> <p>The competition repository is here. I put two scripts (my network script &amp; loss functions script) in it. <a href="https://github.com/lyakaap/Kaggle-Carvana-3rd-place-solution">https://github.com/lyakaap/Kaggle-Carvana-3rd-place-solution</a></p> <h2>My solution overview</h2> <ul> <li><p>I used 1536x1024 &amp; 1920x1280 resolutions.</p></li> <li><p>I used a modified U-Net. It has several dilated convolution layers in the bottleneck block (i.e. where the resolution of the feature maps is lowest).</p></li> </ul> <p>A detailed figure of my network architecture is here.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/225523/7428/network.png" alt="my_network" title=""></p> <p>The best score of this model is <strong>0.997193</strong> with only around 8.5 million parameters (trained on one of 6 folds, no TTA &amp; no ensemble, input resolution: 1920x1280). Averaging two predictions (TTA, original image &amp; flipped image) of the 0.997193 model reached <strong>0.997222</strong>. They would rank 6th place and 5th place on the LB respectively!</p> <p>I tried normal convolution layers instead of dilated convolution layers in the bottleneck block, and the score was significantly lower than with dilated convolutions (normal: 0.9905, dilated conv: 0.9918 @256x256).</p> <p>I also tried parallelized dilated convolution layers instead of stacking them, but it gave a lower score than the stacked architecture (a minimal sketch of the stacked bottleneck appears at the end of this post).</p> <ul> <li><p>Optimizer: RMSprop, lr = 0.0002, reducing the learning rate with ReduceLROnPlateau(), which is a Keras callback. Reducing factors were 0.2 &amp; 0.5.</p></li> <li><p>Data augmentation: only horizontal flip. Scaling, shifting, and HSV shifts resulted in overfitting for me.</p></li> <li><p>Batch size: 1, and no BN.</p></li> <li><p>Training a single model end to end takes around 2 days.</p></li> <li><p>Pseudo labeling: training on it simultaneously, or only during a pretraining phase.</p></li> <li><p>Loss function: bce + dice loss (I also tried weighting the boundary-pixel loss; it gave a similar result.
Fearing overfitting, I finally decided not to use it.)</p></li> <li><p>Ensemble: 5-fold ensemble @1536x1024 + 6-fold ensemble @1920x1280, weighted average. I weighted by the LB ranking of my submissions.</p></li> <li><p>TTA: only horizontal flip.</p></li> <li><p>Adjusting the threshold: I picked the threshold that gives the best score on the validation set and set it to 0.508. On the LB, it improves the score by only 0.000001.</p></li> </ul> <h2>Other</h2> <p>One of the biggest contributors to the score was training on pseudo-labeled data. I think pseudo labeling contributed so much because of the sheer amount of test data, from which we can get predictions close to the ground truth.</p> <p>I tried post-processing with pydensecrf, but only on cars that were difficult to mask. It gave me no improvement. As for how to choose the "difficult images", I calculated a multi-class version of the dice coefficient (I'm not sure I should call it that) over predictions from several models.</p> <p>Principal components of the feature maps output by the dilated convolution layers, each with a different dilation_rate. I think it's interesting. <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/225528/7429/dilation_rate=1.png" alt="1" title=""> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/225528/7430/dilation_rate=2.png" alt="2" title=""> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/225528/7434/dilation_rate=4.png" alt="4" title=""> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/225528/7431/dilation_rate=8.png" alt="8" title=""></p>
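<p>To make the bottleneck idea above concrete, here is a minimal Keras sketch of a stacked dilated-convolution bottleneck. The filter count and the 1-2-4-8 dilation schedule are illustrative assumptions based on the architecture figure, not the exact released code:</p> <pre><code>from tensorflow.keras import layers

def dilated_bottleneck(x, filters=512, dilation_rates=(1, 2, 4, 8)):
    """Stack dilated 3x3 convs at the lowest-resolution U-Net block so the
    receptive field grows quickly without any further downsampling."""
    for rate in dilation_rates:
        x = layers.Conv2D(filters, 3, padding="same",
                          dilation_rate=rate, activation="relu")(x)
    return x
</code></pre> <p>The parallel variant mentioned above would instead apply each dilated conv to the same input and concatenate the outputs; per the experiments, the stacked version scored better.</p>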
Carvana Image Masking Challenge
LB 10th place 0.9971 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Carvana Image Masking Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Mobo: Gigabyte AORUS GA-Z270X-Gaming 9. It's one of the few LGA1151 mobos with 4 double-width PCIe slots out there. Had to put a 1600W power supply on to be safe. During training, wall power is about 900W.</p> <p>Not really. I've got an i7-7700k overclocked to 5GHz, so it could still chew through augmentations pretty quickly.</p> <p>We created three models, all variants of Unet, and then used some post-processing techniques to more robustly detect antennas and car tops by colors. The three models were ensembled with an equal vote.</p> <p>1st model - Background detection. Inverted the mask image and created a Unet to detect the background (instead of the car). 5 up/downsamples. Then inverted the final background prediction mask to convert back to the car prediction. This model on its own scored 0.9970 and was the highest performing of the three. It seemed to do best on the areas underneath the car, wheels/spokes, and on detecting true car edges in difficult test images.</p> <p>2nd model - Classic Unet architecture to detect the car. 6 up/down samples.</p> <p>3rd model - Split images in half top/bottom, trained a Unet for each, and then concatenated the final results. Same architecture as the 2nd model, just on half images.</p> <p>Post-processing: It appeared that many of the antennas on the cars were non-continuous in the predictions. I identified these by counting small external contours within 50 pixels of each other and above the center of mass of the largest contour of size &gt; 100k pixels. I then cropped an image containing the antenna range and ran a simple Canny that ran through the already identified points. This seemed to clean up the antennas substantially, but my estimate of the final score impact was only around 0.00004 (a rough sketch of this fragment detection appears at the end of this post).</p> <p>Used Keras with a TF backend, and primarily ran on a 4x 1080 Ti setup. Found early on that running the Tis for many hours straight caused them to get too hot and throttle their speed, so I had to install watercoolers on each of them to keep them at stable temps.</p> <p>Another interesting finding that I didn't take full advantage of was the difference in the stage backgrounds. It was stated by Carvana in one of the posts that three stages were used, and if you look at enough images carefully you can identify the three distinct backgrounds. One of the three backgrounds performed substantially better (0.0001) across all cars when blocked by color, size, and rotation. I think there was an opportunity to increase the score by creating custom predictors for each of the backgrounds separately.</p>
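<p>The antenna-merging step is described only in prose; a rough OpenCV sketch of the fragment-detection part (the thresholds and helper names are my own assumptions, not the author's code) could look like:</p> <pre><code>import cv2
import numpy as np

def find_antenna_fragments(mask, min_main_area=100_000, max_frag_area=500):
    """Find small disconnected blobs above the main car contour that may be
    pieces of a broken-up antenna prediction."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    main = max(contours, key=cv2.contourArea)
    if cv2.contourArea(main) &lt; min_main_area:
        return []
    m = cv2.moments(main)
    cy = m["m01"] / m["m00"]  # vertical center of mass of the car
    return [c for c in contours
            if cv2.contourArea(c) &lt; max_frag_area
            and c[:, 0, 1].mean() &lt; cy]  # fragments above the center of mass
</code></pre>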
Carvana Image Masking Challenge
LB 11th(0.9971) solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Carvana Image Masking Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Basically: generate an edge image, then eliminate pixels from it as tile locations are picked.</p> <p>For the training set, it took 100 seconds with 8 cores and an overlap_ratio of 2. In retrospect, I should have added some more sanity-checking code to correct big holes far inside the car edge, etc.</p> <pre><code># assumes: mask (uint8 HxW), tile_size and overlap_ratio are defined by the caller
import random
import numpy as np
from skimage import morphology

tile_rcs = []  # collected (row, col) left-top tile coordinates
mask = mask &gt;= 127  # binarize the stage-1 mask
# 1-pixel-wide edge; cast to int so each edge pixel can hold a vote count
# (boolean subtraction is not supported by numpy)
edge = (mask.astype(np.int32)
        - morphology.binary_erosion(mask).astype(np.int32))
edge *= overlap_ratio  # each edge pixel must be covered this many times
hs = tile_size // 2
qs = tile_size // 4
while True:
    nzs = np.transpose(np.nonzero(edge))
    if len(nzs) == 0:
        break
    # Pick random pixels among non-zero values.
    for i in range(len(nzs) // 10):
        nz = nzs[random.randint(0, len(nzs) - 1)]
        if edge[nz[0], nz[1]] &lt;= 0:
            continue
        # Make it fit inside the valid area
        nz[0] = np.clip(nz[0], hs, 1280 - hs)
        nz[1] = np.clip(nz[1], hs, 1918 - hs)
        tile_rcs.append((nz[0] - hs, nz[1] - hs))  # Record left-top coordinates
        # Decrement the remaining vote count of every edge pixel under the tile
        edge[nz[0] - hs:nz[0] + hs, nz[1] - hs:nz[1] + hs] -= \
            edge[nz[0] - hs:nz[0] + hs, nz[1] - hs:nz[1] + hs] &gt; 0
        # Mark center area as 'done' to avoid sampling again from a similar location
        edge[nz[0] - qs:nz[0] + qs, nz[1] - qs:nz[1] + qs] = 0
        if nz[1] == hs or nz[1] == 1918 - hs:
            # Avoid oversampling on left/right edge cases
            edge[nz[0] - qs // 2:nz[0] + qs // 2, nz[1] - hs:nz[1] + hs] = 0
</code></pre> <ol> <li><p>Stage 1's output mask is used for generating stage 2's test tiles and to fill areas not covered by tiles. I thought the quality of the stage 1 output doesn't matter much, since details are corrected in the second stage anyway. But in reality, there were a few images with big errors.</p></li> <li><p>I think higher resolution in stage 1 would increase the score just a little bit. With a 320x480 image, details like antennas are lost, so higher resolution would help.</p></li> <li><p>I tried several other tile sizes. I saw heavy over-fitting with tile sizes smaller than 256. With a tile size of 64, it worked extremely well in local validation (it even reached 0.9981, though that was partially caused by the way I split train/val), but terribly on the public leaderboard. I also tried a tile size of 384, but it took too long to converge so I gave up. With 256x256 tiles, I got an okay model in a day.</p></li> </ol> <p>As for the code, I guess I need to clean it up first (for myself as well) and that can take some time, but I'll try to share it.</p> <p>My solution is in two stages.</p> <p><strong>Stage 1: Predict a rough outline of the mask with a 1/4-downscaled image</strong></p> <ol> <li>Scale down all images/masks to 320x480</li> <li>Train with a default UNET.</li> <li>Predict masks for the test set</li> </ol> <p><strong>Stage 2: Train/predict the mask only using images around edge areas.</strong></p> <p>This greatly reduces the amount of data to process, and I can use a more complex network in reasonable time.</p> <p>Details:</p> <ul> <li><p>Extract 256x256 tiles only along the edge of the masks from stage 1</p></li> <li><p>Used a deeper UNET - 6 downsample/upsample steps, but at each depth, conv layers with a smaller number of filters than the original UNET, to avoid an explosion of parameter counts.
Around 3 million parameters are used.</p></li> <li><p>During training, tiles are overlapped so that each edge pixel is covered twice.</p></li> <li><p>During inference, tiles are overlapped so that each edge pixel is covered at least three times, so each pixel's prediction is voted on by at least 3 predictions (see the attached image; a small numpy sketch of this voting scheme appears at the end of this post). UNET tends to predict badly around tile borders, but this method reduces the problem.</p></li> <li><p>Used standard augmentation (scale +-10%, flip left-right, brightness, contrast, pixel shift, etc.) and a basic dice_coeff as the loss.</p></li> <li>Used batch normalization, and to maximize its effectiveness,<br> each batch consists of random tiles from multiple images. I guess this helped the network converge much faster.</li> </ul> <p>I tried several other segmentation networks, but this one showed the best result by a large margin (0.0002) compared to the other models. The single-model LB was 0.9970, and with an ensemble of several folds of the same model, it got to 0.9971.</p> <p>I think this pipeline works really well considering its simplicity, but I made a big mistake - I didn't notice until today that some test outputs from stage 1 were horribly wrong, and I didn't have a chance to correct them. :( I would have found the issue early if only I had good visualization or simple sanity-check logic.</p> <p>The best lesson from this competition is that a good visualization/analysis tool is really important.</p> <p>Oh, Heng CherKeng, thank you very much. I learned a lot from your posting!</p>
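<p>A minimal numpy sketch of the overlapped-tile voting at inference (the model call, tile list and image sizes are placeholders):</p> <pre><code>import numpy as np

def vote_tiles(pred_fn, image, tile_rcs, tile_size=256, out_shape=(1280, 1918)):
    """Average overlapping tile predictions so every edge pixel receives
    several votes, which softens UNET's unreliable tile borders."""
    acc = np.zeros(out_shape, dtype=np.float32)  # summed probabilities
    cnt = np.zeros(out_shape, dtype=np.float32)  # votes per pixel
    for r, c in tile_rcs:
        acc[r:r + tile_size, c:c + tile_size] += pred_fn(image[r:r + tile_size, c:c + tile_size])
        cnt[r:r + tile_size, c:c + tile_size] += 1
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt &gt; 0)
</code></pre>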
Carvana Image Masking Challenge
LB 15th (0.9970) Some new ideas (and a bug!)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Carvana Image Masking Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>No, I just translated it using Google Docs, but it's basically a Unet with some residual blocks.</p> <p>It did on cases where the background was too similar to the car, so the net supposedly uses the L2 distance to discern... but it's difficult to say for sure without more comparisons.</p> <p>We had better predictions with RGB + X + CO (we also tested the other variations)... but unfortunately the race-condition bug, and the fact that we only had time to refine ~80% of the coarse masks, prevented us from getting a better score.</p> <p>The other obvious thing we could have done in hindsight was to be a bit more aggressive with augmentations... we just did flips... and I think we should have done small rotations + zooming.</p> <p>Our solution has two stages that can be run recursively (but we only ran them once because of lack of time) =&gt;</p> <p><strong>1. Predict a coarse mask (CO)</strong>. For this we used a regular UNET @ full resolution.</p> <p><strong>2. Refine the mask using patches around the mask contour</strong></p> <p><img src="http://s3.eu-central-1.amazonaws.com/kaggle-carvana-team73/patches-net.png" alt="Patchesnet" title=""></p> <p>This second network is a residual UNET (<a href="https://arxiv.org/pdf/1708.04747.pdf">https://arxiv.org/pdf/1708.04747.pdf</a>) that works with patches; we used <code>384x384</code> patches.</p> <p>The input of the network is <code>384x384x6</code> =&gt; RGB (<code>3</code>) + X (<code>2</code>) + CO (<code>1</code>)</p> <p>Here X is the L2 difference of the patch RGB vs. a computed background. For each car, using its coarse mask we (offline) computed the background using the 16 views and basic pixel statistics:</p> <p><img src="http://s3.eu-central-1.amazonaws.com/kaggle-carvana-team73/carvana-background.png" alt="Computed background" title=""></p> <p>When feeding the L2 difference, magenta pixels are substituted by gaussian noise with the same mean/variance as the non-magenta pixels; that's the first channel of X. The second channel is just a 0/1 mask of whether the pixel is magenta or not.</p> <p>Finally, CO is just the probabilities (saved as PNG files) of the coarse mask from the previous iteration.</p> <p>We computed the effective receptive field of our residual Unet and it was ~224 px. Since patches are going to overlap at inference, we weighted each pixel in the patch so that center pixels contributed more than edge pixels (ratio 1:4; a small sketch of such a weight map appears at the end of this post).</p> <p>For augmentation we just used random horizontal flips, both for training and TTA.</p> <p><strong>BUG</strong> Unfortunately we had a bug and some predictions came out like this: <img src="http://s3.eu-central-1.amazonaws.com/kaggle-carvana-team73/bad-prediction.png" alt="Bad prediction" title=""></p> <p>This is due (we think!) to a race condition in our Keras inference/generator pipeline... and it meant we had to blindly reject predictions whose refined mask was too different (dice &lt;0.999) from the coarse one... neglecting scenarios where the refined one made a lot of corrections.</p>
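<p>The center-vs-edge patch weighting can be sketched as a simple weight map; the linear ramp is an assumption, only the 1:4 edge-to-center ratio comes from the write-up:</p> <pre><code>import numpy as np

def patch_weight_map(size=384, edge_weight=1.0, center_weight=4.0):
    """Per-pixel blending weights for overlapping patches: border pixels get
    weight 1, the center gets weight 4, ramping linearly in between."""
    idx = np.arange(size)
    d1 = np.minimum(idx, size - 1 - idx)   # distance to the nearest border, per axis
    dist2d = np.minimum.outer(d1, d1)      # Chebyshev distance to the patch border
    ramp = dist2d / dist2d.max()           # 0 at the border, 1 at the center
    return edge_weight + (center_weight - edge_weight) * ramp
</code></pre>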
Carvana Image Masking Challenge
LB0.9972 - 5th Place Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Carvana Image Masking Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Overview</h2> <p>My overall pipeline looks something like this:</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/225124/7421/CarvanaPipeline.jpg" alt="Carvana Pipeline" title=""></p> <p>As per above, I only used a pretrained Resnet-50 based FCN for this competition (see "Future Improvements" for why), fine tuning all layers (Adam, learning rates from 1e-4 -&gt; 1e-6). Scales of 1280x1280, 1600x1280, 1918x1280, and 2010x1340 were used to generate 7 differently weighted models for the ensemble. In summary:</p> <ol> <li>1918x1280(HQ, TTA6x) [weight=0.2,public LB=0.9970+, private=0.9969]</li> <li>1918x1280(TTA6x) [weight=0.3,public LB=0.9970+,private=0.9969]</li> <li>1918x1280(HQ, TTA2x), Fold 2 [weight=0.2,public LB=0.9970+,private=0.9969]</li> <li>1600x1280(TTA4x) [weight=0.1,LB=0.9969]</li> <li>2010x1340(TTA2x) [weight=0.1,LB=0.9969]</li> <li>1280x1280(TTA6x) [weight=0.05,LB=0.9968]</li> <li>1280x1280(TTA6x, using only dice loss) [weight=0.05,LB=0.9968]</li> </ol> <p>(1918x1280 and 2010x1340 scales need &gt;8GB cards)</p> <p><strong>Augmentations</strong> Horizontal flips, rotations (up to 10 degrees), height/width translation of about 5%, and zooms of about +/-10%. There was a strong penalty for using strong augmentations - I turned down from an initial rotation of 30 degrees and I remember local and LB scores improving by about 0.0001. Did not sweep further or try other augmentations (contrast/brightness, etc.) TTA used both horizontal flips and slight vertical translations (up to 6x permutations). </p> <p><strong>CV</strong> CV=0.9970 for 1918x1280, 0.9969 for 1600x1280/2010x1340, and 0.9968 for 1280x1280, and their CVs were close to LB. I had some earlier runs with random splits (not split by car), and these seemed to give a slightly higher CV than LB (e.g. a CV of 0.9971+ would yield an LB of 0.9970), but I kept them anyway since I didn't want to retrain everything again.</p> <p><strong>Loss</strong> binary cross entropy - log(dice). This gave slightly better results in local score than dice only. A batch size of 1 was used. Did not try weighted losses.</p> <h2>Post-processing</h2> <p>I spent the bulk of my time (while waiting for models to complete training) implementing post-processing. Like team "David", I saw a lot of small artifacts representing the antenna post-prediction, and found a way to merge them (first check if the artifacts represent a straight line [check their R^2], then train a linear regressor and add a point inside the main hull, then create a convexHull around these artifacts + thresholding to improve the shape). Although the process was complex and the score impact minor, it helped. </p> <p>I also had another step to close all holes inside the main contour (within a certain distance from the edge, to avoid closing sidebars or wheels), and also close deep U-shaped concave contours (U-shaped only, to avoid closing, say, the front/back of a pickup). 
This helped significantly in some cases, especially on the van, and I believe that this helped a lot for the private LB.</p> <p>Finally, one more post-processing step was added to remove artifacts some distance away from the main contour. I had two thresholds - one to determine a far away but small artifact/contour (to avoid purging unmerged antennas), and a close threshold to determine large objects close by, and purge them.</p> <p>What was the impact of these fixes? Looking at the private LB showed an improvement from 0.997010 (before post-processing) to 0.997209 (after post-processing) - a big jump!</p> <p>In addition, I ran a visual check on the test predictions using the following criteria:</p> <ol> <li>Stability of ensembles (i.e. for each model, the pixel sum variance)</li> <li>Number of contours (i.e. if too many contours, the prediction is dirty)</li> <li>Area of the second largest contour (i.e. same idea as #2, if there is a second contour that is large, something is off)</li> </ol> <p>This helped to catch some issues like the van, the single blue car (which was noisy for any architecture/scale including UNET), etc.</p> <h2>Other Notes</h2> <p>Earlier on I stopped all progress once I found out the LB was broken and resumed only once it was fixed :) (so a few weeks lost there). Also, I only purchased a GTX1080Ti about 3 weeks before the end of the competition - this was required to train scales &gt; 1600x1280 for the pretrained model.</p> <h2>Hardware Used</h2> <ol> <li>GTX1080Ti - purchased 3 weeks prior to the end of the competition, so not much mileage. Needed for 1918x1280 and 2010x1340.</li> <li>GTX1080 </li> <li>GTX1070 - inferencing / TTA only</li> </ol> <h2>Future Improvements</h2> <p>I only used a Resnet-50 FCN here (rather boring, yes), since from a quick experiment I saw comparatively better performance for the pretrained Resnet vs UNET at similar scales (but without trying to crop or tune layers further on the UNET). </p> <p>In retrospect - that was not a good decision since my ensembles turned out to be so-so (without good diversity), but I ran out of time with my limited hardware to attempt other architectures.</p> <p>Also, I tried to use Resnet-101/152 as well but these ran out of memory beyond 1024x1024, and fine tuning only certain layers (e.g. deep layers) did not give improvements over tuning all layers in Resnet-50.</p>
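<p>As a rough illustration of the artifact-removal step in the post-processing section above, here is a sketch with illustrative thresholds (not the author's values) of one plausible reading of the two-threshold rule: keep the main car component, keep small far-away blobs that may be unmerged antenna pieces, and purge the rest:</p> <pre><code># Simplified sketch with made-up thresholds; the author's actual rule
# combined a "far but small" check (spare possible antennas) with a
# "large and close" check (purge).
import cv2
import numpy as np

def purge_artifacts(mask, antenna_area=400, near_dist=100):
    mask = (mask > 0).astype(np.uint8)
    n, labels, stats, cents = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 2:  # background + at most one component: nothing to purge
        return mask
    main = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # largest blob = car
    keep = labels == main
    for i in range(1, n):
        if i == main:
            continue
        area = stats[i, cv2.CC_STAT_AREA]
        dist = np.linalg.norm(cents[i] - cents[main])
        if area < antenna_area and dist > near_dist:
            keep |= labels == i  # far but small: possibly an antenna piece
    return keep.astype(np.uint8)
</code></pre>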
Carvana Image Masking Challenge
My solution with code Public 99.71 but Private 99.67
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Carvana Image Masking Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ol> <li><strong>Framework</strong>: Tensorflow(Keras)</li> <li><strong>Network</strong>: customized Unet, with reference to Wide Residual Networks and Squeeze-and-Excitation Networks <a href="https://github.com/fujisanx/kaggle/tree/master/Segmentation">https://github.com/fujisanx/kaggle/tree/master/Segmentation</a></li> <li><strong>Training</strong>: 640x960 (640x959 + 1px padding), rotation, horizontal flip, scaling, HSV change, gamma correction, random erase</li> <li><strong>Inference</strong>: 640x960 (crop to 640x959), upsized with cv2.INTER_CUBIC</li> <li><strong>Validation</strong>: single 20% hold-out</li> <li><strong>Test set predictions</strong>: 3 model predictions (NNs with slightly different parameters) with TTA (flips).</li> <li><strong>Hardware</strong>: GTX1080ti x 1</li> </ol> <p>Japanese blog <a href="http://zero-ai.hatenablog.com/entry/2017/09/28/200328">http://zero-ai.hatenablog.com/entry/2017/09/28/200328</a></p>
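<p>A minimal sketch (assuming Keras-style models that output masks of the same spatial shape; names are illustrative) of the inference described above - averaging 3 model predictions with horizontal-flip TTA:</p> <pre><code># Hedged sketch: `models` is a list of 3 trained networks, `batch` a
# (N, 640, 960, 3) array already padded from 640x959 to 640x960.
import numpy as np

def predict_flip_tta(models, batch):
    preds = []
    for model in models:
        preds.append(model.predict(batch))
        flipped = batch[:, :, ::-1, :]                        # horizontal flip
        preds.append(model.predict(flipped)[:, :, ::-1, :])   # un-flip output
    return np.mean(preds, axis=0)
</code></pre>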
Cassava Leaf Disease Classification
#77 LB - some low-tech ideas
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It does seem like it.</p> <p>Just the <strong>vit_base_patch16_384 / EnetB4 / Resnext blend</strong> (no weighting, just unweighted full image+center crop+tta) scored:<br> Priv/Public 0.898 0.905</p> <p><strong>Adding crop/tta weightings</strong> to the above 3:<br> Priv/Public 0.899 0.903</p> <p>Adding in the <strong>EnetB3 and vit_small_patch16_224</strong> (even though they had poor accuracy by themselves) and their weightings gave<br> Priv/Public 0.900 0.906</p> <p>So the last step is probably borderline (made more of a difference on public LB, did a little better overall but hard to see exactly with 3 decimal places!) but as far as I can see, optuna came up with some decent weightings, unless it was just good luck.</p> <p>I am actually fortunate that I ran the 3rd of the above submissions, because otherwise I would possibly have gone with the higher Public LB score / lower private as at least one of the 2 selected subs, even though Optuna said CV would be higher with weightings.</p> <p>Edit: should also add, Optuna analysis provided this improvement (assuming it's not just good luck) with probably non-ideal data, as I only had the full 5-fold ensemble for efficientnetb4. Some images only had OOF predictions from 2-3 models.</p> <p>Thanks! I read and upvoted your notebook during the competition :-)</p> <p><strong>Disclaimer:</strong> I don't have background knowledge in image classification etc. - just learnt what I could from Kaggle, other competition posts &amp; notebooks over last few months. I'm just sharing this as it's possible there are some ideas that others did not try.</p> <p>Really enjoyed learning from this comp though by the end of 3 months I think I was happy to end and spend less time tracking results of model runs for a while regardless of where I landed up on LB :-) Thanks to everyone who posted very insightful notebooks and discussion posts.</p> <p>Landed up simply following anything which improved both CV and LB, given the problems in correlating the two. In retrospect, should probably have had a little more faith in CV (as many people on forum were saying!).</p> <p><strong>Things I did do:</strong></p> <p>Dropped some of the likely mislabeled or at least potentially misleading healthy samples (around 150) based on model predictions. Dropping a lot of other samples seemed to make public LB drop and I felt I could not be confident it was helpful.</p> <p>Pretrained on 2019 images (used features to create 50 labels based on clusters). This was probably mostly practical as I felt like the model then could be trained faster from that point. I don't think I saw any specific edge from including 2019 data in training directly, but it may have been that my experiments were not sufficiently thorough. It seemed to me that adversarial analysis of 2019 vs 2020 extracted features showed some significant differences in the average image (I didn't go into depth on this). 
Rightly or wrongly, this also made me wonder whether the image test set in this comp might have some modest differences compared to the train - which made me lean towards things which improved both CV and LB.</p> <h2>Individual Classifiers</h2> <p>Individual classifiers used in the blend had CVs in the range of 0.905 (EnetB4, VIT) down to around 0.89. The CV numbers include some TTA (different crops).</p> <p>Trained with some variation in settings:<br> Pretrain (2019) - 3 epochs with lowish / slightly increasing LR. More than this did not really produce any benefit I could see.</p> <p>Training Loss = bitempered ('t1' : 0.6, 't2' : 1.4) or TaylorCrossEntropyLoss</p> <p>Augmentations: CutMix p=0.5 (most of the models), CoarseDropout, Cutout (some models), other standard Albumentations (flip/rotate/hue/brightness)</p> <p>Learning rate: for main training (after pre training) started around 1e-04 and declined over around 8-15 epochs. I found that a few final epochs with a low LR could usually squeeze out a little more validation accuracy.</p> <p>Validation accuracy - not sure if this is the best approach, but moved from random crop to centre crop during the comp to have a more stable validation trend. Tried to look at validation loss as well as accuracy and to keep training accuracy below validation.</p> <h2>What made a difference:</h2> <p>From Public 0.91 LB (best efficientnet 5-fold with TTA) to 0.96 was:</p> <p><strong>Blending</strong><br> Analysis of image crops - realised from various iterations that just random TTA might not be the best approach.</p> <p>Therefore tried weighting of Classifier + Crop with Optuna analysis of CV predictions. This was imperfect because I didn't actually have 5 models of each type, so I did a bit of rounding where the Optuna outputs looked a bit strange, and dropped models/crops with very low weighting to control total run time.</p> <p><strong>Classifiers:</strong></p> <p>5 x tf_efficientnet_b4_ns, 512*512 image size</p> <ul> <li>1 x full image weight 0.082631</li> <li>1 x centre crop weight 0.102575</li> <li>4 x tta random resized crop weight 0.042924</li> </ul> <p>4 x tf_efficientnet_b3_ns, 384*384 image size</p> <ul> <li>1 x full image weight 0.028432</li> </ul> <p>3 x vit_base_patch16_384, 384*384 image size</p> <ul> <li>1 x full image weight 0.037909</li> <li>1 x centre crop (512*512 resized) weight 0.126364</li> <li>5 x tta random resized crop weight 0.063182</li> </ul> <p>2 x resnext50_32x4d, 384*384 image size</p> <ul> <li>1 x full image weight 0.1</li> <li>1 x centre crop (512*512 resized) weight 0.1</li> <li>5 x tta random resized crop weight 0.078689</li> </ul> <p>2 x vit_small_patch16_224, 224*224 image size</p> <ul> <li>1 x full image weight 0.075</li> </ul> <p>All TTA were limited to a scale of around 0.37 - 0.85 (in Albumentations RandomResizedCrop) based on some limited CV experiments.</p> <p>I'm not really sure that I understood all the weights that Optuna allocated, in particular the weighting given to vit_small_patch16_224, but the LB increased roughly in line with what Optuna calculated for the CV improvement, and the end leaderboard change was relatively small as far as I can see compared to the general shakeup. Changing a few things in my final submissions yielded no improvements, though I'm sure many could be found with time. 
</p> <p>Am not sharing notebooks as a) they are not particularly organised and b) there is nothing special I think in my training / use of optuna (plenty of other optuna notebooks out there!).</p> <p>Anyway, I look forward to learning from the top solutions :-)</p>
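<p>For readers curious what such a weight search can look like, here is a hedged sketch (variable names like <code>oof_preds</code> and <code>labels</code> are hypothetical) of optimizing blend weights with Optuna against out-of-fold CV accuracy, in the spirit of the analysis described above:</p> <pre><code># Sketch only: `oof_preds` maps "classifier+crop" names to (n_samples, 5)
# probability arrays; `labels` holds the true classes.
import numpy as np
import optuna

def objective(trial):
    weights = {name: trial.suggest_float(f"w_{name}", 0.0, 1.0)
               for name in oof_preds}
    blend = sum(w * oof_preds[name] for name, w in weights.items())
    return (blend.argmax(axis=1) == labels).mean()  # OOF accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=500)
print(study.best_params)
</code></pre>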
Cassava Leaf Disease Classification
10th place solution (23rd public)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I don't know if it's correct to call it "zoom" (like I did). But yes, I used the RandomResizedCrop augmentation during training, and the dataset also consists of different kinds of images - from close-ups to photos of fields. So this kind of TTA should help to extract more diverse features (see the nice explanation about image size &amp; patterns by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/160147" target="_blank">https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/160147</a> ). Another reason I tried to use center crop is to "focus" on the leaves in the center of the photo. TTA with crops could affect close-up photos, but the CV score was always better.</p> <p>Thank you!</p> <p>thanks, fixed</p> <p>It was my first Kaggle competition, and my goal was to jump into the top 200 and get a silver medal, but luckily I ended up with gold, and I'm very happy with the result. On the other hand, I understand that it was a lottery, and many participants have better models, but they did not select them. I decided to select my best LB (0.9079, 23rd) and one of the best CVs with a high LB score (0.9066 LB, top-70). The last one, as I expected, gave me 10th place on the private LB; funny that this was my final submission in the competition, and I even called it "good_luck" :)</p> <p>But anyway, it looks like the guys from 1st place (and probably others, we will see) have built something special, and many other solutions in the 0.899-0.903 range are very close. It's also sad that some people who shared good ideas and notebooks didn't get the place they expected. I want to thank <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a> (Heroseo) for the great notebooks and <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for so many ideas shared.</p> <p>Below you can find the key items of my solution. 
My solution is pretty simple, and I think many participants used a similar approach, but perhaps someone will find it useful.</p> <h3>Approach (tldr)</h3> <ul> <li>Prevent overfitting on noise, but don't clean or re-label the dataset.</li> <li>Trust your CV, but use the public LB as a double-check (towards the end of the competition, when I saw such close results, I decided to trust CV even more, especially given the number of test samples in LB and CV).</li> <li>Key things that worked for me: noise removal (~2-3%), OUSM loss, mixup/cutmix, TTA</li> </ul> <h3>Submissions</h3> <h6>Single models (5 folds, 4xTTA)</h6> <table> <thead> <tr> <th>#</th> <th>Model</th> <th>Dataset</th> <th>CV</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>model-1</td> <td>DeiT-base-384</td> <td>2020 + 2019 data</td> <td>0.901</td> <td>0.9043</td> <td>0.8950</td> </tr> <tr> <td>model-2</td> <td>EffNet-B4</td> <td>2020 data</td> <td>0.901</td> <td>0.9048</td> <td>0.8975</td> </tr> <tr> <td>model-3</td> <td>EffNet-B4</td> <td>2020 + 2019 data</td> <td>0.906</td> <td>0.9025</td> <td><strong>0.9010</strong></td> </tr> </tbody> </table> <p>Note: "<em>2020 + 2019 data</em>" means upsampling for minor classes using the 2019 dataset (duplicates were removed; only 2020 is used for testing), and model-3 is better not only because of the 2019 data, but rather due to mixup/cutmix. And yes, as others have already mentioned, there are many unexpected scores on the private LB for models I didn't select, e.g. one of the model-1 folds has a 0.899 private score (while the 5-fold ensemble has 0.895).</p> <h6>Ensembles</h6> <table> <thead> <tr> <th>(just avg)</th> <th>LB</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>model-1 + model-2 + model-3</td> <td>0.9066</td> <td><strong>0.9017</strong></td> </tr> <tr> <td>model-1 + model-2</td> <td>0.9079</td> <td>0.8998</td> </tr> <tr> <td>model-1 + model-3 (not selected, CV=0.907)</td> <td>0.9066</td> <td>0.9013</td> </tr> </tbody> </table> <p>Due to my vacation, for the last 2 weeks I didn't train new models (though there was some potential in improving the 3rd model), but only tried to make an ensemble from what I already had.</p> <h3>Noisy data</h3> <p>It was quite obvious that the private data is also noisy. After analyzing the dataset and reading various articles on cassava diseases, I assumed that there are different types of noise, so eliminating all of the noise can be risky. My approach was:</p> <ul> <li>Remove only ~2-3% of the data (noise) using confident OOF predictions to reduce the overall % of noise in the dataset, but keep "useful" noise.</li> <li>Use part of the OUSM loss (without re-weighting) - <a href="https://arxiv.org/pdf/1901.07759.pdf" target="_blank">https://arxiv.org/pdf/1901.07759.pdf</a> . I enable it after N epochs to let the model train normally before it starts to memorize the noise, and I also modified it a little bit to work with a smaller batch size of 16 (i.e. removing the loss for even 1 sample per batch is too much).</li> <li>Standard but strong augmentations to prevent overfitting (to train longer).</li> <li>Early stopping. At some point, I noticed that my validation accuracy was still improving slightly while the train accuracy was already too high; it didn't look like classic overfitting. So I tried early stopping by max train accuracy, and it helped to climb the LB with the same CV score. Later, when I added mixup/cutmix, that behavior disappeared (less overfitting).</li> <li>Mixup/Cutmix (I also disable it completely for the last 3 epochs). 
Unfortunately, I tried it too late… and just trained 2 models with some default parameters. But after looking at the private LB scores it turned out to be an important part of my best model (both CV and private LB). In the mixup paper ( <a href="https://arxiv.org/pdf/1710.09412.pdf" target="_blank">https://arxiv.org/pdf/1710.09412.pdf</a> ) you can also find that it helps in training with corrupted labels.</li> </ul> <p>There are other noise-robust losses and loss correction techniques, but I didn't try them due to lack of time.</p> <h3>TTA</h3> <p>I rarely use TTA in my work due to performance reasons (except when it is offline processing and it provides significant improvements), so it was a surprise that it could help so much in competitions.<br> TTA gave me a stable 0.001-0.002 improvement in all experiments for both CV and LB. I used horizontal/vertical flips + zoom (crop 384 -&gt; resize to 512).<br> The "zoom" augmentation was especially good for both DeiT and EffNet. When I found that, I even tried to build an additional classifier to pre-process all images (both for training &amp; testing) to crop non-close-up samples. The classifier (close-up or not) worked very well, but the CV score dropped slightly, so I decided not to proceed with that idea.</p> <h3>Distillation</h3> <p>I previously used knowledge distillation, self-distillation, … several times at work <a href="https://towardsdatascience.com/distilling-bert-using-unlabeled-qa-dataset-4670085cc18" target="_blank">https://towardsdatascience.com/distilling-bert-using-unlabeled-qa-dataset-4670085cc18</a> , and it gave me the highest single-model CV score (0.91+) in this competition. But I didn't use it because I was afraid of an implicit leak through the soft labels (or weighted soft + ground truth) that I generated from the OOF predictions. But all my distillation experiments were done with a weaker model/teacher, and later I did not try to use it again; perhaps it could have helped.</p> <h3>Other things that worked well for me</h3> <ul> <li>Label smoothing (0.2 - 0.3)</li> <li>Sometimes Taylor Loss (+label smoothing) was better than Cross-Entropy; I didn't try Bi-Tempered loss.</li> </ul> <h3>Training</h3> <ul> <li>GPU: I used my 2080TI and Colab Pro.</li> <li>PyTorch</li> <li>Adam, cosine annealing with a warmup, 16 batch size, gradient accumulation, fp16. </li> </ul> <p>The most important thing I want to point out is that I learned something useful from this competition - various strategies for dealing with noisy data. I have not tried many of them due to lack of time, but I am sure that knowledge will help in my work.</p> <p>Thank you to the organizers and all the participants. Let me know if you have any questions and see you in the next competitions! </p> <p>Thanks.<br> OOF - out-of-fold predictions.</p> <p>Hi <a href="https://www.kaggle.com/yutoshibata" target="_blank">@yutoshibata</a>,<br> I used center crop (384x384) -&gt; resize (512x512) for Effnet, and just crop for DeiT.</p> <p>I just averaged the predictions of all the models without any weighting (if I understand your question correctly). I didn't experiment much with ensembles; I spent most of my time training a single model, as that's what was interesting to me.</p>
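<p>For reference, here is a minimal PyTorch sketch of the OUSM idea described above (my own simplified reading of the technique, not the author's exact code): compute per-sample losses and drop the <code>k</code> highest-loss samples in each batch, on the assumption that they are likely mislabeled:</p> <pre><code># Simplified OUSM-style loss; k=1 echoes the "even 1 sample per batch"
# remark above for small batch sizes.
import torch
import torch.nn.functional as F

def ousm_loss(logits, targets, k=1):
    losses = F.cross_entropy(logits, targets, reduction="none")
    if k > 0 and losses.size(0) > k:
        # keep all but the k largest per-sample losses
        keep = torch.topk(losses, losses.size(0) - k, largest=False).indices
        losses = losses[keep]
    return losses.mean()
</code></pre>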
Cassava Leaf Disease Classification
123rd place (Silver) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is for sure the best part of the Kaggle journey - learning new stuff :) Keep it up and good luck!</p> <p>Now honestly it depends (re Keras/Tensorflow). Pytorch is very good for research/experimentation/quick&amp;dirty iterations - which makes it of course super fit for Kaggle. But is it that good for production/real-world applications (where performance/stability/scalability are crucial) - this is arguable (in my opinion). For example, I use ONLY TF in my job and also for my side projects. My point is - the picture you see here on Kaggle is a bit biased (re technologies in use/popularity) - and doesn't necessarily reflect the real-world situation. TF is dominating the industry with a huge margin - and for a good reason :) (just like pytorch is dominating the research/academic fields). Good luck with your next competition - cheers! </p> <p>Don't think you failed mate - you did it all correctly. It was the super noisy (and unstable) data unfortunately.</p> <p>It was definitely quite … unstable (to avoid using stronger words :D ). Not that that's anything unusual lately. For me it was (unfortunately) more of a missed opportunity :( The model that would have brought me gold was among my final candidates (and performed best among them). But I didn't trust it - it kept seeming to me that the next best one was more stable. And so I finished with silver … :( Keep walking…</p> <p>And yes I use pytorch</p> <p>Hey, usually when you predict from a cross-validated model, you take the best epoch (weights) for each fold, predict from it - and then average the results to create the final prediction. You can make more predictions from each fold though - like instead of one (best), take the 2-3 best epochs (weights), predict from them and average at the end. This is kind of "self blending" the same model. For example if you have 5-fold CV - and you choose and predict from the 3 best epochs for each fold - you will end up averaging 15 predictions at the end. That will of course be more stable than averaging just 5 predictions (one (best epoch) from each fold). I achieved pretty much the same result though by training the model on the full data (not using CV) - and then averaging the predictions from the 3-4 best epochs. Basically achieving stability with 3-4 predictions rather than 15. I still used CV to estimate the performance of the model and choose the (probably) best epochs to blend. Is this a good idea - depends on the particular case of course. But it is a valid approach. 
Please let me know if something is still unclear :)</p> <p>Hey Dear Kagglers :)</p> <p>First of all, congratulations to all winners and participants!<br> And many thanks to all contributors for their amazing kernels and topics!</p> <p>Wanted to put my 2 cents in - so you can find a brief overview of my solution below:</p> <h3>General approach</h3> <p>I noticed the CV results were quite unstable overall - I was observing very significant discrepancies in validation and LB scores between the different folds of the same models (mostly because it is not easy to stratify the noise (probably :) ) ). When blending the 2-3 best epochs from each fold it kind of stabilized - but it was not a very efficient way to do so in my opinion (especially considering the submission time limitations of the code competitions). I discovered, though, that I was able to produce better/equivalent results by blending the 3-4 best epochs of a model trained on the full set (stabilizing the models "horizontally" in a way). This also allowed me to include way more (diverse) models in my submissions. <br> And so I did:</p> <ul> <li>Final models were trained on the FULL data.</li> <li>Still used CV for validation/evaluation purposes only.</li> </ul> <p>Was this the best approach? Well, on one side I perfectly survived the shakeup. On the other - I failed to identify my best submission (which would have been in the money/gold medal zone: <a href="https://www.kaggle.com/gpamoukoff/private-lb-0-902-blend-effnet-b3-b4-vit-resnext50)" target="_blank">https://www.kaggle.com/gpamoukoff/private-lb-0-902-blend-effnet-b3-b4-vit-resnext50)</a>.</p> <h3>Models</h3> <p>Trained with the aim to be as diverse as possible:</p> <ul> <li>effnet b3/b4, vit, xception, resnext50 architectures</li> <li>Label Smoothing</li> <li>Various loss functions: CE, Symmetric CE, Taylor CE</li> <li>Various schedulers: Gradual Warmup/CosineAnnealingWarmRestarts</li> <li>Augmentation: played mostly around crops/cutmix apart from the standard techniques.</li> <li>Data:<ul> <li>only 2020</li> <li>pretrain on 2019, train on 2020</li> <li>train on both (one pass)</li></ul></li> </ul> <h3>Ensemble:</h3> <p>Hierarchical blend:</p> <ul> <li>Blended different epoch/best-of-kind models first</li> <li>Then blended the already blended "super" models</li> <li>Used CV to optimise the weights</li> </ul> <p>In short - could have been way better. Next time. Keep walking…</p>
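<p>A small sketch of the "horizontal" stabilization described above - averaging predictions from the few best epochs of a model trained on the full data (<code>predict_probs</code> is a hypothetical helper, not code from this solution):</p> <pre><code># Hedged sketch, assuming PyTorch; `checkpoints` holds the 3-4 best
# epoch weights picked via CV; `predict_probs` returns (n_samples, n_classes).
import numpy as np
import torch

def blend_best_epochs(model, checkpoints, loader):
    preds = []
    for ckpt in checkpoints:
        model.load_state_dict(torch.load(ckpt))
        model.eval()
        preds.append(predict_probs(model, loader))  # hypothetical helper
    return np.mean(preds, axis=0)  # "self blend" of the same model
</code></pre>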
Cassava Leaf Disease Classification
14th Place Solution & Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/raivokoot" target="_blank">@raivokoot</a> almost :) The important thing is that I treat models that have the same meta-parameters but come from different training folds as <strong>one model</strong>. You can call it <code>k</code> versions of one model. For each of them, the predictions are only computed for the validation fold. Together, they cover the whole dataset since we have multiple versions. </p> <p>Next, I add a new model - a model with different meta-parameters from the first one. I also train <code>k</code> versions of that model and obtain a set of out-of-fold predictions for the whole dataset. Now I have two models where all predictions are out-of-fold, and I can use them as features to build a stacking classifier. </p> <p>True. Not all ensembles are useful in the real world though. Big respect to the teams who were able to score higher with a single model!</p> <p>True. You never know what works best on another dataset, so decisions like this usually come from trial and error.</p> <p><a href="https://www.kaggle.com/mrinath" target="_blank">@mrinath</a> The idea is to use a classifier with pre-trained ImageNet weights to extract some additional information from images without the need to train a model on the cassava dataset. For instance, <a href="https://www.kaggle.com/lizzzi1" target="_blank">@lizzzi1</a> noticed that images classified by the ImageNet model as "apple" tend to be unusually green, "banana" images have many yellow leaves, whereas images with stems are classified as "walking stick". Using these predictions from the ImageNet model in the stacking ensemble helped to add a little bit of signal in addition to the other classifiers.</p> <p>Hi <a href="https://www.kaggle.com/raivokoot" target="_blank">@raivokoot</a>! Ideally, you would like to have an additional holdout set or perform nested cross-validation for the meta-classifier. In practice, this can be unstable on small datasets like the one we have, or become time-consuming if you need to train base models multiple times. Using the public LB would also be dangerous in terms of the risk of overfitting, since it only contains about 5,000 images.</p> <p>What we did here was simply the following:</p> <ul> <li>split data into <code>k</code> folds using stratified CV</li> <li>train <code>n</code> classifiers using the same CV split and get OOF predictions for the whole dataset</li> <li>concatenate OOF predictions from <code>n</code> classifiers and form a new training dataset</li> <li>split the new dataset into the same <code>k</code> folds</li> <li>train a meta-classifier using the same CV split</li> </ul> <p>Having the same split on both stages reduces the risk of overfitting of the stacking algorithm but does not eliminate it. You can read more about different validation strategies for blending/stacking <a href="https://www.kaggle.com/general/18793" target="_blank">here</a>. 
</p> <p>I would suggest you check out these two notebooks (our approach was based on them):</p> <ul> <li><a href="https://www.kaggle.com/graf10a/cldc-image-duplicates-with-dbscan" target="_blank">DBSCAN on embeddings</a></li> <li><a href="https://www.kaggle.com/nakajima/duplicate-train-images" target="_blank">Image hash</a></li> </ul> <p>Thanks <a href="https://www.kaggle.com/mohneesh7" target="_blank">@mohneesh7</a>. We started with 2-3 model ensembles as well, where we would average predictions from 5 folds for each model. But we quickly discovered that having more single-folded models brings better performance due to higher diversity, and shifted towards a larger model library.</p> <p>Thanks <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a> and congrats on the bronze! I know you probably expected more but shakeups can be rough…</p> <h2>Summary</h2> <p>First, I would like to say thanks to my teammate <a href="https://www.kaggle.com/lizzzi1" target="_blank">@lizzzi1</a> for the great work and to Kaggle for organizing this competition. This was a very interesting learning experience. We invested a lot of time into building a comprehensive PyTorch GPU/TPU pipeline and used Neptune to structure our experiments. </p> <p>Our final solution is a stacking ensemble of different CNN and ViT models; see the diagram below. Because of the label noise and the evaluation metric, trusting CV was very important to survive the shakeup. Below I outline the main points of our solution in more detail. <br> <img src="https://i.postimg.cc/d1dcZ6Zv/cassava.png" alt="cassava"></p> <h2>Code</h2> <ul> <li><a href="https://www.kaggle.com/kozodoi/14th-place-solution-stack-them-all" target="_blank">Kaggle notebook reproducing our submission</a></li> <li><a href="https://github.com/kozodoi/Kaggle_Leaf_Disease_Classification" target="_blank">GitHub repo with the training codes and notebooks</a></li> </ul> <h2>Data</h2> <ul> <li>validating on 2020 data using stratified 5-fold CV</li> <li>augmenting training folds with:<ul> <li>2019 labeled data</li> <li>up to 20% of 2019 unlabeled data (pseudo-labeled using the best models)</li></ul></li> <li>removing duplicates from training folds based on hashes and DBSCAN on image embeddings</li> <li>flipping labels or removing up to 10% of wrong images to handle label noise (based on OOF)</li> </ul> <h2>Augmentations</h2> <pre><code>- RandomResizedCrop - RandomGridShuffle - HorizontalFlip, VerticalFlip, Transpose - ShiftScaleRotate - HueSaturationValue - RandomBrightnessContrast - CLAHE - Cutout - Cutmix </code></pre> <ul> <li>TTA: averaging over 4 images with flips and transpose</li> <li>image size: between 384 and 600</li> </ul> <h2>Training</h2> <ul> <li>10 epochs of full training + 2 epochs of fine-tuning the last layers</li> <li>gradient accumulation to use the batch size of 32</li> <li>scheduler: 1-epoch warmup + cosine annealing afterwards</li> <li>loss: Taylor or OHEM loss with label smoothing = 0.2</li> </ul> <h2>Architectures</h2> <p>We used several architectures in our ensemble:</p> <ul> <li><code>swsl_resnext50_32x4d</code> - x14</li> <li><code>tf_efficientnet_b4_ns</code> - x8</li> <li><code>tf_efficientnet_b5_ns</code> - x7</li> <li><code>vit_base_patch32_384</code> - x1</li> <li><code>deit_base_patch16_384</code> - x1</li> <li><code>tf_efficientnet_b6_ns</code> - x1</li> <li><code>tf_efficientnet_b8</code> - x1</li> </ul> <p>ResNext and EfficientNet performed best, but transformers ended up being very important to improve the ensemble. 
Two EfficientNet models also included a custom attention module. Some models were pretrained on the PlantVillage dataset, others started from ImageNet weights.</p> <h2>Ensembling</h2> <p>Having done many experiments allowed us to inject a lot of diversity into the ensemble. Our best single model scored <strong>0.8995 CV.</strong> A simple majority vote with all models got <strong>0.9040 CV</strong> but we wanted to go beyond that and explored stacking.</p> <p>Stacking was done with a LightGBM meta-model trained on the OOFs from the same CV folds. Apart from the 33 models described above, we included 2 "special" EfficientNet models:</p> <ul> <li>pretrained ImageNet classifier predicting ImageNet classes (from 1 to 1000)</li> <li>binary classifier predicting sick/healthy cassava (0/1)</li> </ul> <p>Including the last two models and optimizing the meta-classifier allowed us to reach <strong>0.9067 CV</strong>, which was our best. To fit all 33+2 models into the 9-hour limit, we only used weights from a single fold for each of the base models. The inference took 8 hours and scored <strong>0.9016</strong> on the private LB just a few hours before the deadline. It was tempting to choose a different submission since stacking only got <strong>0.9036</strong> on the public LB (outside of the medal zone), but we trusted our CV and ended up rising to 14th place. </p> <p>Let me know if you have any questions and see you in the next competitions! 😊</p> <p>Thank you <a href="https://www.kaggle.com/skgone123" target="_blank">@skgone123</a>! I hope you will finally get your Master title in the next competition!</p> <p><a href="https://www.kaggle.com/vkehfdl1" target="_blank">@vkehfdl1</a> thanks for the kind words 😊</p> <p>No, each model - except for the 2 "special" ones - comes from the same fold. We used one fold that was kind of "average" in terms of its OOF. Using models from different folds was good for simple ensembles but a bit harmful for stacking. Some base models were not very stable across the folds, and the meta-algorithm performed worse on LB due to this change in model quality. Fixing all models to the single fold proved to work better for us.</p> <p>P.S. You can also check out <a href="https://www.kaggle.com/kozodoi/14th-place-solution-cassava-stacking" target="_blank">this notebook</a> with more details on how the stacking was done :)</p> <p><a href="https://www.kaggle.com/amiiiney" target="_blank">@amiiiney</a> thank you! We observed the highest CV accuracy when using hard labels such that every base model provides 4 features to the stacking model: the highest predicted class, the second-highest predicted class and so on. The meta-classifier predicted 5 class probabilities and we used <code>argmax</code> over them to produce the final prediction.</p> <p>You can read more details about ensembling in <a href="https://www.kaggle.com/kozodoi/14th-place-solution-cassava-stacking" target="_blank">this notebook</a> that reproduces our submission.</p> <p>Thanks! Our best single model is <code>tf_efficientnet_b5_ns</code> with 512x512 images.</p> <p>I am no expert in PyCharm, but maybe this page might help: <a href="https://www.jetbrains.com/help/pycharm/configuring-python-interpreter.html" target="_blank">https://www.jetbrains.com/help/pycharm/configuring-python-interpreter.html</a></p> <p>I would recommend trying to run the notebooks in Google Colab if you have issues on a local machine.</p> <p>Hi <a href="https://www.kaggle.com/dnghnglinh" target="_blank">@dnghnglinh</a>, thanks! 
Could you be a little more specific regarding the exact libraries you are missing? Currently, all helper modules and functions should be present in the <code>functions/</code> folder of my GitHub repo so that they can be imported in the modeling notebooks.</p>
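<p>As a rough illustration of the stacking setup above, here is a hedged sketch of training a LightGBM meta-model on OOF predictions with the same folds (simplified: it feeds class probabilities, whereas per the replies above the authors fed ranked hard labels; <code>oof_preds</code> and <code>y</code> are hypothetical names):</p> <pre><code># Sketch only: `oof_preds` is a list of (n_samples, 5) OOF probability
# arrays from the base models, `y` the true labels.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold

X_meta = np.concatenate(oof_preds, axis=1)  # meta-features from base models
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
meta_oof = np.zeros((len(y), 5))

for tr_idx, va_idx in skf.split(X_meta, y):
    clf = lgb.LGBMClassifier(n_estimators=200)
    clf.fit(X_meta[tr_idx], y[tr_idx])
    meta_oof[va_idx] = clf.predict_proba(X_meta[va_idx])

print("stacked CV accuracy:", (meta_oof.argmax(axis=1) == y).mean())
</code></pre>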
Cassava Leaf Disease Classification
16th place solution with detailed and readable code for beginner
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to <strong>Makerere University AI Lab</strong> and <strong>Kaggle</strong> for organizing this competition.<br> Here is <a href="https://github.com/freedom1810/kaggle-cassava" target="_blank">our solution with code</a>; our approach is based on a <strong>good CV splitting strategy</strong> and better augmentation + loss + optimization to face the <strong>noisy-label problem</strong>.<br> <strong>1. Data Preprocessing + Augmentation:</strong><br> <strong>Dataset: 2019 + 2020 (512 input size).</strong><br> <strong>For training:</strong></p> <ul> <li><strong>Simple augmentation</strong>: RandomResizedCrop, Transpose, HorizontalFlip, VerticalFlip, ShiftScaleRotate, HueSaturationValue, RandomBrightnessContrast, Normalize, CoarseDropout, Cutout (we referred to this <a href="https://www.kaggle.com/khyeh0719/pytorch-efficientnet-baseline-train-amp-aug" target="_blank">kernel</a> (many thanks to <a href="https://www.kaggle.com/khyeh0719" target="_blank">@khyeh0719</a>), with slight changes to the RandomResizedCrop and Normalize params).</li> <li><strong>Advanced augmentation</strong>: We tried mixup, cutmix, fmix, and snapmix. With our model, snapmix worked best with alpha = 5 (we referred to this <a href="https://www.kaggle.com/sachinprabhu/pytorch-resnet50-snapmix-train-pipeline" target="_blank">kernel </a>for snapmix, many thanks to <a href="https://www.kaggle.com/sachinprabhu" target="_blank">@sachinprabhu</a>). </li> </ul> <p><strong>For validation:</strong> We removed RandomResizedCrop, using CenterCrop + Resize with cv2.INTER_AREA instead. Snapmix was also not used. We also applied TTA = 5.</p> <p><strong>2. Training parameters</strong></p> <ul> <li><p><strong>Loss function</strong>: We tried focal cosine loss, bi-tempered loss, and SCE loss (with label smoothing, of course). Bi-tempered worked best with t1 = 0.6, t2 = 1.2, label_smoothing=0.1, num_iters=5 (we referred to this <a href="https://github.com/mlpanda/bi-tempered-loss-pytorch/blob/master/bi_tempered_loss.py" target="_blank">kernel </a>for the bi-tempered loss).</p></li> <li><p><strong>Optimization</strong>: We tried Adam, Ranger, and SAM. SAM worked best on our CV (0.2-0.3% higher than Adam). However, it did not show good results on the Public LB. Finally, we chose Adam.</p></li> <li><p><strong>LR Scheduler</strong>: We used an LR scheduler based on the Yolov5 <a href="https://github.com/ultralytics/yolov5/blob/master/train.py" target="_blank">code </a>and the <a href="https://arxiv.org/pdf/1812.01187.pdf" target="_blank">tricks </a>in this paper, with init LR = 1e-4 -&gt; 5e-4, lrf = 1e-2.</p></li> <li><p><strong>Training strategy</strong>: We unfroze the backbone after 5 epochs. The total number of training epochs was 50.</p></li> <li><p><strong>Backbone model</strong>: All the pre-trained models are noisy-student models. We tried B0 (using B0 as a baseline), B3, B4, and resnext50 using <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">Timm</a>. With limited resources (up to 2x2080Ti and Colab Pro), we were not able to obtain a better result with B4 compared to B3 on the CV. 
Also, it took a lot of time for some experiments with ViT, but they did not show any promise.</p></li> <li><p><strong>Some tricks</strong>: We applied EMA from yolov5 with fp16. Using fp16 helped us to save a lot of time and computing resources.</p></li> </ul> <p><strong>3. Splitting strategy</strong><br> It sounds really simple. We separately divided the 2020 and 2019 data into 5 folds. After each trial, we mixed the best and the worst folds on the CV and re-split them until the CV gap was no larger than 0.4% (this took 4-5 days using Colab Pro with B0). It helped us to boost our result in the very last submissions.</p> <p><strong>4. Experiments</strong><br> Here are some of our most notable experiments.</p> <table> <thead> <tr> <th>Model</th> <th>Note</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>B0</td> <td>SAM + 10 TTA</td> <td>0.8993</td> <td>0.8969</td> <td>0.8989</td> </tr> <tr> <td>B3</td> <td>Adam + 10 TTA</td> <td>0.9011</td> <td>0.9036</td> <td>0.8984</td> </tr> <tr> <td>B3</td> <td>SAM + 10 TTA</td> <td>0.9029</td> <td>0.9027</td> <td>0.8981</td> </tr> <tr> <td>B3</td> <td>SAM + newest splitting + 5 TTA</td> <td>0.9033</td> <td>0.9007</td> <td>0.8992</td> </tr> <tr> <td>B3 + B4 + resnext50</td> <td>5 TTA</td> <td>…</td> <td>0.9045</td> <td>0.9014</td> </tr> <tr> <td>B3 + B4 + B0</td> <td>5 TTA</td> <td>…</td> <td>0.9032</td> <td>0.8990</td> </tr> <tr> <td>B3 + B0 + resnext50</td> <td>5 TTA</td> <td>…</td> <td>0.9030</td> <td>0.9001</td> </tr> </tbody> </table> <p>The inference notebook is not included in our code; the inference TTA is similar to the validation TTA.<br> We are working to make our code as readable and usable as possible. I hope it can help newcomers get familiar with other Kaggle competitions.</p> <p><strong>P/S:</strong> We are a Vietnamese team (me and <a href="https://www.kaggle.com/gdvipbb258" target="_blank">@gdvipbb258</a>), thank you in advance for reading!</p> <p><a href="https://www.kaggle.com/mohit13gidwani" target="_blank">@mohit13gidwani</a> for EMA, please follow this link:<br> <a href="https://github.com/ultralytics/yolov5/issues/1254" target="_blank">https://github.com/ultralytics/yolov5/issues/1254</a><br> For fp16, you only need to search on Google for the keyword "fp16 training pytorch".</p> <p><a href="https://www.kaggle.com/sj161199" target="_blank">@sj161199</a> we froze the backbone and batch norm in the first 5 epochs.<br> Certain final layers were never frozen during the whole training time.</p>
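<p>A rough sketch of the splitting strategy above, simplified to re-seeding the whole split rather than re-mixing only the best/worst folds as the team did (<code>train_and_eval_b0</code> is a hypothetical helper standing in for a quick B0 training run per fold):</p> <pre><code># Hedged sketch: keep trying splits until the CV gap between the best
# and worst fold drops below 0.4%.
from sklearn.model_selection import StratifiedKFold

seed, gap = 0, 1.0
while gap > 0.004:  # 0.4% target gap between best and worst fold
    seed += 1
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    scores = [train_and_eval_b0(tr_idx, va_idx)  # hypothetical baseline run
              for tr_idx, va_idx in skf.split(images, labels)]
    gap = max(scores) - min(scores)
</code></pre>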
Cassava Leaf Disease Classification
18th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to give thanks to all participants who shared their wealth of knowledge and wisdom for this competition through notebook and discussion sharing.</p> <p>My submitted solution consists of a simple average of 3 models, namely: 5-fold gluon_seresnext101_32x4d, tf_efficientnet_b4_ns and seresnext50_32x4d. </p> <p>The following are the final scores of my models:</p> <p>gluon_seresnext101_32x4d<br> CV: 0.9012; Public LB: 0.9011; Private LB: 0.8993</p> <p>tf_efficientnet_b4_ns<br> CV: 0.9013; Public LB: 0.9011; Private LB: 0.8970</p> <p>seresnext50_32x4d<br> CV: 0.9007; Public LB: 0.8978, Private LB: 0.8987</p> <p>Simple Average<br> Public LB: 0.905; Private LB: 0.9013</p> <p>Training Methodology<br> The parameters used during the stages of model training:</p> <p>1st stage<br> image size: 320x320 or 384x384<br> 10 epochs<br> learning rate: 1e-4<br> train set: 2019 + 2020 data<br> validation set: 2020<br> scheduler: GradualWarmupScheduler<br> criterion: TaylorCrossEntropyLoss with label smoothing=0.2<br> optimizer: SAM + Adam<br> augmentation: RandomResizedCrop, Transpose, HorizontalFlip, VerticalFlip, ShiftScaleRotate, Cutout</p> <p>2nd stage<br> image size: 512x512<br> 25-40 epochs<br> learning rate: 1e-4<br> train set: 2020 data<br> validation set: 2020 data<br> scheduler: GradualWarmupScheduler<br> criterion: Combo Loss (see notebook for details)<br> optimizer: SAM + AdamP<br> augmentation: RandomResizedCrop, Transpose, HorizontalFlip, VerticalFlip, ShiftScaleRotate, Cutout + Cutmix</p> <p>Finetune stage<br> 5 epochs<br> learning rate: 1e-5<br> train set: 2020 data<br> validation set: 2020 data<br> scheduler: no scheduler<br> criterion: Combo Loss (see notebook for details)<br> optimizer: SAM + AdamP<br> augmentation: hard albumentations-based augmentations</p> <p>You may check this notebook for reference:<br> <a href="https://www.kaggle.com/projdev/base-notebook-cassava-leaf-disease-classification" target="_blank">https://www.kaggle.com/projdev/base-notebook-cassava-leaf-disease-classification</a></p> <p>I believe that the combination of the SAM + AdamP optimizer provides the model with robustness in terms of prediction generalization and was the key to surviving the competition shake-up.</p> <p>A possible improvement for my model score would be using SAM + SGDP, adding FocalCosineLoss to the Combo loss, and finding the right learning rate, as it converges quickly. Unfortunately, I have no available GPU resources to re-train my model, as the SAM optimizer doubles the training time and I spent too much time finding the right parameters for my model training.</p> <p>Once again, thank you very much.</p>
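<p>Since SAM is credited above with the model's robustness, here is a minimal sketch of the SAM (Sharpness-Aware Minimization) two-step update wrapped around a base optimizer such as Adam/AdamP - my own simplified implementation, not the notebook's code:</p> <pre><code># Minimal SAM sketch, assuming PyTorch: climb to the nearby "sharp" point,
# compute the gradient there, then step the base optimizer from the
# original weights. This is why SAM roughly doubles the training time.
import torch

def sam_step(model, base_opt, loss_fn, x, y, rho=0.05):
    loss_fn(model(x), y).backward()  # 1st pass: gradient at current weights
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = {}
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)        # perturb towards the local loss maximum
            eps[p] = e
    model.zero_grad()
    loss_fn(model(x), y).backward()  # 2nd pass: gradient at perturbed point
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)        # restore the original weights
    base_opt.step()          # Adam/AdamP step using the SAM gradient
    base_opt.zero_grad()
</code></pre>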
Cassava Leaf Disease Classification
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your congrats. We saw that ResNext and ViT together gave quite good results and we had no improvement in our CV scores by mixing in another model (e.g. B3/B5). <br> I can't remember exactly why we only chose 4 folds for the optimisation, but I think we were just impatient and didn't always want to wait for 5 folds. Besides, we didn't have the impression that the weights were less stable. A look at a Bayesian approach would be really interesting: can you recommend a tutorial or notebook for this?</p> <p>Thanks for your reply and sorry for the late answer:</p> <ol> <li><p>I tried class weights with the standard Keras method but that didn't work well. I don't know exactly why that didn't bring any improvement.</p></li> <li><p>Yes, that's correct. We used this patch approach only for the EfficientNet.</p></li> <li><p>Yes, we tried it but only in combination with Saturation and Contrast. This did not bring any improvement, which is why we tried to use as few augmentations as possible in the end.</p></li> <li><p>The problem at the beginning was that I wanted to copy the images to another folder via shell command, which threw some kind of error in the submission.</p></li> <li><p>Entropy</p></li> </ol> <p>Our overall strategy was to test as many models as possible and spend less time on fine-tuning. The goal was to have many diverse models for ensembling rather than some highly tuned ones.</p> <p>In the end, we had tried a variety of different architectures (e.g., all EfficientNet architectures, Resnet, ResNext, Xception, ViT, DeiT, Inception and MobileNet) while working with different pre-trained weights (trained e.g. on Imagenet, NoisyStudent, Plantvillage, iNaturalist…), some of which were available on Tensorflow Hub. </p> <p><b>Our winning submission was an ensemble of four different models.</b><br> <img src="https://i.ibb.co/fCdNjTY/1stplace.png" alt="Final Model"></p> <p>The final score on the public leaderboard was <b>91.36%</b> and <b>91.32%</b> on the private leaderboard. We opted to turn in this combination as it achieved a higher CV score than other combinations (which sometimes scored slightly better on the public leaderboard). We tested some of the models separately on the leaderboard (public/private): <b>B4: 89.4%/89.5% , MobileNet: 89.5%/89.4%, ViT: ~89.0%/88.8%</b> </p> <p>The others were only evaluated using cross-validation.</p> <p>Overall, we can conclude that the key to victory was the use of CropNet from Tensorflow Hub, as it brought a lot of diversity to our ensemble. 
Although as a standalone model it did not perform better on the leaderboard than the other models, the ensembles that used it brought a significant boost on the leaderboard.</p> <p>At this point I would like to thank hengck23, whose <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/199276" target="_blank">discussion post</a> brought this model to our attention (as well as other pre-trained models for image classification available on Tensorflow Hub).</p> <p>Furthermore, we would like to thank all the participants who have helped us learn a lot in this contest through their contributions here on the board and their published notebooks. Finally, we would also like to thank the Competition Host, who made this exciting competition possible by releasing the data.</p> <p>You can find our inference code in this notebook: <a href="https://www.kaggle.com/jannish/final-version-inference">Inference Notebook</a></p> <p>And you can find the corresponding training code in these notebooks:</p> <ul> <li><a href="https://www.kaggle.com/hiarsl/cassava-leaf-disease-resnext50">ResNext50_32x4d (GPU Training)</a></li> <li><a href="https://www.kaggle.com/sebastiangnther/cassava-leaf-disease-vit-tpu-training">ViT (TPU Training)</a></li> <li><a href="https://www.kaggle.com/jannish/cassava-leaf-disease-efficientnetb4-tpu">EfficientNet B4 (TPU Training)</a></li> </ul> <p>Detailed information about the configuration &amp; fine-tuning of the models we used:</p> <p>We used a <b>ResNeXt</b> model of the structure "resnext50_32x4d" with the following configurations:</p> <ul> <li>Image size of (512,512)</li> <li>CrossEntropyLoss with default parameters</li> <li>Learning rate of 1e-4 with a "ReduceLROnPlateau" scheduler based on average validation loss (mode='min', factor=0.2, patience=5, eps=1e-6)</li> <li>Train augmentations (from the Albumentations Python library): RandomResizedCrop, Transpose, HorizontalFlip, VerticalFlip, ShiftScaleRotate, Normalize</li> <li>Validation augmentations (from the Albumentations Python library): Resize, Normalize</li> <li>5-fold CV with 15 epochs (after the 15 training epochs, we always chose the model with the best validation accuracy) (same data partitioning as for the other trained models)</li> <li>For inference we used the same augmentations as for validation (i.e., Resize, Normalize)</li> </ul> <p>We used the <b>Vision Transformer architecture</b> with ImageNet weights (ViT-B/16)</p> <ul> <li>Custom top with Linear layer; image size of (384,384)</li> <li>Bi-Tempered Logistic Loss (t1 = 0.8, t2 = 1.4) and a label smoothing factor of 0.06</li> <li>We chose a learning rate with a Cosine annealing warm restarts scheduler (LR = 1e-4 / 7 [7: Warm up factor], T0= 10, Tmult= 1, eta_min=1e-4, last_epoch=-1). 
Batch accumulation for backprop with an effectively larger batch size</li> <li>Train augmentations (RandomResizedCrop, Transpose, horizontal and vertical flip, ShiftScaleRotate, HueSaturationValue, RandomBrightnessContrast, Normalization, CoarseDropout, Cutout)</li> <li>Validation augmentations (horizontal and vertical flip, CenterCrop, Resize, Normalization)</li> <li>5-fold CV with 10 epochs; for each fold we took the best model (based on the validation accuracy) </li> <li>For inference we used the following augmentations: CenterCrop, Resize, Normalization</li> </ul> <p>We tried different EfficientNet architectures but finally only used a <b>B4 with NoisyStudent weights</b>:</p> <ul> <li>Drop connect rate 0.4, custom top with global average pooling and dropout layer (0.5)</li> <li>Sigmoid Focal Loss with Label Smoothing (Gamma=2.0, alpha=0.25 and label smoothing factor 0.1)</li> <li>Learning rate with warmup and cosine decay scheduler (ranging from 1e-6 to a maximum of 0.0002 and back to 3.17e-6)</li> <li>Augmentations (Flip, Transpose, Rotate, Saturation, Contrast and Brightness and some random cropping)</li> <li>Adapting the normalization layer with the global mean and deviation of the 2020 Cassava dataset</li> <li>5-fold CV with 20 epochs with early stopping and a callback for restoring the weights of the best epoch</li> <li>The final model was trained for 14 epochs on the whole competition dataset</li> <li>For inference we used simple test time augmentations (Flip, Rotate, Transpose). To do so, we cropped 4 overlapping patches of size 512x512px from the .jpg images (800x600px) and applied 2 augmentations to each patch. We retained two additional center-cropped patches of the image to which no augmentations were applied. To get an overall prediction, we took the average of all these image tiles. </li> </ul> <p>Finally, our ensemble included a pretrained <b>CropNet (MobileNetv3)</b> model from Tensorflow Hub:</p> <ul> <li>We used a pretrained model from TensorFlow Hub called <a href="https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2">CropNet</a> which was specifically trained to detect Cassava leaf diseases </li> <li>The CropNet model is based on the MobileNetV3 architecture. We decided not to do any additional fine-tuning of that model.</li> <li>As stated in the description, the images must be rescaled to 224x224 pixels, which is pretty small. We achieved good results by not just resizing our 512x512 training images but center-cropping them first.</li> <li>As the notebooks had to be submitted without internet access, it was necessary to cache the model before including it. You can find more information on this on the official <a href="https://www.tensorflow.org/hub/caching">TF Hub website</a> or alternatively in this post on <a href="https://xianbao-qian.medium.com/how-to-run-tf-hub-locally-without-internet-connection-4506b850a915">Medium</a>. </li> </ul> <p>For the ensembling, we experimented with different methods and found that in our case a stacked-mean approach worked best. For this purpose, the class probabilities returned by the models were averaged on several levels and finally the class with the highest probability was returned. </p> <p><b>Our final submission first averaged the probabilities of the predicted classes of ViT and ResNext. This averaged probability vector was then merged with the predicted probabilities of EfficientnetB4 and CropNet in the second stage. For this purpose, the values were simply summed up. 
</b></p> <p>Another solution which also generated good results on the leaderboard was finding the weights for a weighted mean of the models via an optimization. You can find the code for that in our published notebooks. Generally, we were surprised how stable the solutions with optimized weights were. It turned out that they only had small differences (often +/-0.1%) between our CV scores and the leaderboard score.</p> <p>One thing which didn't work out in our use case was an ensemble approach with an additional classifier (Gradient Boosted Trees) stacked on top of our models. We did several experiments using additional features as well, e.g., the entropy of the model's prediction; however, we were not able to build a solution which generalized well enough. </p> <p>Thanks for reading. </p> <p>Thanks for your comment. We used (random) TTA only for the EfficientNetB4, and only light augmentations like flipping and transposing. The score of CropNet on the private leaderboard was 89.42 using center crop and resizing; almost all our single models scored in that range (i.e., B4 ~89.5 and ViT ~88.8). As far as I remember, we didn't upload this specific ResNext as a single model due to time.</p> <p>Thanks :) We uploaded the CropNet once with center crop 0.9 and resizing alone, and it came up with a score of 89.42 on the private leaderboard at the end</p> <p>Thank you Heroseo. We are glad you found the code helpful! Yes, that is the cleaned-up inference code, and except for B4 we didn't use random augmentations. Maybe that also made the score more stable on the private leaderboard. To be honest, I had never heard of the "distillation method" before. I'll have to read up on that </p> <p>Thank you. We calculated the CV scores for many combinations using the out-of-fold predictions of the different models. We then uploaded the most promising ones. You can find some info on that in the ensembling <a href="https://www.kaggle.com/jannish/cassava-leaf-disease-finding-final-ensembles" target="_blank">notebook</a></p> <p>Thanks. Yes, using CropNet in the ensemble helped us to boost our final score, even though considered individually it was not stronger than the other models.</p> <p>Thanks again for sharing your knowledge here in the discussion board!</p>
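<p>To make the two-stage stacked-mean averaging described above concrete, here is a minimal NumPy sketch; the probability arrays are random stand-ins for the four models' outputs, and the class count is the competition's five labels:</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
n_images, n_classes = 8, 5
# Stand-ins for each model's predicted class probabilities.
vit, resnext, effnet, cropnet = (rng.dirichlet(np.ones(n_classes), size=n_images)
                                 for _ in range(4))

# Stage 1: average the ViT and ResNeXt probabilities.
stage1 = (vit + resnext) / 2

# Stage 2: merge with EfficientNet B4 and CropNet by simple summation;
# argmax is invariant to the overall scale, so no renormalization is needed.
stage2 = stage1 + effnet + cropnet
pred = stage2.argmax(axis=1)
print(pred)
</code></pre>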
Cassava Leaf Disease Classification
25th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you :) </p> <p>Sorry for the late writeup 😅</p> <p>First of all, I would like to thank Kaggle for organizing this competition, and the great Kaggle community. Special thanks to <a href="https://www.kaggle.com/khyeh0719" target="_blank">@khyeh0719</a>, <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a>, and many other great notebooks that I didn't mention.</p> <p>This is my first competition and it's been a wild ride of learning. </p> <p><strong>Summary</strong> <br> My two final submissions are stacking ensembles of EfficientNet B4 and ViT models (1st submission) and of ViT models only (2nd submission). My solution is relatively simple compared to other competitors'. </p> <p><strong>Dataset</strong><br> EfficientNet only used 2020 data. I tried to use both, but it didn't score well. I think it has something to do with the 2019 image sizes, as I had to resize the image first in augmentation.<br> For ViT, DeiT, and R50+ViT I used 2020 + 2019 (2019 data only for classes 0, 1, 2, 4). Class 3 already has many images (6x more than the other classes).</p> <p><strong>Architectures</strong></p> <table> <thead> <tr> <th>Model</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>Efficientnetb4</td> <td>0.9023</td> <td>0.8977</td> </tr> <tr> <td>Deit_base_patch32_38</td> <td>0.9018</td> <td>0.8980</td> </tr> <tr> <td>Vit_base_patch32_38</td> <td>0.9061</td> <td>0.8978</td> </tr> <tr> <td>R50+ViT-B</td> <td>0.9009</td> <td>0.8975</td> </tr> </tbody> </table> <p><strong>Augmentation</strong><br> I spent a few days trying to find the best augmentations. This is my final augmentation strategy:</p> <ul> <li>RandomResizedCrop</li> <li>Transpose</li> <li>HorizontalFlip</li> <li>VerticalFlip</li> <li>ColorJitter (brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2, always_apply=False, p=0.5),</li> <li>OneOf([MotionBlur(blur_limit=3), MedianBlur(blur_limit=3), GaussianBlur(blur_limit=3),], p=0.5,),</li> <li>Normalize</li> <li>CoarseDropout</li> <li>Cutout</li> </ul> <p>I tried CutMix, MixUp, FMix and SnapMix, but they didn't work for my models. I think CoarseDropout and Cutout are already quite heavy augmentations.</p> <p><strong>Training parameters</strong><br> I tried all the loss functions available in <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a>'s shared notebook, as well as some basic mixes of loss functions (active and passive). Label smoothing = 0.3 gave me the best results.<br> Scheduler: CosineAnnealingWarmRestarts<br> Optimizer: AdamP<br> Trained for 10 epochs + 4 epochs of fine-tuning. </p> <p><strong>Ensemble</strong> <br> I tried different combinations of my models, ranging from 2-4 models. Only ViT and EfficientNet had good synergy. <br> My two submissions are:</p> <ol> <li>ViT base 16 (10 folds) with 2 learning rates, 1e-2 and 1e-6 = Public LB 0.906, Private 0.8984. Many kagglers reported that ViT achieved a high public LB but low CV, so I'm not that surprised.</li> <li>EfficientNet (0.4) and ViT (0.6): Public LB = 0.9068, Private LB = 0.9011. 
</li> </ol> <p>My best private score is 0.9017, which is a combination of EfficientNet (0.4) and ViT (0.6), but with a ViT model trained on the 2020 dataset only.</p> <p>I still have a long way to go in my learning, so any suggestions are welcome. </p> <p>Thank you for reading. </p> <p>PS. Sorry for my bad grammar.</p>
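<p>For readers who want to reproduce the augmentation strategy listed above, here is a minimal Albumentations sketch; parameters not quoted in the write-up (crop size, CoarseDropout settings) are illustrative, and it assumes a pre-1.4 Albumentations API where RandomResizedCrop takes height and width (CoarseDropout also covers the Cutout-style occlusion):</p> <pre><code>import albumentations as A
from albumentations.pytorch import ToTensorV2

train_tfms = A.Compose([
    A.RandomResizedCrop(384, 384),
    A.Transpose(p=0.5),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2, p=0.5),
    A.OneOf([A.MotionBlur(blur_limit=3),
             A.MedianBlur(blur_limit=3),
             A.GaussianBlur(blur_limit=3)], p=0.5),
    A.Normalize(),
    A.CoarseDropout(max_holes=8, max_height=32, max_width=32, p=0.5),
    ToTensorV2(),
])
</code></pre>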
Cassava Leaf Disease Classification
2nd Place - Summary of Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I joined the competition late, played around with some ideas, couldn't get a good model going, but I learned a lot though. As it neared the end of the competition, I figured I'd submit a simple fine-tuned model for the sake of having a submission.</p> <p>I looked up cassava pre-trained models and found <a href="url" target="_blank">https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2</a>.<br> To setup the model on the TPU, I referred to tensorflow's tpu setup guides. </p> <p>Training: <a href="url" target="_blank">https://www.kaggle.com/devonstanfield/cassava-train</a><br> My image preprocessing was minor, just a quick resize and rescale. Most of the functions in the data pipeline were to handle TFRecords. I also extended the training labels from 5 to 6 as to match the cassava model's output which included a "background" label.<br> During training, I used "EarlyStopping" and "ReduceLROnPlateau". <br> The callbacks would cutoff training around ~30 epochs.<br> I saved the model weights: <a href="url" target="_blank">https://www.kaggle.com/devonstanfield/cassava-model</a> to a dataset, since we weren't allowed to perform inference on the TPU. I also saved the cached version of the tfhub layer to a dataset: <a href="url" target="_blank">https://www.kaggle.com/devonstanfield/cassava-layer</a> since we weren't allowed the internet during inference.</p> <p>Inference: <a href="url" target="_blank">https://www.kaggle.com/devonstanfield/cassava-infer</a><br> Before inference, I loaded the previously cached version of the model and the saved weights. The preprocessing pipeline was the same as training, mostly dealing with TFRecords. After passing the data through the model, I saved the results.</p> <p>Overall, I am surprised I placed 2nd. My approach was pretty much by-the-book basic. This was demonstrated by my model placing 660th in the public leaderboard with a score of 0.9025, landing right in the middle of a bunch of other people. I assumed they did the same thing I did. Go figure :)</p> <p>For training, I appended a zero to the training set labels, extending it from 5 to 6 in order to get around this issue. I assumed it wouldn't be a problem as every image in the dataset they gave would have a cassava plant and would not be "background". During the fine-tuning, the model should learn to ignore the 6th label.</p> <p>Here is the training code if you were still looking for it: <a href="https://www.kaggle.com/devonstanfield/cassava-train" target="_blank">https://www.kaggle.com/devonstanfield/cassava-train</a>.<br> Note that my code isn't very clean, I figured I shouldn't change it for posterity. Some of the code/functions are unused.</p> <p>I looked over your colab notebook as well. I had the same problem you are having with low accuracy. I tried resnet50 at the start and didn't get good results. Looking at the TFHub model, they based their design around MobileNetV2, so if you're looking to improve upon it further, that would be a good place to start. </p>
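<p>As a small illustration of the label extension described above (with toy labels, not the actual training data): because the competition labels are 0-4, one-hot encoding directly to depth 6 leaves the CropNet "background" column at zero for every image, which is the same as appending a zero.</p> <pre><code>import numpy as np
import tensorflow as tf

labels = np.array([0, 3, 4, 1])  # toy stand-ins for the 5-class targets

# Column 5 ("background") stays zero for every training image,
# so during fine-tuning the model learns to ignore it.
onehot6 = tf.one_hot(labels, depth=6)
print(onehot6.numpy())
</code></pre>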
Cassava Leaf Disease Classification
30th solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your comment.</p> <p>EfficientNetB5 takes about 30 minutes per epoch,<br> so the total training time is about 25 hours (about 5 hours per fold).<br> I did it on AWS.</p> <h2>Acknowledgements</h2> <p>Thanks to Kaggle and the hosts for holding this competition.<br> Thanks Kagglers!! This is my first silver medal :) </p> <h2>Result</h2> <ul> <li>30th</li> <li>Private Score 0.9010</li> <li>Public Score 0.9018</li> </ul> <h2>Summary</h2> <p>My final submission is an ensemble of EfficientNetB5 with Noisy Student and ResNeXt50 (32x4d).</p> <h2>Datasets</h2> <p>This competition's datasets only (I didn't use the 2019 datasets).</p> <h2>Preprocessing</h2> <p>Image size 512. Augmentations:</p> <ul> <li>ResizeCrop</li> <li>Horizontal/VerticalFlip</li> <li>OneOf(RandomBrightness, RandomContrast)</li> <li>OneOf(RandomBlur, MedianBlur, GaussianBlur)</li> <li>ShiftScaleRotate</li> <li>Normalize</li> </ul> <h2>Training</h2> <ul> <li>Optimizer: Adam</li> <li>Loss function: CrossEntropyLoss</li> </ul> <h2>Base Model CV</h2> <p>CV is StratifiedKFold with 5 folds.</p> <ul> <li>EfficientNetB5<ul> <li>CV average 0.8898</li> <li>Public 0.894</li> <li>Private 0.897</li></ul></li> <li>ResNeXt50 (32x4d)<ul> <li>CV average 0.8898</li> <li>Public 0.896</li> <li>Private 0.894</li></ul></li> </ul> <p><a href="https://github.com/kazukim10/kaggle_CasavaLeafDeseaseClassfication" target="_blank">https://github.com/kazukim10/kaggle_CasavaLeafDeseaseClassfication</a></p> <p>Thank you.<br> I referred to your notebook:<br> <a href="https://www.kaggle.com/piantic/no-tta-cassava-resnext50-32x4d-inference-lb0-903" target="_blank">https://www.kaggle.com/piantic/no-tta-cassava-resnext50-32x4d-inference-lb0-903</a></p> <p>Thank you for your comment.<br> I hope I can help you even a little.</p>
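<p>A minimal sketch of the StratifiedKFold CV above, with a toy DataFrame standing in for the competition's train.csv; each fold preserves the imbalanced class distribution:</p> <pre><code>import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Toy stand-in for train.csv (image_id, label), as in this competition.
df = pd.DataFrame({"image_id": [f"{i}.jpg" for i in range(100)],
                   "label": [i % 5 for i in range(100)]})

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
df["fold"] = -1
for fold, (_, val_idx) in enumerate(skf.split(df, df["label"])):
    df.loc[val_idx, "fold"] = fold

# Each fold keeps the overall class distribution.
print(df.groupby("fold")["label"].value_counts(normalize=True))
</code></pre>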
Cassava Leaf Disease Classification
35th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to my great teammates for the hard work!</p> <p>In our solution we trained a B0, tried several approaches, and measured the CV improvement. </p> <h2>Filter Noisy Label</h2> <p>We compared the best model's out-of-fold results with the labels and removed those labels whose confidence difference with the ground truth was large. We used this filtered noisy-label dataset for training, but we validated on the original dataset.</p> <h2>Models</h2> <p>We had 8 categories during our training. </p> <p><strong>Model Category 1</strong></p> <ul> <li>Trained on the full training data</li> <li>Filter Noisy Label</li> <li>No external data</li> <li>No Cutmix</li> <li>Taylor + Cross Entropy Loss</li> <li>Image size 666×500 -&gt; cropped to 500×500</li> <li>Noisy Student</li> </ul> <p><strong>Model Category 2</strong></p> <ul> <li>The difference from Category 1:<ul> <li><em>Image size 600×800 -&gt; cropped to 600×600</em></li> <li>Pretrained weights: ImageNet</li></ul></li> </ul> <p><strong>Model Category 3</strong></p> <ul> <li>The difference from Category 1:<ul> <li>Using cutmix</li> <li>Pretrained weights: ImageNet</li></ul></li> </ul> <p><strong>Model Category 4</strong></p> <ul> <li>The difference from Category 2:<ul> <li>Using cutmix</li> <li>Pretrained weights: ImageNet</li></ul></li> </ul> <p>With the 4 ensembled categories, we generated pseudo labels.</p> <p><strong>Category 5</strong>: same as Category 1 + external data (2019 dataset) + pseudo labels </p> <p><strong>Category 6</strong>: same as Category 2 + external data (2019 dataset) + pseudo labels </p> <p><strong>Category 7</strong>: same as Category 3 + external data (2019 dataset) + pseudo labels </p> <p><strong>Category 8</strong>: same as Category 4 + external data (2019 dataset) + pseudo labels</p> <p>We trained most of the EfficientNet models (B0 to B4), and a ResNext50, for each category. We calculated the inference time for each model, and the accuracy for 8 TTA = ['rotate0', 'rotate90', 'rotate180','rotate270', 'rotate0_lr', 'rotate90_lr', 'rotate180_lr','rotate270_lr']. Based on the models' out-of-fold results per TTA, we optimized their soft-voting weights and TTA counts with Bayesian optimization. </p> <p>In the end, we added a ViT model from a public notebook to the ensemble. </p> <p><a href="https://www.kaggle.com/bessenyeiszilrd/final-submission-private-lb-0-901" target="_blank">https://www.kaggle.com/bessenyeiszilrd/final-submission-private-lb-0-901</a></p>
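<p>A minimal sketch of the noisy-label filtering idea described above, using random stand-ins for the best model's out-of-fold probabilities; the 0.7 gap threshold is illustrative, not the team's actual value:</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
oof_probs = rng.dirichlet(np.ones(5), size=1000)  # stand-in OOF predictions
labels = rng.integers(0, 5, size=1000)            # provided (noisy) labels

# Confidence assigned to the given label vs. the model's top prediction;
# drop samples where the gap is large, i.e. the model strongly disagrees.
gt_conf = oof_probs[np.arange(len(labels)), labels]
top_conf = oof_probs.max(axis=1)
keep = (top_conf - gt_conf) &lt; 0.7

print(f"kept {keep.sum()} of {len(keep)} samples for training")
# Train on the filtered subset, but validate on the original labels.
</code></pre>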
Cassava Leaf Disease Classification
375th place private LB (278th place public)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>So, after the latest updates, I moved up a few places on the LB and slipped into the bronze medals. Within two days I've gone from zero medals to my second bronze medal. 😃 On this competition I worked on my own, while on the <a href="https://www.kaggle.com/c/rfcx-species-audio-detection" target="_blank">Rainforest Connection Species Audio Detection competition</a> I had the pleasure to work with a great team. So, I thought I'd write up some of the more interesting things I tried on this competition.</p> <p>My best selected solution was a weighted power averaging ensemble (i.e. squaring the predictions of each model and then averaging, followed by picking the class with the highest score) of</p> <ul> <li><strong>EfficientNet-B5</strong> (i.e. not noisy student; somehow my noisy student version seemed worse on CV - I failed to figure out why) on image size 456 by 456 with TTA</li> <li><strong>EfficientNet-B4-noisy student</strong> on image size 512 by 512 with TTA</li> <li><strong>ResNeXt-50 (32x4d)</strong> with image size 512 by 512</li> <li><strong>Vision Transformer</strong> (<code>vit_base_patch16_224</code>) image size 224 by 224</li> </ul> <p>I did two different test-time augmentation approaches for the two EfficientNets and no TTA for the others (yes, yes, I clearly have an EfficientNet bias).</p> <p><strong>Things that worked</strong>:</p> <ul> <li><strong>label smoothing cross-entropy</strong> with epsilon of 0.1 to 0.2 or so as a loss function (not surprising, given that it was originally proposed for noisy labels)</li> <li><strong>focal loss</strong> (not in my selected solutions, but in CV it was very similar to label smoothing cross-entropy)</li> <li><strong>Ensembling</strong> (no surprise there, either)<ul> <li><strong>Power averaging</strong> was behind my best selected submission. This looked somewhat promising on CV and not that bad on the public LB (while I selected a weighted arithmetic mean based on LB, which did worse - in part, I also just wanted to select a second submission that was not too similar to the other one). </li> <li>I got some even better private LB numbers with <strong>rank averaging</strong> (in silver territory, but I did not select it), but admittedly my very best private LB score was a simple weighted arithmetic mean of two of the models. See below for what did not work so well (i.e. selecting the best submission)...</li> <li>Why did I think power and rank averaging would work? 
I kind of figured that accuracy is in some sense a rank based metric (you only care about the first rank, of course) and those methods had just been helpful in the <a href="https://www.kaggle.com/c/rfcx-species-audio-detection" target="_blank">Rainforest Connection Species Audio Detection competition</a> that used mean average precision (a very rank based metric).</li></ul></li> <li>Using <strong>diverse models</strong> in my ensemble, even if it does hurt on the public LB (I hedged my bets a bit and tried to bet on a half-way house that probably relied on the public LB too much, which cost me a bit in ranks).<ul> <li>The idea was to have diverse image sizes as inputs, somewhat diverse models and diverse augmentations during training (and TTA).</li> <li>Nevertheless, it seems the two EfficientNets were different enough that having both helped. I probably would not have added the EfficientNet-B4-NS, if it had not been for the public LB score of <a href="https://www.kaggle.com/japandata509/ensemble-resnext50-32x4d-efficientnet-0-903" target="_blank">this notebook</a>. That notebook also made me stop trying other ResNet-types and just go with ResNeXt-50 (32x4d) (who knows whether that was the right decision).</li> <li>My choice of architectures was perhaps a bit limited; I mostly picked these architectures based on what I could get as pre-trained models via <code>timm</code> and perhaps paid too much attention to what others did. </li></ul></li> <li><strong>Gradient accumulation</strong> to use bigger batch sizes (72) with EfficientNet-B5 (necessary when using a GPU on Kaggle or my own 1080-Ti). It appeared to help a tiny bit in CV to use a batch size &gt;32 despite the whole "<a href="https://twitter.com/ylecun/status/989610208497360896?lang=en" target="_blank">Friends don't let friends use minibatches larger than 32</a>" <a href="https://arxiv.org/abs/1804.07612" target="_blank">thing</a>.</li> <li><strong>flat learning rate before cosine annealing learning rate schedule</strong> that I learnt about from <a href="https://www.kaggle.com/abhishek/tez-faster-and-easier-training-for-leaf-detection" target="_blank">Abhishek Thakur's notebook</a></li> <li>Weight decay - whenever I tried it, it helped.</li> <li><code>fastai</code> + <code>timm</code> as e.g. described in this <a href="https://www.kaggle.com/muellerzr/recreating-abhishek-s-tez-with-fastai" target="_blank">great notebook</a> by <a href="https://www.kaggle.com/muellerzr" target="_blank">@muellerzr</a> (thanks, again, for that nice starter kit) and PyTorch. You only appreciate how smoothly that works with CPU and GPU when you try to use the TPU with PyTorch, which kind of worked, but was a bit painful.</li> </ul> <p><strong>Things that did not work for me (mostly my fault, some of these clearly worked out better for others):</strong></p> <ul> <li>Selecting the best submissions - I did 27 submissions before the deadline, and in terms of the private LB I selected the 17th and 20th best of these.<ul> <li>I suspect I was not sufficiently disciplined with running everything I tried in the last few days through a proper CV (yes, yes, I know, that's a terrible sin...) and was too influenced by the public LB.</li> <li>I wonder whether those that did best were super disciplined and perhaps even used repeated k-fold?! 
</li> <li>To some extent the ordering of my various submissions may have had a strong element of luck (accuracy is such a low-information-content metric, and I did not manage to get around that), and dropping 97 places is not that bad, but I aspire to do better next time.</li></ul></li> <li>Getting value out of the 2019 competition data - I probably just invested insufficient time. It just always made my CV worse even after de-duplication, so I gave up.</li> <li>Stochastic weight averaging: it kept reliably improving my loss function in CV, but not accuracy (it seemed I got the best accuracy when very slightly overfitting in terms of (label-smoothing) cross-entropy - which I understand is sort of expected). I wonder whether I should nevertheless have put a model with SWA in my ensemble.</li> <li>Really large models: I initially thought that the biggest EfficientNet I could train with TPU would be best; then I scaled my "ambitions" back to the largest one I could comfortably run some cross-validation experiments for (B5)...</li> <li>Doing two competitions that ended within 24 hours of each other. Yes, I got two medals, but I am left feeling I would have liked to rather do one of them more deeply and do a better job / contribute more to the one where I teamed up.</li> </ul>
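<p>For clarity, a small NumPy/SciPy sketch of the power and rank averaging discussed above, with random stand-ins for the per-model class probabilities (the actual ensemble weights are omitted):</p> <pre><code>import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n_models, n_images, n_classes = 4, 100, 5
probs = rng.dirichlet(np.ones(n_classes), size=(n_models, n_images))

# Power averaging: square each model's probabilities before averaging,
# which emphasizes confident predictions relative to a plain mean.
pred_power = (probs ** 2).mean(axis=0).argmax(axis=1)

# Rank averaging: replace each model's per-class scores by their ranks
# across the test set, then average the ranks over models.
ranks = np.stack([
    np.stack([rankdata(probs[m, :, c]) for c in range(n_classes)], axis=1)
    for m in range(n_models)
])
pred_rank = ranks.mean(axis=0).argmax(axis=1)
</code></pre>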
Cassava Leaf Disease Classification
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!!<br> I plotted the figure after the competition was over (open private scores). </p> <h2>Acknowledgements</h2> <p>Thanks to Kaggle and the hosts for holding this competition.<br> This is my first gold medal and I finally became a Competition Master :)</p> <h1>Summary</h1> <p>[update 2021.03.02]<br> I made a mistake: vit_base_patch16_384's final layer is not multi-drop but a simple linear layer.<br> I found the bug during the verification process.</p> <p>My final submission is an ensemble of three ViT models, summarized below.<br> <a href="https://postimg.cc/sQddZFXM" target="_blank"><img src="https://i.postimg.cc/HsjCzjjw/summary.png" alt="summary.png"></a></p> <h1>Model</h1> <h3>vit_base_patch16_384</h3> <ul> <li>img_size = 384 x 384</li> <li>5x TTA </li> <li>Public : 0.9059</li> <li>Private : 0.9028</li> </ul> <p>This is my best single model.</p> <h3>vit_base_patch16_224 - A</h3> <ul> <li>img_size = 448 x 448</li> <li>5x TTA </li> <li>weight calculation pattern A</li> <li>Public : 0.9030</li> <li>Private : 0.8990</li> </ul> <p>To adapt vit_base_patch16_224 (whose expected image size is 224 x 224) to 448 x 448 images: after augmentation, the image is divided into four parts and each part is fed into the model. The resulting features are then combined by a weighted average, using weights calculated by an attention layer, and the prediction is finally output by a Multi-Dropout Linear head.</p> <h3>vit_base_patch16_224 - B</h3> <ul> <li>img_size = 448 x 448</li> <li>5x TTA </li> <li>weight calculation pattern B</li> <li>label smoothing, alpha=0.01</li> <li>Public : 0.9034</li> <li>Private : 0.8952</li> </ul> <h3>Weighted Averaging</h3> <ul> <li>Public : 0.9075</li> <li>Private : 0.9028</li> </ul> <p>I tried a bunch of pretrained models, but the ViT models worked the best on the public LB. <br> The bigger the image size, the better the CV score, but I suspected this was overfitting. 
So I dropped EfficientNet, SE-ResNeXt, etc. with large image sizes in the early stages.</p> <h2>Some Settings</h2> <ul> <li>5-fold StratifiedKFold</li> <li>Using 2020 &amp; 2019 data</li> </ul> <h3>Augmentation</h3> <p>I tried several types of augmentations but finally adopted a simple one,<br> for the same reason I did not choose a large image size: overfitting.</p> <pre><code>if aug_ver == "base":
    return Compose([
        RandomResizedCrop(img_size, img_size),
        Transpose(p=0.5),
        HorizontalFlip(p=0.5),
        VerticalFlip(p=0.5),
        ShiftScaleRotate(p=0.5),
        Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
        ToTensorV2(),
    ])
</code></pre> <h3>LR Scheduler</h3> <ul> <li>LambdaLR</li> </ul> <pre><code>self.scheduler = LambdaLR(
    self.optimizer,
    lr_lambda=lambda epoch: 1.0 / (1.0 + epoch)  # harmonic decay per epoch
)
</code></pre> <p><img src="https://i.postimg.cc/Twv8mmCW/2021-02-21-9-29-45.png" alt="lr.png"></p> <h3>Scores</h3> <p><img src="https://i.postimg.cc/hvhFzmdT/2021-02-21-23-00-39.png" alt="scores.png"></p> <p>Training code:<br> <a href="https://github.com/TomYanabe/Cassava-Leaf-Disease-Classification" target="_blank">https://github.com/TomYanabe/Cassava-Leaf-Disease-Classification</a></p> <p>Thank you for reading :)</p> <p>Thank you :)<br> I'm looking forward to the 1st place solution too.</p> <p>Thank you :)<br> DeiT was not bad, but couldn't beat ViT.<br> I tried DeiT late in the competition, so I didn't have enough time for tuning.</p> <p>What normalize parameters (mean, std) did you use?<br> I tuned the Multi-Dropout Linear, augmentation, and LR scheduler carefully.</p> <p>This is my inference notebook:<br> <a href="https://www.kaggle.com/tomyanabe/cassava-leaf-disease-classification?scriptVersionId=54882926" target="_blank">https://www.kaggle.com/tomyanabe/cassava-leaf-disease-classification?scriptVersionId=54882926</a></p> <p>Thank you!!<br> Nice work to you too; I think you'll become a Competition Master soon :)</p> <p>Final submissions are below:</p> <ul> <li><p>3 ViT models</p> <ul> <li>vit_base_patch16_384</li> <li>vit_base_patch16_224 with 448x448 images and attention pattern A</li> <li>vit_base_patch16_224 with 448x448 images, attention pattern B, and label smoothing (alpha=0.01)</li></ul></li> <li><p>2 ViT models</p> <ul> <li>vit_base_patch16_384 with label smoothing (alpha=0.1)</li> <li>vit_base_patch16_224 with 448x448 images and label smoothing (alpha=0.01)</li></ul></li> </ul> <p>Thank you!!<br> I believed ViT to be the best in this competition :)</p> <p>Thank you!!<br> My other selected submission was an ensemble of ViT only, with label smoothing (alpha=0.2) and no other model types, but it was not good on the private LB.<br> 2nd submission<br> Public : 0.9068<br> Private: 0.9005</p> <ul> <li>How I use the Multi-Dropout Linear:</li> </ul> <pre><code># when setting up the model: five parallel dropout layers
for i in range(5):
    self.head_drops.append(nn.Dropout(0.5))

# when training: average the head output over the dropout masks
for i, layer in enumerate(self.head_drops):
    if i == 0:
        output = self.head(layer(h))
    else:
        output += self.head(layer(h))
output /= len(self.head_drops)
</code></pre> <ul> <li>What kind of attention I use:</li> </ul> <pre><code># pattern A
self.att_layer = nn.Linear(n_features, 1)

# pattern B
self.att_layer = nn.Sequential(
    nn.Linear(n_features, 256),
    nn.Tanh(),
    nn.Linear(256, 1),
)
</code></pre> <p>Using self.att_layer and a softmax function, I calculate the weight of each latent variable.</p> <p>I'll update my write-up, thank you :)</p> <p>Thank you :)</p>
<p>I used a simple Resize augmentation:</p> <pre><code>if val_aug_ver == "resize":
    return Compose([
        Resize(img_size, img_size),
        Normalize(mean=mean, std=std),
        ToTensorV2(),
    ])
</code></pre> <p>Thank you!!<br> I'm now rewriting the training code to make it more readable.<br> I'll share it soon, please wait a moment :)</p>
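<p>Putting the quoted snippets together, here is a hypothetical sketch of the 448-to-4x224 scheme with attention pattern A and the Multi-Dropout head; everything beyond the quoted snippets (the tiling, stacking, and timm usage) is a reconstruction, not the author's exact code:</p> <pre><code>import torch
import torch.nn as nn
import timm

class ViTPatchEnsemble(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits.
        self.backbone = timm.create_model("vit_base_patch16_224",
                                          pretrained=False, num_classes=0)
        n_features = self.backbone.num_features  # 768
        self.att_layer = nn.Linear(n_features, 1)  # attention pattern A
        self.head = nn.Linear(n_features, n_classes)
        self.head_drops = nn.ModuleList(nn.Dropout(0.5) for _ in range(5))

    def forward(self, x):  # x: (B, 3, 448, 448)
        tiles = [x[:, :, i * 224:(i + 1) * 224, j * 224:(j + 1) * 224]
                 for i in range(2) for j in range(2)]
        h = torch.stack([self.backbone(t) for t in tiles], dim=1)  # (B, 4, 768)
        att = torch.softmax(self.att_layer(h), dim=1)              # (B, 4, 1)
        h = (att * h).sum(dim=1)                                   # (B, 768)
        # Multi-Dropout: average the head over several dropout masks.
        return sum(self.head(drop(h)) for drop in self.head_drops) / len(self.head_drops)

model = ViTPatchEnsemble()
print(model(torch.randn(2, 3, 448, 448)).shape)  # torch.Size([2, 5])
</code></pre>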
Cassava Leaf Disease Classification
40th place solution(4 model ensemble)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I'd like to say thanks to <a href="https://www.kaggle.com/qi0239" target="_blank">@qi0239</a> <a href="https://www.kaggle.com/bilogo" target="_blank">@bilogo</a> <a href="https://www.kaggle.com/ageage" target="_blank">@ageage</a> <a href="https://www.kaggle.com/hsdtlp" target="_blank">@hsdtlp</a>. This is our first competition on Kaggle; they spent much time finding suitable methods and persisted till the end.</p> <h1>score</h1> <ul> <li>Public Leaderboard 0.9050</li> <li>Private Leaderboard 0.9007</li> </ul> <h1>training details</h1> <ul> <li>Data: 2020 data with 5-fold CV</li> <li>Image size: 512x512</li> <li>Models: EfficientNet B4, EfficientNet B3, RepVGG B2g4, SE-ResNeXt50</li> <li>Loss function: bi-tempered loss with label smoothing</li> <li>Optimizer: AdamW</li> <li>LR scheduler: Cosine Annealing with Warm Restarts</li> <li>Augmentations: HorizontalFlip, VerticalFlip, ShiftScaleRotate, mixup, snapmix (when we used HueSaturationValue and RandomBrightnessContrast, it improved our CV but lowered our LB)</li> <li>TTA: ShiftScaleRotate, Flip</li> </ul> <h1>ensemble</h1> <p>We used 4 models: EfficientNet B4, EfficientNet B3, RepVGG B2g4, and SE-ResNeXt50. Like other teams, we found that no single model performed as well as the ensemble. At first, we used EfficientNet B3 &amp; SE-ResNeXt50; this way we got our highest public LB so far, 0.900. Once we found that ensembling worked, we went on to test EfficientNet B4 and added it to our ensemble. Through <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/213183" target="_blank">this discussion</a> we noticed the RepVGG model; we tried types A0-B3 and found that B2g4 had the best performance on CV and public LB. Adding RepVGG helped our public LB by +0.003.</p> <h1>methods that didn't work</h1> <ul> <li>Removing noisy images: we used PHash to find identical images and removed similar ones, but both CV and LB got lower</li> <li>Pseudo-labelling</li> <li>Concatenating avg-pool &amp; max-pool features</li> </ul> <h1>In this competition I have learnt a lot. People here are enthusiastic, and their replies in the discussions gave me a lot of help. Best wishes!</h1>
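<p>For reference, a minimal PyTorch sketch of the AdamW + Cosine Annealing with Warm Restarts setup listed above; the model and hyperparameter values are placeholders:</p> <pre><code>import torch
from torch import nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = nn.Linear(10, 5)  # stand-in for the CNN backbone
optimizer = AdamW(model.parameters(), lr=1e-4, weight_decay=1e-6)
# Cosine cycle restarts every T_0 epochs; values are illustrative.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=10, T_mult=1, eta_min=1e-6)

for epoch in range(20):
    # ... one training epoch ...
    scheduler.step()  # step once per epoch (fractional steps also work)
</code></pre>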
Cassava Leaf Disease Classification
45th Place - Silver Medal Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>Could you please say what you mean by "unweighted ensemble" at the last </p> </blockquote> <p>Just the simple average over your predicted probabilities. </p> <blockquote> <p>many single models seems to outperform the ensemble of those good models . Could you share your thoughts on this ?</p> </blockquote> <p>The margins are tiny in this competition. Maybe it was just a stroke of luck, my best single model B7 has only scored 0.897. </p> <p>Thanks <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a> congrats on your medal as well </p> <p>Based on my CV score it was always slightly better than my B4 and gave a nice boost in LB and combined well with ViT. Maybe I could have still got similar results with smaller models but was short on time and resources so gave B7 a shot and luckily it paid off. </p> <p>Standard TTA. </p> <p>Thanks <a href="https://www.kaggle.com/nroman" target="_blank">@nroman</a> Will share code soon</p> <p>Thanks <a href="https://www.kaggle.com/wonjunpark" target="_blank">@wonjunpark</a> congrats to you too</p> <p>My own hardware wasn't good enough I used it for the baseline b0 model though. I split the process into 10 epochs each every time and used Kaggle Hardware + Google Colab and waited long hours.</p> <p>Thanks <a href="https://www.kaggle.com/julianbakerphd" target="_blank">@julianbakerphd</a> </p> <blockquote> <p><strong>Many congratulations to all the winners in this competition. Huge thanks to all the discussion topics, public notebooks and datasets - most of my learning came from them.</strong></p> </blockquote> <p>For the first time I am finishing so high in the final leaderboard grabbing a silver medal. I have failed miserably in past competitions, never had a satisfying finish but this time I really feel like I am learning from my mistakes, so yeah it feels great! </p> <p><br></p> <h2>Final Solution</h2> <h4>Dataset:</h4> <p><em>Competition Data ONLY. No extra data.</em></p> <h4>CV Strategy:</h4> <p>5 Fold Stratified CV</p> <h4>Models (Pretrained):</h4> <ul> <li>EfficientNet-B7 NS (Noisy Student)</li> <li>ViT Base16</li> </ul> <h4>Train Setting:</h4> <p><strong>Image Size:</strong> 512 for B7 and 384 for ViT<br> <strong>Epochs:</strong> 20 <br> <strong>Early Stopping:</strong> No (Saved Best Epoch)<br> <strong>Loss Function:</strong> Bi-tempered Logistic Loss with Label Smoothing<br> <strong>Optimizer:</strong> Good old "Adam"<br> <strong>LR Scheduler:</strong> Cosine Annealing with Warm Restarts<br> <strong>Batch Size:</strong> 32 (with gradient accumulation)<br> <strong>Augmentations:</strong> Standard (No CutMix, FMix or SnapMix)</p> <pre><code> - Transpose - HorizontalFlip - VerticalFlip - ShiftScaleRotate - HueSaturationValue - RandomBrightnessContrast - CoarseDropout - Cutout </code></pre> <blockquote> <p>Edit: Both models didn't use the same augmentations, I did use mix augs for ViT. 
<br> EfficientNet-B7: Standard Augs<br> ViT Base P16: Standard + FMix + CutMix</p> </blockquote> <h4>Evaluation:</h4> <ul> <li>I used the 5 Fold OOF predictions to pick and evaluate the models based on "Accuracy".</li> <li>Treated the Public Leaderboard Score as an alternative fold (as suggested in the forums)</li> </ul> <h4>Inference:</h4> <p>I experimented a few things here, </p> <ul> <li>No TTA</li> <li>Light TTA (only flips)</li> <li>Standard TTA (used for aug)</li> </ul> <p>I noticed that my OOF predictions were more stable when I used Standard 4xTTA for both ViT and B7 so I went with that for the final submission.</p> <h4>Submission:</h4> <ul> <li>EfficientNet-B7 was my best single model</li> <li>I noticed that ViT alone was not enough on CV and LB</li> </ul> <p>I decided to evenly blend them at first and surprisingly that gave me a boost in OOF (CV) Score from 0.898 --&gt; 0.901 and also a boost in Public LB 0.902 --&gt; 0.905. However a weighted ensemble always seemed to be just short yet better than my single models. This gave me a feeling that an unweighted ensemble can cut it for me so I went with it. </p> <h4>Results:</h4> <table> <thead> <tr> <th>Model</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>EfficientNet-B7 NS</td> <td>0.894</td> <td>0.900</td> <td>0.894</td> </tr> <tr> <td>ViT Base16</td> <td>0.888</td> <td>-</td> <td>-</td> </tr> <tr> <td>EfficientNet-B7 NS 4xTTA</td> <td>0.896</td> <td>0.902</td> <td>0.897</td> </tr> <tr> <td>ViT Base16 4xTTA</td> <td>0.891</td> <td>0.899</td> <td>0.896</td> </tr> <tr> <td>(0.35 * ViT) + (0.65 * B7) weighted ensemble with 4xTTA</td> <td>0.898</td> <td>0.903</td> <td>0.899</td> </tr> <tr> <td>ViT + B7 unweighted ensemble with 4xTTA</td> <td>0.901</td> <td>0.905</td> <td>0.900</td> </tr> </tbody> </table> <p>Looking forward to competing more and learning more!</p>
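<p>Since the batch size of 32 above relies on gradient accumulation, here is a minimal sketch of the pattern, with a tiny stand-in model and data (the real setup would use the B7/ViT and an image loader):</p> <pre><code>import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(32, 5)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loader = DataLoader(TensorDataset(torch.randn(64, 32),
                                  torch.randint(0, 5, (64,))), batch_size=8)

accum_steps = 4  # micro-batch 8 -&gt; effective batch size 32
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = criterion(model(x), y) / accum_steps  # scale so gradients average
    loss.backward()                              # grads accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
</code></pre>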
Cassava Leaf Disease Classification
5th Place Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a>, thank you for your congrats. TTA is applied in almost every submission kernel of mine, but with differences in implementation details, e.g. full TTA (as shown in the post above; other options like ShiftScaleRotate were also tried but didn't improve the LB), light TTA (removing hue, random brightness, etc.) and TTA using a RandAug class. With the same pretrained models, full TTA performs better than the other two in most cases.</p> <p>This is the first time I have participated in a real, complete Kaggle competition. As this was the last chance for me to write something onto my unremarkable resume for the upcoming recruitment season in my location, what I expected was to achieve a place around the top 10%, though definitely the higher the better. I never imagined I could be in the gold area before the private leaderboard was released. For me, this is a truly valuable as well as unforgettable experience.<br> I am thankful to those kind, even selfless, kagglers who contribute their questions, thoughts, insights and techniques to every kernel and every discussion; they really enlightened me and helped me learn a lot. Special thanks to <a href="https://www.kaggle.com/Heroseo" target="_blank">@Heroseo</a>, <a href="https://www.kaggle.com/khyeh0719" target="_blank">@khyeh0719</a>, <a href="https://www.kaggle.com/szuzhangzhi" target="_blank">@szuzhangzhi</a>, <a href="https://www.kaggle.com/serigne" target="_blank">@serigne</a>, <a href="https://www.kaggle.com/debarshichanda" target="_blank">@debarshichanda</a>, <a href="https://www.kaggle.com/prvnkmr" target="_blank">@prvnkmr</a> (forgive me for not listing more due to page limits), as your work and answers brought me real progress and improvements in my LB and PB scores. 
<br> Back to the topic: I'd like to share the solutions I selected for my final submissions, just for reference, since the 1st place team's solution for this competition is obviously more delicate and robust.</p> <p><strong>First Final Selection (5th place)</strong>: </p> <ul> <li>PB: 0.9019, LB: 0.9030</li> <li>Inference kernel:<ul> <li>img_size = 384 </li> <li>model_8 (vit16) + model_13 (efn-b4-cmix) (2-model ensemble) </li> <li>tta: random crop, transpose, h/v flip, hue, random brightness, normalize</li></ul></li> <li>Training kernels:<ul> <li>Training model_8:<ul> <li>Dataset: Cassava 2020 only</li> <li>vit-b16 + img_size = 384 + augmentations + bi-tempered logistic loss (t1=0.8, t2=1.4)</li></ul></li> <li>Training model_13:<ul> <li>Dataset: Cassava 2020 only</li> <li>efn-b4 + img_size = 512 + augmentations + bi-tempered logistic loss (t1=0.8, t2=1.4) + cutmix</li></ul></li></ul></li> </ul> <p><strong>Second Final Selection</strong>:</p> <ul> <li>PB: 0.9002, LB: 0.9039</li> <li>Inference kernel:<ul> <li>img_size = 384 </li> <li>model_8 (vit16) + model_13_ft_2 (efn-b4-cmix) + model_10 (deit) (3-model ensemble) </li> <li>tta: random crop, transpose, h/v flip, hue, random brightness, normalize</li> <li>ensemble weight=[0.5, 0.3, 0.2]</li></ul></li> <li>Training kernels:<ul> <li>Training model_8:<ul> <li>Dataset: Cassava 2020 only</li> <li>vit-b16 + img_size = 384 + augmentations + bi-tempered logistic loss (t1=0.8, t2=1.4)</li></ul></li> <li>Training model_13_ft_2:<ul> <li>Dataset: Cassava 2019 + 2020 merged dataset</li> <li>finetune model_13 with: freeze non-classifier layers + bi-tempered logistic loss (t1=0.8, t2=1.4) + cutmix</li></ul></li> <li>Training model_10:<ul> <li>Dataset: Cassava 2020 only</li> <li>deit-b16 + img_size = 384 + augmentations + bi-tempered logistic loss (t1=0.8, t2=1.4)</li></ul></li></ul></li> </ul> <p>Some preliminary conclusions from my experience in this competition:</p> <ol> <li>Ensembling works better than single-model inference in most cases.</li> <li>Cutmix improves classification performance for small models, like efn-b4.</li> <li>Label deleting/denoising using OOF may be useful for the LB (it achieved the second-highest LB score for me), but may not be for the PB.</li> <li>The Cassava 2020 dataset works better than the merged Cassava 2019+2020 dataset when training models, probably because of the size difference between the two datasets. </li> <li>A properly parameterized bi-tempered logistic loss works better than other losses (like Taylor Cross Entropy loss or Label Smoothing loss) in most cases, whether training or fine-tuning.</li> </ol> <p>Last but not least, I'd like to thank the Makerere University AI Lab and the Kaggle platform for organizing a competition in which all kagglers could implement their ideas and show their thoughts. In my future career, I will join more Kaggle competitions, trying to be a better kaggler and data science researcher. </p> <p>Thanks to your work, and congrats 👍🤝</p> <p>I used the following augmentations for training: random crop, transpose, h/v flip, shift-scale rotate, hue, random brightness, normalize, coarse dropout and cutout. <br> For the LR scheduler, I finally selected CosineAnnealingWarmRestarts, with the number of epochs set to 10 for the training process. I also tried GradualWarmupScheduler as the LR scheduler, but it seemed to induce more oscillations in the CV results within the 10 training epochs.</p>
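<p>A minimal sketch of the "full TTA" inference described above, averaging softmax outputs over several randomly augmented views; the transform parameters are illustrative guesses, not the exact values used:</p> <pre><code>import torch
import albumentations as A
from albumentations.pytorch import ToTensorV2

tta_tfms = A.Compose([
    A.RandomResizedCrop(384, 384),
    A.Transpose(p=0.5),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.HueSaturationValue(p=0.5),
    A.RandomBrightnessContrast(p=0.5),
    A.Normalize(),
    ToTensorV2(),
])

@torch.no_grad()
def predict_tta(model, image, n_tta=5):
    """Average softmax outputs over n_tta randomly augmented views
    of one HWC uint8 image."""
    views = torch.stack([tta_tfms(image=image)["image"] for _ in range(n_tta)])
    return model(views).softmax(dim=1).mean(dim=0)
</code></pre>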
Cassava Leaf Disease Classification
78th place solution (silver medal)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! :)</p> <p>Congrats to the top solutions, and a great competition. <br> One learns new things in every competition - a perfect way to follow the journey of the AI world and also to contribute to important problems. Thanks Kaggle, the competition host and fellow kagglers.</p> <p>The solution was a global ensemble of 29 different models, including 4 local ensembles (2 own, 2 public), trained in different ways.<br> All were trained with dim 512, but for the rest I tried different techniques and parameters.</p> <p>Own local trainings and ensembles</p> <p>1 - FastAIv2 - 6 models</p> <p>Common for all training:</p> <p>Used 1 fold of each model from a random 10-fold training.<br> Ranger or AdamW optimizer<br> LabelSmoothingCrossEntropy loss<br> 1CycleScheduler<br> Started all training with 3 epochs training only the last layers, then trained all layers for 20 epochs.<br> In between the trainings I used my own custom LR finder, which I posted in another competition:<br> "LR Finder alternative use."<br> <a href="https://www.kaggle.com/c/jane-street-market-prediction/discussion/218711" target="_blank">https://www.kaggle.com/c/jane-street-market-prediction/discussion/218711</a></p> <p>Models</p> <p>2 tf_efficientnet_b5_ns - mixup training<br> 2 seresnext50_32x4d - cutmix training<br> 1 xception - cutmix training - here I also used upsampling, with more upsampling of class 4, which I visually noticed had more false classifications.<br> 1 densenet121 - cutmix training - here I also used upsampling, with more upsampling of class 4, which I visually noticed had more false classifications.</p> <p>I ensembled the models with equal weights and 5 TTA each, changing the augmentation to just resizing the image (not cropping) to get a different image view contrary to the other 3 ensembles, and skipped all other augs except random brightness, contrast etc.</p> <p>2 - Pytorch - 13 models</p> <p>8 tf_efficientnet_b1_ns<br> 5 resnext50-32x4d</p> <p>Common for all trainings:</p> <p>Extra training data<br> BiTemperedLoss with the Random Noise settings described in <a href="https://ai.googleblog.com/2019/08/bi-tempered-logistic-loss-for-training.html" target="_blank">https://ai.googleblog.com/2019/08/bi-tempered-logistic-loss-for-training.html</a>, but I used a narrower approach with 0.5/1.5.<br> OneCycleLRWithWarmup from Catalyst, which also takes into account momentum and weight_decay in the cycles.<br> Used a random mix, with an almost equal split, of all the different aug. 
techniques: SnapMix, FMix, CutMix, CutOut, Mixup and MWH (Mixup-Without-Hesitation).<br> AdamP optimizer <a href="https://github.com/clovaai/AdamP" target="_blank">https://github.com/clovaai/AdamP</a></p> <p>Here I used stacking with 5 TTA instead of ensembling, to create a different approach for the global ensemble.</p> <p>For the last 2 local ensembles I used 2 sets of publicly trained models, 5 B4ns and 5 resnext50-32x4d, with 4 TTA for each model, to have a bigger ensemble and more trainings, and to not get stuck in one's own strategy.<br> <a href="https://www.kaggle.com/piantic/cassava-resnext50-32x4d-weights" target="_blank">https://www.kaggle.com/piantic/cassava-resnext50-32x4d-weights</a><br> <a href="https://www.kaggle.com/underwearfitting/moa-b4-baseline" target="_blank">https://www.kaggle.com/underwearfitting/moa-b4-baseline</a><br> Thanks, and credit to your training and models.</p> <p>All 4 local ensembles were then ensembled with equal weights.</p> <p>That's it, thanks for reading the write-up.</p> <p>This gave me my 3rd silver medal; hard work and lots of reading and testing paid off.</p> <p>I also set myself a personal competition and challenge this time: to contribute to 5 different Kaggle competitions in parallel, trying to get good results in every one of them, with an equal distribution of time. Just for fun and the learning. The 1st of them finished yesterday with a bronze medal, and now a silver in this one. Three to go, and hopes for the best - the learning is a big part of the value.</p> <p>Happy Kaggling!</p>
Cassava Leaf Disease Classification
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/suryajrrafl" target="_blank">@suryajrrafl</a>, it is 10 epochs</p> <p>Thank you to all kagglers and the organizers of this competition. I learned a lot from it.</p> <h1>Data</h1> <ul> <li>Training and validating on 2020 data using stratified 5-fold CV</li> </ul> <h1>Model</h1> <ul> <li>EfficientNet B6 with Noisy Student</li> <li>ResNeSt50</li> <li>Vision Transformer (base patch16)</li> </ul> <p>My best single model was EfficientNet B6; it got 0.903 on the public LB and 0.8946 on the private LB.</p> <h1>Augmentation</h1> <ul> <li>HorizontalFlip, Transpose, VerticalFlip</li> <li>HueSaturationValue</li> <li>CoarseDropout</li> <li>RandomBrightnessContrast</li> <li>ShiftScaleRotate</li> <li>RGBShift</li> <li>Cutmix</li> <li>Fmix</li> <li>Snapmix</li> <li>ISDA (implicit semantic data augmentation)<br> <a href="https://arxiv.org/abs/2007.10538" target="_blank">https://arxiv.org/abs/2007.10538</a></li> </ul> <p>The combination of CutMix and FMix boosted the CV, but SnapMix made the training unstable; maybe some hyperparameters need careful tuning. I did not try ISDA because of time, but I think it might bring a surprising effect.</p> <h1>Loss</h1> <ul> <li>LabelSmoothing</li> <li>FocalLoss</li> <li>FocalCosineLoss</li> <li>BiTemperedLoss</li> <li>ClassBalancedLoss (only training)<br> <a href="https://arxiv.org/abs/1901.05555" target="_blank">https://arxiv.org/abs/1901.05555</a></li> </ul> <p>LabelSmoothing (0.1 or 0.2) gave the best performance relative to the other loss functions.</p> <h1>Training</h1> <ul> <li>Optimizer: Adam, AdamW, AdamP, Ranger</li> <li>lr_scheduler: CosineAnnealingWarmRestarts</li> </ul> <h1>Inference</h1> <ul> <li>8x TTA (transpose, flip)</li> <li>Ensemble (just averaging)</li> </ul> <p>The most important thing I learned in this competition is to never give up, because no one knows the result until the end.<br> Finally, I hope everyone can get a gold medal in their next competition. Thanks to my leader YuBo and my girlfriend MingJun. </p>
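<p>Since label smoothing worked best here, a small self-contained sketch of a label-smoothing cross entropy (one common variant, with eps=0.1 as quoted above):</p> <pre><code>import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, targets, eps=0.1):
    """CE against smoothed targets: 1-eps on the true class,
    eps/(K-1) spread over the others (another variant uses eps/K)."""
    n_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    smooth = torch.full_like(log_probs, eps / (n_classes - 1))
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eps)
    return -(smooth * log_probs).sum(dim=1).mean()

logits = torch.randn(4, 5)
targets = torch.tensor([0, 3, 2, 4])
print(label_smoothing_ce(logits, targets))
</code></pre>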
Cassava Leaf Disease Classification
Diversity is all we need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks mane. OOF means your predictions for the validation data. If you split your data into 5 folds, you predict 5 validation folds the model was not trained on; the OOF is the combination of these 5 folds. You can use any strategy on the OOF, like TTA or postprocessing, to evaluate your model.</p> <p>You can use np.corrcoef([predict1, predict2, predict3]) to calculate the correlation coefficients for your different predictions.</p> <p>Congrats to all. Learned a lot from this community.</p> <p>I am really happy to have teamed up with my teammates <a href="https://www.kaggle.com/ludovick" target="_blank">@ludovick</a> and <a href="https://www.kaggle.com/tonyxu" target="_blank">@tonyxu</a>, who have strong engineering capabilities and are thus able to put any idea into practice. Our final solution consists of 4 models. Unfortunately, same as most teams, we missed selecting a sub that could beat 3rd place on the private LB. We three have totally different training processes, which I think helped us stay in the gold zone on both the public LB and the private LB. The diversity made us win.</p> <h1>Models</h1> <ul> <li>Shirok's effb5_ns with 8x TTA</li> <li>toxu's vit_base_patch16_384 with 8x TTA</li> <li>Yimin's effb5_ns with 8x TTA</li> <li>Yimin's seresnext50 with no TTA</li> <li>Same weight for the ensemble.</li> <li>Without any one of these models, we could not have won.</li> <li>The OOF corrcoef is as below:<br> [[1. , 0.79848116, 0.81904038, 0.81931724],<br> [0.79848116, 1. , 0.80287945, 0.79185962],<br> [0.81904038, 0.80287945, 1. , 0.82176191],<br> [0.81931724, 0.79185962, 0.82176191, 1. ]]</li> </ul> <h1>Diversity</h1> <h2>Teammates' part</h2> <p>As far as I know, Shirok used a TF TPU setup for training the B5, and toxu used SKD soft labels for the ViT. They are welcome to add more details to this part if they would like.</p> <h2>In my part</h2> <h3>Dataset</h3> <ul> <li>2019+2020 dataset</li> <li>Removed some duplicates found by <a href="https://www.kaggle.com/zzy990106/duplicate-images-in-two-competitions" target="_blank">https://www.kaggle.com/zzy990106/duplicate-images-in-two-competitions</a> and <a href="https://www.kaggle.com/graf10a/cldc-image-duplicates-with-dbscan" target="_blank">https://www.kaggle.com/graf10a/cldc-image-duplicates-with-dbscan</a>. Thanks again for sharing useful kernels that can be used anywhere.</li> <li>Tried pseudo-labels on the 2019 test data and the extraimages data (duplicates removed). Got better CV but worse LB, probably because of a data leak: I used the average probabilities of 5 folds, which may bring leak info between folds. Due to the time limit, I never chose it afterwards.</li> <li>Tried self-knowledge-distillation soft labels. Improved 0.003 on CV but decreased 0.001-0.002 on LB. Did not choose it for the final sub. 
It was around -0.0015 compared with hard labels on the private LB.</li> <li>Hard labels</li> <li>Stratified 5 folds</li> <li>Batch size 32</li> <li>Image size 512 x 512</li> </ul> <h3>Training AUG</h3> <p>Random Rotate<br> ShiftScaleRotate<br> HueSaturationValue<br> RandomBrightnessContrast<br> Flip, transpose<br> GaussNoise<br> Blur<br> Distortion<br> Cutout</p> <h3>Loss</h3> <p>Bi-tempered</p> <h3>TTA</h3> <p>You must be curious why I did not use TTA for the se50 but used TTA for the b5. I was stuck at 0.900 (se50), 0.902 (b5), and 0.902 (ensembles using both with 8x TTA) before I checked out this <a href="https://www.kaggle.com/japandata509/ensemble-resnext50-32x4d-efficientnet-0-903" target="_blank">notebook</a>. I was also curious why Kaito did not use TTA for one model but used it for another. So I tried running inference on my OOF with and without various TTAs, and I found that TTA always decreased the CV score of my se50 and increased the CV score of my b5. The same happened on the LB. Here is my previous <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/214559" target="_blank">discussion</a> regarding this. After using a similar strategy, I climbed up to 0.905 on the LB. So I just trusted the CV and LB at that moment, although the private LB shows the contrary result regarding the se50. But our ensemble took almost 8 hours to run at submission, so it did not leave room for another model with TTA. It was fate. Anyway, I was probably wrong with my insights about TTA in that discussion.</p> <h2>ensemble</h2> <p>My se50+b5 scored 0.905 on the public LB, and Shirok's b5 + toxu's vit scored 0.906. After the merge, we got 0.908. After fine-tuning each single model, we finally got 0.910, which stood at 2nd place on the public LB. Our ensemble is somewhat robust: it keeps scoring 0.901-0.902 on the private LB, where the gold zone is. I attribute this to the diversity of our models (to be proved).</p> <h1>Trust CV or LB?</h1> <p>Unlike Riiid, I had totally no idea about this comp. The dataset (both train and test) is noisy, and I do not think the test data split is balanced. Our CV ranged from 0.905-0.906 on the 2020 dataset. It was not aligned with the LB for us. Anyone who can share their scores may help to determine this. Anyway, trusting CV is always the better way; make sure there is no leak in your CV.</p> <h1>Additional</h1> <p>The Class Activation Map (CAM) can somewhat help to track your model's performance. Introduced by toxu, inspired by the winner of cvpr2020-plant-pathology. I used it for error analysis, to determine whether the model attends to the right part of the image. I made a simple <a href="https://www.kaggle.com/woshifym/cld-class-activation-mapping-cam-wip" target="_blank">notebook</a> to show how it works. Hope you enjoy it.</p> <h1>Others</h1> <p>Forgive my messy formatting.<br> Will update if I think of something missing...</p>
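<p>For readers unfamiliar with CAM, here is a minimal sketch for a ResNet-style backbone (torchvision ResNet-18 with random weights as a stand-in, not the team's notebook): the map for a class is the last conv stage's feature maps weighted by that class's final-layer weights.</p> <pre><code>import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()  # stand-in backbone

feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update({"map": o}))

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed leaf image
with torch.no_grad():
    cls = model(x).argmax(1).item()

w = model.fc.weight[cls]                                # (512,)
cam = F.relu((w[:, None, None] * feats["map"][0]).sum(0))
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
cam = F.interpolate(cam[None, None], size=x.shape[-2:],
                    mode="bilinear", align_corners=False)[0, 0]
</code></pre>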
Cassava Leaf Disease Classification
First time Competition, 75th place solution silver region
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition is indeed coming to an end. With 3 days remaining (as this post was made), let's overview the important things that I found quite useful learning-wise. This competition is my first competition, and it gave me a really good impression of Kaggle. I believe it is a great competition not only to learn from, but also to test your knowledge and discover new ideas.</p> <p>Notebooks are down below.</p> <p>With over 3.k+ members attending, 4.8k competitors, and over 85.5k+ submissions...</p> <p>Cassava 2019 competition: the classification problem of this challenge back in 2019. Winning solutions for that classification are:<br> 1st place: <a href="https://www.kaggle.com/c/cassava-disease/discussion/94114" target="_blank">https://www.kaggle.com/c/cassava-disease/discussion/94114</a><br> 2nd place: <a href="https://www.kaggle.com/c/cassava-disease/discussion/94112" target="_blank">https://www.kaggle.com/c/cassava-disease/discussion/94112</a><br> 3rd place: <a href="https://www.kaggle.com/c/cassava-disease/discussion/94102" target="_blank">https://www.kaggle.com/c/cassava-disease/discussion/94102</a></p> <p>These notebooks came in really handy, as they provide useful ideas that we competitors can integrate.</p> <p>Some useful notebooks in this competition that benefitted me and many others:<br> (Note: these are not all of the notebooks that have helped me, but the ones I believe got me started the fastest.)</p> <p><a href="https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tpu-tensorflow-training" target="_blank">Dimitre's cassava leaf disease training</a><br> <a href="https://www.kaggle.com/piantic/train-cassava-starter-using-various-loss-funcs" target="_blank">Heroseo's various loss functions</a><br> <a href="https://www.kaggle.com/khyeh0719/pytorch-efficientnet-baseline-inference-tta" target="_blank">Kun Hao Yeh's inference + training notebook</a><br> <a href="https://www.kaggle.com/yasufuminakama/cassava-resnext50-32x4d-starter-training" target="_blank">Y.Nakama's notebooks</a></p> <h2>Step 1: Knowledge distillation</h2> <p>My team and I realized that this dataset has a lot of noisy labels. To address this problem, we smoothed the labels and later combined them with knowledge distillation to feed into our model.</p> <h2>Step 2:</h2> <p>We realized that certain augmentations are really useful during the training stage.<br> 8x TTA at inference gave us a <br> <strong>Public LB 0.904</strong> and <strong>Private LB 0.900</strong>,<br> while 5x TTA was a tad bit worse.</p> <h2>Step 3:</h2> <p>Center crop + hue saturation affected our public LB, along with 5 folds for EfficientNet B4.<br> Because the training time was 8 hours max, my teammates each trained separate folds, and later we did the same with the ResNeXt used for inference. 
You can check this out in the inference and training notebooks down below.</p> <h2>Efficientnet B4 + Resnext:</h2> <p>Our results (the first row is our best model without distillation):</p> <table> <thead> <tr> <th>Model</th> <th>Distillation</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>Efficientnet B4</td> <td>No</td> <td>0.899</td> <td>-</td> </tr> <tr> <td>Efficientnet B4</td> <td>Yes</td> <td>0.902~0.903</td> <td>0.9034</td> </tr> <tr> <td>Efficientnet B4 + Resnext50</td> <td>Yes</td> <td>0.903~0.904</td> <td>0.90036</td> </tr> </tbody> </table> <h2>Experimentations</h2> <p><strong>Augs</strong>:</p> <ul> <li>Fmix</li> <li>SnapMix</li> <li>Training: Hue Saturation + Coarse dropout + Random Brightness</li> <li>Inference: Standard TTA augs</li> </ul> <p><strong>TTA:</strong></p> <ul> <li>Light augs (only flips): 4~8x TTA </li> <li>Standard augs: 5~8x TTA</li> </ul> <p><strong>Losses</strong>:</p> <ul> <li>Taylor cross-entropy + smoothing (best)</li> <li>Bi-tempered loss</li> <li>Categorical cross-entropy</li> </ul> <h2>Conclusion</h2> <p>Lucky to not have a bad shakeup!<br> We did try to implement NFNet, however we didn't have time when the weights were released, so we bailed on that. Check it out as it is revolutionary!</p> <p>Again, these are just a list of the things that came off the top of my head! First competition, and I don't know if it is finalized, so I won't be uploading our notebook for the solution at the moment. However, I will be uploading it tomorrow! Just making sure that the placement of my amazing team <a href="https://www.kaggle.com/teodorbogoeski" target="_blank">@teodorbogoeski</a> and <a href="https://www.kaggle.com/varungadre0910" target="_blank">@varungadre0910</a> is finalized! Still don't quite understand how Kaggle really works but definitely interested in learning more! If this is finalized, we will be placed 75th in the first competition that my team and I have done! Learned lots from Kaggle and this competition! Definitely going to do more… Thanks to the many wonderful people who shared their ideas and knowledge in a meaningful manner for novices like me. </p> <p>(Edit): Never mind, turns out I will release the notebooks now.</p> <p>I will be planning to upload notebooks for various datasets… Just waiting for this to finalize, to be sure! Thank you Kaggle!</p> <p>Training: <a href="https://www.kaggle.com/andyjianzhou/cassava-training-fastai-efficnetnet-distill" target="_blank">Here</a></p> <p>Inference: <a href="https://www.kaggle.com/andyjianzhou/cassava-inferencing-efficientnetb4-resnext50?scriptVersionId=54404877" target="_blank">Here</a></p> <p>Thanks to my amazing teammates who built these models efficiently! Looking forward to the next competitions! </p> <p>Useless note: Did pretty bad on my unit final today but that's alright.. Machine Learning Forever!</p> <p>Thank you Chris! All of your notebooks are really useful. Looked at possibly all your notebooks to learn. Thanks a lot for improving the Kaggle community along with many others.</p> <p>Haha thank you! Couldn't do it without your amazing notebooks.</p>
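<p>As a footnote to Step 1: a minimal PyTorch sketch of a standard knowledge-distillation objective (soft teacher targets via KL divergence blended with hard-label cross entropy). This is the generic formulation, not the team's exact code; <code>temperature</code> and <code>alpha</code> are illustrative values.</p> <pre><code>import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=3.0, alpha=0.7):
    """Blend a soft-target KL term with hard-label cross entropy."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # T^2 keeps the soft-target gradients on the same scale as the CE term
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
</code></pre>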
Cassava Leaf Disease Classification
Looking back at my approach 1284 pos [0.8952 private leaderboard and 0.8950 public leaderboard]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I got around 0.8952 accuracy on the private leaderboard and 0.8950 on the public leaderboard. My final position on the private lb is 1284. Surprisingly, I went up by 556 positions on the private board despite no major change in accuracy. What a shakeup!!</p> <h2>My Approach</h2> <h3>1. Models</h3> <ul> <li>Since I had the option to select my two best performing submissions for the competition, I decided to train two separate models: one being ResNeXt50_32x4d and the other EfficientNet-B4. Both were initialized with pretrained ImageNet weights.</li> </ul> <h3>2. k-fold Cross Validation</h3> <ul> <li>Since no validation set was made available, all the train data had to be used for creating the final model. I decided to use k-fold for model and hyperparameter selection and then train that model on the full set of images.</li> <li>Since the dataset is imbalanced, and to maintain the same class distribution across the folds, I used stratified k-fold, which makes sure that the percentage of samples for each class is preserved across the folds.</li> <li>I used 5 folds to estimate the best performing model, data augmentation schemes and hyperparameters, by averaging the predictions on the hold-out set across all k folds.</li> </ul> <h3>3. Metric Used</h3> <ul> <li>Though the competition scored submissions based on accuracy, I used the F1 score during validation, since the classes are imbalanced and accuracy would not have been the best option. But for the final model selection while training on the full train set, I used accuracy as my selection metric.</li> </ul> <h3>4. Loss Function</h3> <ul> <li>I tried using a weighted cross entropy loss to overcome the class imbalance, but it did not show any F1-score improvement over cross entropy loss. So I went with the plain cross entropy loss.</li> </ul> <h3>5. Image Augmentations</h3> <ul> <li>I started off with the basic image augmentations like random resized cropping, transposing the image, horizontal and vertical flipping, and random hue, saturation, value, brightness and contrast adjustments. Yet the model struggled with classifying images labelled with classes 0, 1, 2, and 4.</li> <li>I got a significant boost in the F1 scores across all the classes after adding CoarseDropout, CutOut and CLAHE. Yet the model was still struggling with classes 0 and 1.<br> Finally, using CutMix boosted class 0's and 1's F1-scores.</li> </ul> <h3>6. Training</h3> <ul> <li>Both models were fully trained on the entire training set with model selection based on the model accuracy.</li> <li>I used the SGD optimizer with a Cosine Annealing learning rate scheduler, decaying the lr every epoch. I also tried cosine annealing with restarts but saw no reduction in convergence time.</li> </ul> <h3>7. Test Time Augmentations (TTA)</h3> <ul> <li>I saw an increase in my public score accuracy after using test time augmentations. I used 10 rounds of predictions for each image, where random augmentations are applied each time and the predictions are then averaged.</li> <li>I skipped CoarseDropout and CutOut for test time augmentations since they did more harm than help with predictions. 
I thought CLAHE would help get a better score when used with TTA. But I was wrong, so I discarded it from TTA as well.</li> </ul> <h3>8. Soft-Voting Ensemble</h3> <ul> <li>To boost my accuracy I used a soft-voting ensemble of my best performing ResNeXt50_32x4d and EfficientNet-B4 models. I tried a number of weights for combining their predictions. The best score I got was with ResNeXt50 weighted 0.6 and EfficientNet-B4 weighted 0.4 (see the sketch at the end of this write-up).</li> </ul> <h2>Glance at other approaches</h2> <ul> <li>I noticed a few approaches using more models in the ensemble, 8 or more, that also used previous competitions' data to supplement the given dataset. Here is one discussion I found interesting that used both of them - <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220682" target="_blank">https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220682</a></li> <li>I saw people combining image augmentation techniques like Fmix, Mixup and CutMix.<br> There is a huge variety of models and techniques used across the participants.</li> </ul> <h2>Here are links to some top submission notebooks and discussion threads that I enjoyed reading:</h2> <ul> <li>2nd Place - Summary of Approach - <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220898" target="_blank">https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220898</a> [Surprisingly, this submission uses nothing fancy]</li> <li>375th place private LB (278th place public) - <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220879" target="_blank">https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220879</a></li> <li>Private 7th | Public 71th Solution - <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220735" target="_blank">https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220735</a></li> <li>8th place solution - <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220994" target="_blank">https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220994</a></li> <li>10th place solution (23rd public) - <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220788" target="_blank">https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220788</a></li> </ul> <h2>Conclusion</h2> <ul> <li>This was my first Kaggle competition and I learned a lot from it. It was fun and exciting reading other participants' kernels, going through interesting ideas on the discussion board and applying techniques I had learned from papers and school to a real-world problem.</li> <li>I signed up late for the competition, only about 15 days before the deadline, and this definitely prevented me from trying different models and augmentation techniques. I can't hold this against anyone but myself.</li> <li>One of the things I wanted to do was train more models to add to the ensemble. Another was picking my ensemble weights, which I could have learned instead of manually trying a few different submissions on the public leaderboard.</li> <li>Other things I wanted to try to tackle the imbalance problem were experimenting with focal loss and Remix (rebalanced mixup).</li> </ul> <h2>My kernel links</h2> <p>Below are my kernel links. I used separate kernels. 
One for k-fold cv, one for full training, and the other for submission.</p> <ul> <li>k-fold cv kernel - <a href="https://www.kaggle.com/krutpatel2257/cassava-leaf-disease-classification-k-fold" target="_blank">https://www.kaggle.com/krutpatel2257/cassava-leaf-disease-classification-k-fold</a></li> <li>full train kernel - <a href="https://www.kaggle.com/krutpatel2257/cassava-leaf-disease-classification" target="_blank">https://www.kaggle.com/krutpatel2257/cassava-leaf-disease-classification</a></li> <li>Ensemble submission - <a href="https://www.kaggle.com/krutpatel2257/cassava-leaf-disease-classification-ensemble-sub" target="_blank">https://www.kaggle.com/krutpatel2257/cassava-leaf-disease-classification-ensemble-sub</a></li> </ul>
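<p>A minimal sketch of the weighted soft-voting ensemble described above (ResNeXt50 weighted 0.6, EfficientNet-B4 weighted 0.4); the inputs are assumed to be (optionally TTA-averaged) softmax outputs of shape (num_samples, num_classes).</p> <pre><code>import numpy as np

def soft_vote(probs_resnext, probs_effnet, w_resnext=0.6, w_effnet=0.4):
    """Weighted average of class probabilities, then argmax."""
    blended = w_resnext * probs_resnext + w_effnet * probs_effnet
    return blended.argmax(axis=1)

# probs_* come from each model's softmax outputs on the test set:
# preds = soft_vote(probs_resnext, probs_effnet)
</code></pre>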
Cassava Leaf Disease Classification
Private 28th, Public 8th, Thank you all, my first medal!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you. I think so. I prioritized having more training data rather than keeping validation data.</p> <p>Oh, I also think the final submission choice is the most difficult part of Kaggle. I failed to choose my final submissions in the last competition (MoA), and I've learned to trust the public LB more. It worked for this competition. Good luck in your next competition!</p> <p>Thank you! 😄</p> <p>I'm sorry, that may not be the correct word. I used a different seed when training each base model. 4021 has no meaning; it's like throwing dice. 😁</p> <p>Thank you! And congratulations to you too! I also think Kaggle is a great place to learn.</p> <p>Thank you! 😀 I also think someone tried the same approach. I'm not sure what differentiates mine from yours, but I hope it all goes well for you in the next competition.</p> <p>Thank you! 😄</p> <p>Thank you! I made the notebooks and repository public just now, so it may take some time to show up.</p> <p>Thank you! 😄</p> <h2>✨ Result</h2> <ul> <li>Private: 28th, 0.901</li> <li>Public: 8th, 0.908</li> </ul> <p>This competition is my first image classification competition,<br> so many parts of my solution came from public notebooks and discussions in this competition.<br> I've learnt many things from it.</p> <p>Thank you to all Kagglers and the organizers of this competition.</p> <p>And I'm happy to get a solo silver as my first medal!</p> <h2>🔖 Solution</h2> <p>I don't have a strong single model, but the ensemble has surprised me.</p> <h3>🎨 Base Model</h3> <ul> <li>EfficientNet B4 with Noisy Student</li> <li>SE-ResNeXt50 (32x4d)</li> <li>Vision Transformer (base patch16)</li> </ul> <table> <thead> <tr> <th>Train</th> <th>Inference</th> <th>Public LB</th> <th>Private LB</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td><a href="https://github.com/IMOKURI/Cassava-Leaf-Disease-Classification/blob/f639150116370039666b7bab452abd85932f4d24/cassava-training.ipynb" target="_blank">EfficientNet</a></td> <td><a href="https://www.kaggle.com/imokuri/cassava-inference?scriptVersionId=52135491" target="_blank">EfficientNet-inf</a></td> <td><strong>0.900</strong></td> <td>0.891</td> <td>0.89103</td> </tr> <tr> <td><a href="https://github.com/IMOKURI/Cassava-Leaf-Disease-Classification/blob/fb7397ca97d624eb4db467c3d67a4c492313aaad/cassava-training.ipynb" target="_blank">SE-ResNeXt50</a></td> <td><a href="https://www.kaggle.com/imokuri/cassava-inference?scriptVersionId=52812836" target="_blank">SE-ResNeXt50-inf</a></td> <td>0.899</td> <td>0.894</td> <td><strong>0.89532</strong></td> </tr> <tr> <td><a href="https://github.com/IMOKURI/Cassava-Leaf-Disease-Classification/blob/9b7093ed7501254f7705edd31f96467f2be00d8b/cassava-training.ipynb" target="_blank">ViT</a></td> <td><a href="https://www.kaggle.com/imokuri/cassava-inference?scriptVersionId=52893502" target="_blank">ViT-inf</a></td> <td>0.899</td> <td>0.890</td> <td>0.89220</td> </tr> </tbody> </table> <h3>🎁 Ensemble and TTA</h3> <p>I tried a weighted average of no-TTA and TTA predictions.</p> <table> <thead> <tr> <th>Inference</th> <th>Validation</th> <th>TTA</th> <th>Public LB</th> <th>Private LB</th> <th>CV</th> <th>TTA weight</th> </tr> 
</thead> <tbody> <tr> <td><a href="https://www.kaggle.com/imokuri/cassava-inference?scriptVersionId=54132321" target="_blank">inf-no-TTA</a></td> <td><a href="https://github.com/IMOKURI/Cassava-Leaf-Disease-Classification/blob/f7143beaf5c25829e686f94162cdfa7d0d88d7b1/cassava-validation.ipynb" target="_blank">val-no-TTA</a></td> <td>noTTA</td> <td>0.905</td> <td>0.896</td> <td>0.9429</td> <td>-</td> </tr> <tr> <td><a href="https://www.kaggle.com/imokuri/cassava-inference?scriptVersionId=54141945" target="_blank">inf-TTA</a></td> <td>-</td> <td>noTTA + TTAx6</td> <td>0.907</td> <td>0.899</td> <td>-</td> <td>6:6</td> </tr> <tr> <td><a href="https://www.kaggle.com/imokuri/cassava-inference?scriptVersionId=54244968" target="_blank">inf-TTA-weight</a></td> <td>-</td> <td>noTTA + TTAx6</td> <td><strong>0.908</strong></td> <td>0.900</td> <td>-</td> <td>4:6</td> </tr> </tbody> </table> <p>I decided the TTA weight by the public LB score, so I think this may overfit to the public LB.<br> For my second final submission, I chose the plain average of no-TTA and TTA.</p> <table> <thead> <tr> <th>Inference</th> <th>Validation</th> <th>TTA</th> <th>Public LB</th> <th>Private LB</th> <th>CV</th> <th>TTA weight</th> </tr> </thead> <tbody> <tr> <td><a href="https://www.kaggle.com/imokuri/cassava-inference/execution?scriptVersionId=54503857" target="_blank">inf-TTA-avg</a></td> <td>-</td> <td>noTTA + TTAx9</td> <td>0.908</td> <td>0.901</td> <td>-</td> <td>9:9</td> </tr> </tbody> </table> <h2>✏️ Memo</h2> <p>I used the following techniques for this competition.</p> <h3>🍃 Datasets</h3> <ul> <li><a href="https://www.kaggle.com/piantic/train-cassava-starter-using-various-loss-funcs/notebook" target="_blank">Use 2019 datasets</a></li> </ul> <h3>🛠️ Preprocessing</h3> <ul> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/207450" target="_blank">Image size 512 ~ 384</a></li> <li><a href="https://www.kaggle.com/khyeh0719/pytorch-efficientnet-baseline-train-amp-aug#Define-Train\Validation-Image-Augmentations" target="_blank">Additional augmentations</a><ul> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/212347" target="_blank">No augmentation for first few epochs.</a></li> <li>Reduce augmentation for last few epochs.</li> <li>Remove augmentation for final epoch.</li> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/209065" target="_blank">CutMix</a></li> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/212060" target="_blank">MixUp</a></li></ul></li> </ul> <h3>📉 Loss</h3> <ul> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/202017" target="_blank">Bi-Tempered Logistic Loss</a><ul> <li><a href="https://www.kaggle.com/piantic/train-cassava-starter-using-various-loss-funcs/notebook#Bi-Tempered-Loss" target="_blank">with label smoothing</a></li></ul></li> </ul> <h3>🏃 Training</h3> <ul> <li><a href="https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/#tips-for-fine-tuning-efficientnet" target="_blank">batch normalization layers frozen for EfficientNet</a><ul> <li>also for the vision transformer.</li></ul></li> <li>Gradient accumulation for increasing batch size (a minimal sketch is at the end of this write-up).</li> </ul> <h3>🚀 Inference</h3> <ul> <li>Random seed ensemble.</li> <li><a href="https://www.kaggle.com/khyeh0719/pytorch-efficientnet-baseline-inference-tta" target="_blank">TTA(Test Time Augmentation)</a><ul> <li><a 
href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/214559#1171803" target="_blank">10 is the minimum number to establish a stable result.</a></li></ul></li> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/210921#1153396" target="_blank">Include original image on TTA.</a><ul> <li>Weighted ensemble of the original-image inference and the augmented one.</li></ul></li> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/206489" target="_blank">Light augmentation for inference</a></li> </ul> <h3>💡 Tips</h3> <ul> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/207450" target="_blank">A few things for easy start</a></li> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/203594" target="_blank">Sharing some improvements and experiments</a></li> <li><a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/208402" target="_blank">Important points to boost the LB score</a></li> <li><a href="https://www2.slideshare.net/RyuichiKanoh/practical-tips-for-handling-noisy-data-and-annotaiton-204195412" target="_blank">Practical tips for handling noisy data and annotation</a></li> </ul> <p>Thank you again. I will try to become an expert in the next competition!</p> <p>Thank you very much! 😆 I have always learned many things from what you discussed. I'll keep trying!</p> <p>Thank you! 👍</p> <p>Thank you! 😄</p> <p>Thank you! ✨</p> <p>Thank you so much! 😆</p> <p>Thank you! ✨</p> <p>Thank you! 😄</p> <blockquote> <p>CV 0.9429 is too high</p> </blockquote> <p>Yeah, in fact I don't know how to validate ensemble models.<br> This value came from inference on the training data after training,<br> which means all the data is leaked.<br> I would like to learn how to validate ensembles of trained models…</p>
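<p>The Training memo above lists "gradient accumulation for increasing batch size" (the sketch promised there); here is a minimal, generic PyTorch version, not the author's exact code, accumulating over <code>accum_steps</code> mini-batches before each optimizer step.</p> <pre><code>def train_one_epoch(model, loader, optimizer, criterion,
                    accum_steps=4, device="cuda"):
    """Emulates a batch size of accum_steps * loader.batch_size."""
    model.train()
    optimizer.zero_grad()
    for step, (images, labels) in enumerate(loader):
        images, labels = images.to(device), labels.to(device)
        # scale the loss so the accumulated gradient matches one big batch
        loss = criterion(model(images), labels) / accum_steps
        loss.backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
</code></pre>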
Cassava Leaf Disease Classification
Private 606th / Public 34th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you, haha. nevret, you did a great job too!!</p> <p>Thank you! Heroseo, you also did a really great job!!</p> <p>Haha, let's go for that gold medal! If we get the chance, let's enter a competition together next time!</p> <p>Congratulations on the medal, you worked really hard!! This competition was a bit disappointing for me, but I'll study harder and aim for the next one.</p> <p>Hi everyone!</p> <p>First of all, congratulations to all the winners, and great job on all participants' hard work!</p> <h1>Overview</h1> <p>I selected two versions based on the best CV &amp; LB scores because I was really worried about a huge shakeup. (However, I did not make it out :cool_crying:)</p> <ul> <li>Public LB: 34th, 0.907</li> <li>Private LB: 606th, 0.898</li> </ul> <h2>Base Model</h2> <p>I only had the Kaggle GPU, so I couldn't experiment with many models and various recipes.<br> I tried 6 models (ResNeSt50, ResNeSt50 4s2x40d, EffNet-B3, B4, ViT-L/16, DeiT-B/16) and decided to use <strong>ResNeSt50 4s2x40d</strong> and <strong>EffNet-B4</strong> based on the LB &amp; CV scores.</p> <table> <thead> <tr> <th>arch</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>ResNeSt50-fast-4s2x40d (tta4)</td> <td>0.891</td> <td>0.898</td> <td>0.895</td> </tr> <tr> <td>EfficientNet-B3 NS (tta4)</td> <td>0.895</td> <td>0.896</td> <td>0.895</td> </tr> <tr> <td>EfficientNet-B4 NS (tta4)</td> <td>0.891</td> <td>0.900</td> <td>0.893</td> </tr> </tbody> </table> <p>To increase robustness, I trained the same architecture on different training recipes. I chose to use <code>focal cosine loss</code> &amp; <code>cross-entropy w/ label smoothing 0.2</code>.</p> <table> <thead> <tr> <th>arch</th> <th>loss</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>EfficientNet-B4 NS (tta3)</td> <td>focal cosine</td> <td>0.890</td> <td>0.900</td> <td>0.895</td> </tr> <tr> <td>EfficientNet-B4 NS (tta4)</td> <td>cross entropy w/ label smooth</td> <td>0.891</td> <td>0.900</td> <td>0.893</td> </tr> </tbody> </table> <p>Lastly, the pseudo labels boost the CV score by +0.007. (I generated pseudo labels based on my best CV ensemble, CV 0.9051.)</p> <table> <thead> <tr> <th>arch</th> <th>pseudo</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>EfficientNet-B4 NS (tta4)</td> <td>no</td> <td>0.891</td> <td>0.900</td> <td>0.893</td> </tr> <tr> <td>EfficientNet-B4 NS (tta4)</td> <td>yes</td> <td>0.898</td> <td>0.898</td> <td>0.893</td> </tr> </tbody> </table> <h2>Ensemble</h2> <h3>Weights</h3> <p>A single model's CV &amp; LB scores don't look great, but ensembling raises the CV &amp; LB scores by 0.005. 
Also, TTA increases the LB score by +0.002.</p> <p>I tuned the ensemble weights on the CV with <code>Optuna</code> &amp; <a href="https://www.kaggle.com/daisukelab/optimizing-ensemble-weights-using-simple" target="_blank"><code>Simple</code></a>.</p> <p>Here is the script: <a href="https://www.kaggle.com/kozistr/valid-optimize-cv" target="_blank">optimize cv</a></p> <h3>Combinations</h3> <ul> <li>best LB : 2 x ResNeSt50-fast-4s2x40d + 2 x EffNet-B4</li> <li>best CV : 3 x ResNeSt50-fast-4s2x40d + 3 x EffNet-B4 + 1 x EffNet-B3</li> </ul> <table> <thead> <tr> <th>baseline</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>best LB</td> <td>0.9003</td> <td>0.907</td> <td>0.897</td> </tr> <tr> <td>best CV</td> <td>0.9051</td> <td>0.905</td> <td>0.896</td> </tr> </tbody> </table> <p>Here is the script: <a href="https://www.kaggle.com/kozistr/inference-rns50-effnet-b4" target="_blank">inference</a></p> <h2>TTA</h2> <p>Simply using <code>CenterCrop + horizontal &amp; vertical flip</code> achieves better CV &amp; LB scores.</p> <h2>Training Recipe</h2> <ul> <li>Data : 2019 + 2020 datasets, only 2020, w/ pseudo labels</li> <li>Validation : stratified kfold - 5 folds</li> <li>Augmentations : FMix, CutMix, SnapMix, MixUp, Cutout, etc.</li> <li>Optimizer : AdamW, AdamP, RAdam</li> <li>Loss : focal cosine, cross-entropy w/ label smoothing, bi-tempered</li> <li>Scheduler : Cosine Annealing w/o warm-up (1e-4 to 1e-6)</li> <li>Epochs : 10 ~ 20</li> <li>Resolution : 512x512</li> <li>Auxiliary : freeze BN layers</li> <li>TTA : n_iters 4 is best in my case.</li> </ul> <p>Experiment : <a href="https://www.kaggle.com/kozistr/cassava-leaf-disease-experiments" target="_blank">note</a></p> <h3>Works</h3> <ol> <li>Label Smoothing (alpha = 0.2)</li> <li>TTA (n_iters 4 is best)</li> <li>Adam, AdamW to RAdam (slight improvement)</li> <li>external dataset (2019 dataset)</li> </ol> <h3>Did Not Work</h3> <ol> <li>noise-tolerant loss like bi-tempered loss</li> </ol> <h2>Reflections</h2> <ol> <li>ensemble weights</li> </ol> <p>I guess tuning ensemble weights might be one of the reasons, especially where there are noisy labels. Usually, the tuned versions tend to have a lower score than the simple average (a -0.001 ~ -0.002 gap on the private LB). Ironically, though, the best private score is a tuned version (not the selected one).</p> <ol> <li>multiple scales (image resolution)</li> </ol> <p>I think training &amp; inferencing with multiple scales would help improve the score. In several experiments, a lower resolution (384x384) worked better than a higher resolution (512x512).</p> <p>Thank you very much!</p>
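<p>A minimal sketch of tuning ensemble weights on OOF predictions with Optuna, in the spirit of the "optimize cv" script linked above; <code>oof_probs</code> (a list of per-model OOF probability arrays) and <code>y_true</code> (integer labels) are assumed to exist.</p> <pre><code>import numpy as np
import optuna

# oof_probs: list of arrays, each (n_samples, n_classes); y_true: (n_samples,)
def objective(trial):
    raw = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0)
                    for i in range(len(oof_probs))])
    weights = raw / raw.sum()  # normalize so the weights sum to 1
    blended = sum(w * p for w, p in zip(weights, oof_probs))
    return (blended.argmax(axis=1) == y_true).mean()  # CV accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
print(study.best_params, study.best_value)
</code></pre>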
Cassava Leaf Disease Classification
Private 7th | Public 71th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Welkin, the highest 1-fold score of the same model may reach 0.905, while the lowest is 0.891. The 0.905 only shows a better result on that one fold's validation set. On more test data, a model with 1-fold cv=0.891 may have better results, so it is recommended to use an average of all 5 folds for the ensemble.</p> <blockquote> <p>The experience of this competition is extraordinary for me. Due to the noisy data of the PB, I think I am really lucky to enter the gold medal zone, because most people's PB scores are very close. I really appreciate all the discussion topics and public notebooks, and I learned a lot from them. Thank you very much! Congratulations to all the people and teams who won medals!</p> </blockquote> <h1>Solution</h1> <p>Two schemes were selected for submission:</p> <h2>1. the first submission: no secondary processing of noisy labels</h2> <h4>CV 0.90398 | LB 0.906 | PB 0.901</h4> <p><strong>this scheme has the best local CV</strong>. It uses a lot of augmentation, including randomcrop, H/V flip, a large number of RGB transforms, cutout, grid distortion, etc. </p> <p><strong>6modelx4TTA:</strong></p> <table> <thead> <tr> <th>model</th> <th>loss</th> <th>size</th> </tr> </thead> <tbody> <tr> <td>efficientnet_b4_ns</td> <td>bi-tempered</td> <td>512</td> </tr> <tr> <td>seresnext101</td> <td>bi-tempered</td> <td>512</td> </tr> <tr> <td>seresnext101</td> <td>focalcos</td> <td>512</td> </tr> <tr> <td>seresnext50</td> <td>bi-tempered</td> <td>512</td> </tr> <tr> <td>VIT_base16</td> <td>focalcos</td> <td>384</td> </tr> <tr> <td>VIT_base16</td> <td>bi-tempered</td> <td>384</td> </tr> </tbody> </table> <p>The results show that the PB scores of all ensembles submitted without secondary processing of noisy labels were between 0.898 and 0.902. </p> <p>There are two results of PB 0.902:</p> <p><strong>3model and 5xrandomTTA:</strong><br> <strong>CV --- | LB 0.902 | PB 0.902</strong></p> <table> <thead> <tr> <th>model</th> <th>loss</th> <th>size</th> </tr> </thead> <tbody> <tr> <td>efficientnet_b4_ns</td> <td>bi-tempered</td> <td>512</td> </tr> <tr> <td>seresnext101</td> <td>bi-tempered</td> <td>512</td> </tr> <tr> <td>VIT_base16</td> <td>bi-tempered</td> <td>384</td> </tr> </tbody> </table> <p>randomTTA uses the same augmentations as training, applied randomly 5 times.</p> <p><strong>5model and 5xTTA:</strong><br> <strong>CV --- | LB 0.901 | PB 0.902</strong></p> <table> <thead> <tr> <th>model</th> <th>loss</th> <th>size</th> </tr> </thead> <tbody> <tr> <td>efficientnet_b4_ns</td> <td>bi-tempered</td> <td>512</td> </tr> <tr> <td>efficientnet_b4_ns</td> <td>focalcos</td> <td>384</td> </tr> <tr> <td>seresnext101</td> <td>bi-tempered</td> <td>512</td> </tr> <tr> <td>VIT_base16</td> <td>focalcos</td> <td>384</td> </tr> <tr> <td>regnety120</td> <td>focalcos</td> <td>512</td> </tr> </tbody> </table> <h2>2. 
the second submission: based on the prediction results of the first scheme, knowledge distillation is carried out, and the student model results and the teacher model results are weighted at a ratio of 4:6</h2> <h4>CV 0.9063 | LB 0.902 | PB 0.900</h4> <p>The augmentations and the model structures are unchanged.</p> <p><strong>student models and 4xTTA</strong><br> <strong>CV 0.9079 | LB 0.901 | PB 0.898</strong></p> <table> <thead> <tr> <th>model</th> <th>loss</th> <th>size</th> </tr> </thead> <tbody> <tr> <td>efficientnet_b4_ns</td> <td>CE</td> <td>512</td> </tr> <tr> <td>seresnext101</td> <td>CE</td> <td>512</td> </tr> <tr> <td>VIT_base16</td> <td>CE</td> <td>384</td> </tr> <tr> <td>resnext101</td> <td>CE</td> <td>512</td> </tr> <tr> <td>resnest101</td> <td>CE</td> <td>512</td> </tr> </tbody> </table> <p>I think the student models <strong>overfit</strong> the training set, so I tried various ways of weighting them with the predicted probabilities of the first scheme. The PB range is between 0.898 and 0.902.</p> <h2>3. something else (all label changes or deletions apply only to the training set, not to the validation set; the TRUE labels are kept in the validation set)</h2> <p>(As a baseline for the following, with no processing applied to the noisy labels: <br> <strong>efficientnet-b4 5fold 5tta: PB 0.894</strong>; the following attempts all use this benchmark.)</p> <ul> <li>Images with prediction probability greater than 0.9 and incorrect predictions are deleted. <strong>PB 0.895</strong></li> <li>The labels of images with prediction probability greater than 0.9 and incorrect predictions are changed. <strong>PB 0.896</strong></li> <li>Simple aug + snapmix, gradually increasing the probability of snapmix by epoch. <strong>PB 0.896</strong></li> <li>2019+2020 dataset: since the 2019 dataset contains multiple sizes, and any form of resize/crop will affect the final CV, I ultimately did not use the 2019 data (the 2019 data was removed from the validation set). <strong>PB 0.895</strong></li> <li>focalcos loss will increase CV, but for some models it reduces the predicted probability of the true label. </li> <li>In my submissions, when using random TTA, the PB and LB scores are close.</li> <li>RegNet's PB is higher than its LB, but its LB is too low, so RegNet was not selected for submission.</li> </ul> <p>I am very glad to get my first solo gold, and I really look forward to the 1st place solution.</p> <p>Thank you! I use the 6-model average softmax predictions * 0.7 and don't change the original label, like this:</p> <pre><code>original label: [0, 0, 0, 1] 6 models avg softmax pred: [0.6, 0.3, 0, 0.1] new label: [0.42, 0.21, 0, 1] </code></pre> <p>Thank you again! :)</p> <p>Yes~ <br> The time limit for this competition is 9 hours. When the average size of a single model is about 150M and bs = 64, 4 models + 7 TTA takes about 8-9 hours.</p>
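<p>One way to read the soft-label rule in the reply above (average teacher softmax times 0.7, added on top of the original one-hot label and clipped at 1) is the sketch below; it reproduces the [0.42, 0.21, 0, 1] example, though the author's exact formula may differ.</p> <pre><code>import numpy as np

def make_soft_labels(onehot, teacher_probs, w=0.7):
    """new label = clip(one-hot + w * teacher softmax, 0, 1)"""
    return np.clip(onehot + w * teacher_probs, 0.0, 1.0)

onehot = np.array([0.0, 0.0, 0.0, 1.0])
teacher = np.array([0.6, 0.3, 0.0, 0.1])  # avg softmax of the 6 models
print(make_soft_labels(onehot, teacher))  # [0.42 0.21 0.   1.  ]
</code></pre>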
Cassava Leaf Disease Classification
Private LB 101 / Public LB was 433 (+332 ) - Silver solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a> Many thanks for your kernel <a href="https://www.kaggle.com/piantic/no-tta-cassava-resnext50-32x4d-inference-lb0-903/output" target="_blank">https://www.kaggle.com/piantic/no-tta-cassava-resnext50-32x4d-inference-lb0-903/output</a>!</p> <p>Hi Kagglers!</p> <p>The 101st place kernel is here:<br> <a href="https://www.kaggle.com/izotov/public-lb-score-0-901-priate-0-900-101st-place" target="_blank">https://www.kaggle.com/izotov/public-lb-score-0-901-priate-0-900-101st-place</a><br> Public LB rank was only 433.</p> <p>Many thanks to the contributors!<br> Special thanks to the contributors of the source kernels:<br> <a href="https://www.kaggle.com/piantic/no-tta-cassava-resnext50-32x4d-inference-lb0-903/output" target="_blank">https://www.kaggle.com/piantic/no-tta-cassava-resnext50-32x4d-inference-lb0-903/output</a><br> <a href="https://www.kaggle.com/underwearfitting/clean-inference-kernel-8xtta-lb902/data" target="_blank">https://www.kaggle.com/underwearfitting/clean-inference-kernel-8xtta-lb902/data</a><br> <a href="https://www.kaggle.com/shubham108/cassava-ensemble-efnetb3-resnet50" target="_blank">https://www.kaggle.com/shubham108/cassava-ensemble-efnetb3-resnet50</a></p> <p>And my congratulations to the winners!</p>
Cassava Leaf Disease Classification
Private LB 105th / Public LB 970th solution and first medal 🥳
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a>, your <a href="https://www.kaggle.com/piantic/no-tta-cassava-resnext50-32x4d-inference" target="_blank">no tta kernel</a> was super helpful. Definitely my favourite kernel from this competition.</p> <p>First of all, congratulations to all the competitors. Huge thanks to Kaggle and the Makerere University AI Lab for organising this competition. <br> I'd also like to thank my amazing teammate, <a href="https://www.kaggle.com/kjkr73" target="_blank">@kjkr73</a>. He did most of the heavy lifting. He'll be publishing a kernel soon.</p> <p>⚠ We actually jumped to 105. :)</p> <h2>Data</h2> <ul> <li>We only used the competition data.</li> </ul> <h2>Validation Strategy</h2> <ul> <li>Stratified 5-fold.</li> <li>My CV score and LB score were correlated from the very start, so I didn't worry about it too much.</li> <li>Trusted our CV but didn't disregard the public LB entirely.</li> </ul> <h2>Image Preprocessing And Augmentations</h2> <ul> <li>We trained models at 224*224, 256*256, 384*384 and 512*512 image sizes. Larger image sizes always had better scores.</li> <li>We tried multiple combinations of augmentations. The following combinations performed best:<ul> <li>Cutout, HorizontalFlip, VerticalFlip, ShiftScaleRotate, HueSaturationValue, RandomBrightnessContrast.</li> <li>Cutout, CoarseDropout, HorizontalFlip, VerticalFlip.</li></ul></li> </ul> <h2>Neural Network Architectures</h2> <ul> <li>We tried ResNet34, ResNext50_32x4d, EfficientNet B1 NS, EfficientNet B3 NS, EfficientNet B4 NS, ViT-Base patch 16, and DeiT-Base patch 16 224.</li> </ul> <h2>Miscellaneous</h2> <ul> <li>Loss(es):<ul> <li>Bi-tempered loss with label smoothing</li> <li>Taylor Crossentropy with label smoothing</li></ul></li> <li>Optimizer:<ul> <li>Adam</li></ul></li> <li>LR Scheduler(s):<ul> <li>OneCycleLR</li> <li>CosineAnnealingWarmRestarts</li></ul></li> <li>Hardware/ platform:<ul> <li>Kaggle kernels</li></ul></li> </ul> <h2>Final Submissions</h2> <ul> <li>Selected Submission 1 (Private 0.8999, Public 0.9009) is a blend of:<ul> <li>EfficientNet B1 NS on 512*512 image size [1xTTA].</li> <li>ResNext50_32x4d on 512*512 image size [1xTTA].</li> <li>ResNext50_32x4d on 512*512 image size [No TTA].</li> <li>EfficientNet B3 NS on 512*512 image size [No TTA]. </li></ul></li> <li>Selected Submission 2 (Private 0.8981, Public 0.9002) is a blend of:<ul> <li>EfficientNet B1 NS on 512*512 image size [1xTTA].</li> <li>ResNext50_32x4d on 512*512 image size [No TTA].</li> <li>EfficientNet B3 NS on 512*512 image size [No TTA]. </li></ul></li> </ul> <h2>Some observations/ tips for competing in a CV comp with Kaggle Kernels only:</h2> <ul> <li>Data preprocessing != data augmentation. Resize images to the intended size before training; it will save you some GPU quota and speed up your experiments (a minimal sketch is at the end of this write-up). 
Credits: <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>; read more <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/161446" target="_blank">here</a>.</li> <li>Create a proof-of-concept kernel which is essentially a smaller version of the training kernel, for fast experimentation or to check the feasibility of an idea. This kernel should run quickly and return an OOF CV score. You can make this kernel fast with fewer CV folds or fewer epochs. If you see an improvement in this CV score, copy the config and run it on your training kernel.</li> <li>Write modular, maintainable, and readable code.</li> </ul> <h2>Credits</h2> <ul> <li><code>pandas</code> <a href="https://github.com/pandas-dev/pandas" target="_blank">https://github.com/pandas-dev/pandas</a></li> <li><code>torch</code> <a href="https://github.com/pytorch/pytorch" target="_blank">https://github.com/pytorch/pytorch</a></li> <li><code>timm</code> <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">https://github.com/rwightman/pytorch-image-models</a>.</li> <li><code>albumentations</code> <a href="https://github.com/albumentations-team/albumentations" target="_blank">https://github.com/albumentations-team/albumentations</a></li> <li><code>lycon</code> <a href="https://github.com/ethereon/lycon" target="_blank">https://github.com/ethereon/lycon</a></li> </ul> <p>PS: This is our (my and <a href="https://www.kaggle.com/kjkr73" target="_blank">@kjkr73</a>'s) first medal 🥳<br> Edit: Updated LB scores and rank.</p> <p>Thank you!</p> <blockquote> <p>Did you exhaust your weekly kaggle allocations?</p> </blockquote> <p>Yes, but rarely. I used to exhaust weekly quotas frequently in past competitions (like SIIM Melanoma Classification), but ever since Kaggle started allocating <a href="https://www.kaggle.com/product-feedback/173129" target="_blank">'floating' quota</a> I rarely run out of GPU hours. I guess limited resources force you to come up with <em>creative</em> solutions. :)</p> <blockquote> <p>Does it mean something like creating a dataset that was resized to save processing time?</p> </blockquote> <p>Yes. I'm sure there are better ways to do it, but <a href="https://www.kaggle.com/spoon69/spoon-cassava-resize-384" target="_blank">here</a>'s how I do it.</p> <p>Thank you!</p> <p>Since the final standing will also consider the fourth decimal digit of the score (to be announced), in the worst case our submission will tie with teams at 0.8990 and, being the latest submission of all, we would fall to 245. You can read more <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220602" target="_blank">here</a>.</p>
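<p>A minimal sketch of the "resize once, before training" tip above, using OpenCV to write a resized copy of the dataset; the paths and target size are illustrative, not from the write-up.</p> <pre><code>import os
import cv2

SRC, DST, SIZE = "train_images", "train_images_384", 384
os.makedirs(DST, exist_ok=True)

for name in os.listdir(SRC):
    img = cv2.imread(os.path.join(SRC, name))
    img = cv2.resize(img, (SIZE, SIZE), interpolation=cv2.INTER_AREA)
    cv2.imwrite(os.path.join(DST, name), img)

# Training then reads the small images directly, instead of resizing
# the full-size originals on the fly every epoch.
</code></pre>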
Cassava Leaf Disease Classification
Private LB 34th / Public LB 30th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The procedure for stacking is as follows.</p> <p>1. Prepare out-of-fold (OOF) predictions for each model</p> <ul> <li>EfficientnetB4_1 5fold oof. shape=(21397,5)</li> <li>ResNeSt101e 5fold oof. shape=(21397,5)</li> <li>EfficientnetB4_2 5fold oof. shape=(21397,5)</li> <li>SE-Resnext50_32x4d 5fold oof. shape=(21397,5)</li> <li>Vit 10fold oof. shape=(21397,5)</li> </ul> <p>※ Columns are the 5-class confidence scores.</p> <p>2. Predict the test set using the average of the 5 folds for each model</p> <ul> <li>EfficientnetB4_1 5fold test set prediction. shape=(15000,5)</li> <li>ResNeSt101e 5fold test set prediction. shape=(15000,5)</li> <li>EfficientnetB4_2 5fold test set prediction. shape=(15000,5)</li> <li>SE-Resnext50_32x4d 5fold test set prediction. shape=(15000,5)</li> <li>Vit 10fold test set prediction. shape=(15000,5)</li> </ul> <p>3. Concatenate the data from 1 and 2 (pseudo labels)</p> <ul> <li>EfficientnetB4_1 5fold oof + test set prediction. shape=(36397,5)</li> <li>ResNeSt101e 5fold oof + test set prediction. shape=(36397,5)</li> <li>EfficientnetB4_2 5fold oof + test set prediction. shape=(36397,5)</li> <li>SE-Resnext50_32x4d 5fold oof + test set prediction. shape=(36397,5)</li> <li>Vit 10fold oof + test set prediction. shape=(36397,5)</li> </ul> <p>4. Combine the data into one array. shape=(36397, 1, 5, 5)</p> <p>5. Split the combined data into 5 folds with StratifiedKFold.</p> <p>6. Train the 5 folds with a torch.nn.Conv2d → MLP 2-layer model (a minimal sketch appears at the end of this write-up)</p> <p>7. Use the average of the 5 fold models as the final prediction</p> <p>Thank you to all my teammates and participants. I would like to share a summary of our solution.</p> <p>※ Some teams were removed. 
I changed the topic title because our LB changed (private LB 38→34th, public LB 31→30th).</p> <h1>Summary</h1> <p><img src="https://github.com/riron1206/kaggle_Cassava/blob/master/Cassava_Leaf_Disease_Classification_Model_Pipeline.jpg?raw=true" alt="img1"></p> <p>Private LB 34th / Public LB 30th Inference Notebook: <br> <a href="https://www.kaggle.com/anonamename/cassava-emsemble-v2-stacking?scriptVersionId=54065022" target="_blank">https://www.kaggle.com/anonamename/cassava-emsemble-v2-stacking?scriptVersionId=54065022</a></p> <h3>TTA</h3> <p>We increased the number of TTA rounds for the 224*224 ViT model because the accuracy of TTA using RandomResizedCrop at a small image size was not stable.<br> Since the public LB did not increase with stronger augmentations such as RandomBrightnessContrast, we only used flips.<br> Depending on the randomness of TTA, the public LB changed by about 0.001-0.003, but we could not solve this problem.</p> <h3>Blending</h3> <p>We blended ViT-B/16, EfficientnetB4, ResNeSt101e, and SE-Resnext50_32x4d models.<br> We used Unsupervised Data Augmentation (semi-supervised learning) and BYOL (self-supervised learning) on the 2019 data, but they did not contribute much to accuracy improvement.<br> By averaging the k-fold confidences and blending five or more models, we were able to achieve a stable public LB over 0.900.<br> We also tried blending based on the confusion matrix to get a higher cv, but the private LB became worse.</p> <h3>Stacking</h3> <p>We were able to increase the public LB from 0.905 to 0.907 by stacking with Conv2d+MLP.<br> We used the blended model's predictions as pseudo labels.<br> To avoid overfitting, we trained the model with Gaussian noise added to the features.</p> <hr> <h3>Comment</h3> <p>For simple average blending without stacking, there was a submission that scored 0.902 on both the public and private LB.<br> <img src="https://github.com/riron1206/kaggle_Cassava/blob/master/sub_img.png?raw=true" alt="img2"></p> <p>As noted in other discussions, simple average blending was better for noisy test sets.</p>
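<p>A minimal sketch of the Conv2d → MLP stacking model described in the procedure above: inputs of shape (N, 1, 5, 5) (5 base models × 5 class confidences), with Gaussian noise added to the features during training to reduce overfitting. The layer widths and noise level are illustrative assumptions, not the team's exact settings.</p> <pre><code>import torch
import torch.nn as nn

class StackingNet(nn.Module):
    """Maps (N, 1, 5, 5) base-model confidences to 5 class logits."""
    def __init__(self, n_models=5, n_classes=5, noise_std=0.05):
        super().__init__()
        self.noise_std = noise_std
        # mix the models' opinions, separately for each class column
        self.conv = nn.Conv2d(1, 16, kernel_size=(n_models, 1))
        self.mlp = nn.Sequential(
            nn.Flatten(),                   # 16 * 1 * n_classes features
            nn.Linear(16 * n_classes, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        if self.training and self.noise_std &gt; 0:
            x = x + torch.randn_like(x) * self.noise_std  # regularizing noise
        return self.mlp(torch.relu(self.conv(x)))
</code></pre>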
Cassava Leaf Disease Classification
Private LB 36th place/Public LB 1138th place [Silver medal] solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Wow what a competition! We jumped from our public leaderboard position of 1138th place to the private position of 36th place, giving us a silver medal. Our solution is quite simple, we didn't utilize any techniques to deal with noise. Did I mention our submissions finished just minutes before the deadline?</p> <h1>Solution:</h1> <p>The components of our silver medal solution:</p> <ol> <li>EfficientNet-B7</li> <li>EfficientNet-B3a from timm library</li> <li>SE-ResNext50</li> </ol> <p><strong>EfficientNet-B7:</strong><br> This is basically the same as <a href="https://www.kaggle.com/abhishek/tez-faster-and-easier-training-for-leaf-detection" target="_blank">this kernel</a> but with an EfficientNet-B7 model, 20 epochs, and image size of 512x512. 5-fold CV of:<br> This model was submitted with a 3x TTA of the following augmentations:</p> <pre><code> RandomResizedCrop, Transpose(p=0.5), HorizontalFlip(p=0.5), VerticalFlip(p=0.5), RandomBrightnessContrast( brightness_limit=(-0.1,0.1), contrast_limit=(-0.1, 0.1), p=0.5 ), </code></pre> <p>We will note that during the private submissions leak (~1.5 months ago), we noticed this model leaderboard submission was 9th place. </p> <p><strong>EfficientNet-B3a:</strong></p> <p>Training for this model was inspired by the training procedure in <a href="https://www.kaggle.com/keremt/progressive-label-correction-paper-implementation" target="_blank">this kernel</a> (but without label correction). Basically, the EfficientNet-B3a (a version of EfficientNet-B3 with weights trained by Ross Wightman) with the model body frozen and the fastai model head unfrozen was <br> trained for 3 epochs. Following this, the whole model was unfrozen and trained for 10 epochs. 512x512 images were used, batch size of 32. The augmentations used were as follows:</p> <pre><code> Dihedral(p=0.5), Rotate(p=0.5, max_deg=45), RandomErasing(p=0.5, sl=0.05, sh=0.05, min_aspect=1., max_count=15), Brightness(p=0.5, max_lighting=0.3, batch=False), Hue(p=0.5, max_hue=0.1, batch=False), Saturation(p=0.5, max_lighting=0.1, batch=False), RandomResizedCropGPU(size, min_scale=0.4), CutMix() </code></pre> <p>Label smoothing loss function with Ranger optimizer was used. The CutMix+label smoothing was quite helpful, and we noticed a significant CV boost because of this. The model with the best validation loss was saved. This resulted in a model with a 5-fold CV of: 0.8914<br> The model was submitted with 5x TTA with the training augmentations (<code>learn.tta(n=5,beta=0)</code>).</p> <p><strong>SE-ResNext50:</strong></p> <p>I just changed the model from the previous pipeline to SE-ResNext50. A 5-fold CV of: 0.8938</p> <p>These three models were ensembled and submitted. </p> <p>Public LB: 0.898<br> Private LB: 0.900</p> <p>We also had another SE-ResNext50 model with different augmentations, LR schedule, etc. along with stochastic weight averaging during the end of training. This was submitted with the EfficientNet-B3a and the EfficientNet-B7 as an ensemble, and this got Public LB: 0.897, Private LB: 0.901 (potential gold territory), but this was not selected unfortunately. 
However, we are grateful that we even managed to make a 1135 rank jump!</p> <p><strong>A few of the other things we tried:</strong><br> - Tried Bi-tempered logistic loss; observed no CV improvement<br> - Tried progressive label correction; observed no CV improvement<br> - Tried pseudolabeling on the test set; it was a last-minute idea and the public LB was 0.139, so likely an error in implementation</p> <p><strong>Reflections:</strong><br> - This competition was quite interesting, even though the dataset was quite messy and filled with errors<br> - Even with many errors, the model training was robust, leading to decent performance on the CV and the private leaderboard<br> - While we all started out quite enthusiastic regarding this competition, other tasks/duties took higher priority for many of us, and only a couple of us dedicated significant time to it. I wonder what the situation would be if all team members could dedicate more time to the competition. Would we reach an even higher place? Or would we overfit to the leaderboard? </p> <p>Thanks to my great teammates, especially <a href="https://www.kaggle.com/tanulsingh077" target="_blank">@tanulsingh077</a> and <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a>!</p>
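<p>The CutMix + label smoothing combination above gave the team a significant CV boost; their pipeline used fastai's built-in label smoothing loss, so the following is a generic PyTorch equivalent for reference, not the team's code (<code>eps</code> is an illustrative value).</p> <pre><code>import torch.nn.functional as F

def label_smoothing_ce(logits, target, eps=0.1):
    """Cross entropy against a smoothed target: (1 - eps) on the true
    class, with eps spread uniformly over all classes."""
    log_probs = F.log_softmax(logits, dim=1)
    nll = F.nll_loss(log_probs, target)        # true-class term
    uniform = -log_probs.mean(dim=1).mean()    # spread-over-all-classes term
    return (1.0 - eps) * nll + eps * uniform
</code></pre>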
Cassava Leaf Disease Classification
Private score 90.14 ( gold ) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all those who moved up by miles to get into medal positions; for the rest who were affected by the shakeup, it's always a learning experience at the end of the day.</p> <p>This is just for those who wish to see the solution that reached a 90.1 score on the private LB. It also shows how near 90.1 was to the gold zone.</p> <p><a href="https://www.kaggle.com/jaideepvalani/with-b4-pytorch-efficientnet-baseline?scriptVersionId=54134057" target="_blank">https://www.kaggle.com/jaideepvalani/with-b4-pytorch-efficientnet-baseline?scriptVersionId=54134057</a></p> <p>Above is the notebook. </p> <p>Below is our solution outline:<br> 1) Ensemble models <br> a) B4 Fastai - trained using label smoothing cross entropy (eps 0.35)<br> b) ResNeXt IBN / ResNeXt101 / ViT - trained using tempered softmax loss (bi-tempered) <br> Thanks to my teammate <a href="https://www.kaggle.com/cswwp347724" target="_blank">@cswwp347724</a> for these models</p> <p>and also <a href="https://www.kaggle.com/rohitsingh9990" target="_blank">@rohitsingh9990</a> for putting in very hard effort to help us reach 90.7 on the last day. </p> <p>It's sad that we couldn't select our best submission even though there was a second thought about selecting a conservative public LB score. But never mind, we are fortunate that we are still in silver. <br> All our models were trained using a similar methodology.</p> <p>I was hoping for <a href="https://www.kaggle.com/mobassir" target="_blank">@mobassir</a> and <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s team to have stayed at a winning position or at least in gold, but it's always a good learning experience at the end of every competition.</p> <p>Well done, you guys!</p>
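<p>Since the ensemble leans on the bi-tempered (tempered softmax) loss, here is a compact sketch following Amid et al., "Robust Bi-Tempered Logistic Loss Based on Bregman Divergences", with the fixed-point normalization run for a few iterations. This is a simplified reimplementation for illustration, not the team's training code; <code>t1</code> and <code>t2</code> are illustrative temperatures.</p> <pre><code>import torch

def log_t(u, t):
    return u.log() if t == 1.0 else (u.pow(1.0 - t) - 1.0) / (1.0 - t)

def exp_t(u, t):
    if t == 1.0:
        return u.exp()
    return torch.relu(1.0 + (1.0 - t) * u).pow(1.0 / (1.0 - t))

def tempered_softmax(activations, t, n_iters=5):
    # fixed-point estimate of the normalizer (valid for t &gt; 1)
    mu = activations.max(dim=-1, keepdim=True).values
    a0 = activations - mu
    a = a0
    for _ in range(n_iters):
        z = exp_t(a, t).sum(dim=-1, keepdim=True)
        a = a0 * z.pow(1.0 - t)
    z = exp_t(a, t).sum(dim=-1, keepdim=True)
    norm = -log_t(1.0 / z, t) + mu
    return exp_t(activations - norm, t)

def bi_tempered_loss(activations, labels, t1=0.8, t2=1.2):
    """labels are one-hot (or smoothed) targets; t1 &lt; 1 bounds the
    loss, t2 &gt; 1 gives the softmax a heavier tail."""
    probs = tempered_softmax(activations, t2)
    loss = (labels * (log_t(labels + 1e-10, t1) - log_t(probs, t1))
            - (labels.pow(2.0 - t1) - probs.pow(2.0 - t1)) / (2.0 - t1))
    return loss.sum(dim=-1).mean()
</code></pre>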
Cassava Leaf Disease Classification
Public 13th / Private 184th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Cassava Leaf Disease Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Whoa, thank you. I dropped a lot in the final standings… I fixed the file, but can you still not see the images?? <a href="https://github.com/choco9966/Cassava-Leaf-Disease-Classification/tree/main/image" target="_blank">https://github.com/choco9966/Cassava-Leaf-Disease-Classification/tree/main/image</a> It was the two submission images at this link, but Kaggle's image upload seems to be broken right now 😭😭</p> <h3>1. Overview</h3> <p>We selected two versions: one increasing the variety of models, and one with the highest score on the leaderboard.</p> <p><strong>Submission1 (Variety Models)</strong></p> <ul> <li>Public LB : 904 Private LB : 899</li> <li>Inference Code : <a href="https://www.kaggle.com/hihunjin/avengers-assemble-3-model10-tta2?scriptVersionId=54655767" target="_blank">https://www.kaggle.com/hihunjin/avengers-assemble-3-model10-tta2?scriptVersionId=54655767</a></li> </ul> <p><img src="https://github.com/choco9966/Cassava-Leaf-Disease-Classification/blob/main/image/submission1.png?raw=true" alt=""></p> <p><strong>Submission2 (Highest LB)</strong> </p> <ul> <li>Public LB : 908 Private LB : 897</li> <li>Inference Code : <a href="https://www.kaggle.com/chocozzz/sub4-effx3-regx3-tta-3?scriptVersionId=54655742" target="_blank">https://www.kaggle.com/chocozzz/sub4-effx3-regx3-tta-3?scriptVersionId=54655742</a></li> </ul> <p><img src="https://github.com/choco9966/Cassava-Leaf-Disease-Classification/blob/main/image/submission2.png?raw=true" alt="Submission2"></p> <h3>2. Tricks or Magics</h3> <ul> <li>Augmentations : LB 899 to 900 with GridMask</li> </ul> <pre><code>def get_train_transforms():
    return Compose([
        RandomResizedCrop(CFG['img_size'], CFG['img_size']),
        Transpose(p=0.5),
        HorizontalFlip(p=0.5),
        VerticalFlip(p=0.5),
        ShiftScaleRotate(p=0.5),
        HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.5),
        RandomBrightnessContrast(brightness_limit=(-0.1,0.1), contrast_limit=(-0.1, 0.1), p=0.5),
        Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
        CoarseDropout(p=0.5),
        GridMask(num_grid=3, p=0.5),  # 899 to 900
        ToTensorV2(p=1.0),
    ], p=1.)
</code></pre> <ul> <li><p>Label Smoothing Loss : LB 900 to 901 </p> <ul> <li>alpha : 0.2 (When testing values between 0.05 and 0.3, 0.2 gave the best score.)</li></ul></li> <li><p>SWA : LB 901 to 902 (a minimal sketch is at the end of this write-up)</p></li> <li><p>TTA Inference : LB 902 to 905 </p></li> </ul> <pre><code>def get_inference_transforms():
    return Compose([
        OneOf([
            Resize(CFG['img_size'], CFG['img_size'], p=1.),
            CenterCrop(CFG['img_size'], CFG['img_size'], p=1.),
            RandomResizedCrop(CFG['img_size'], CFG['img_size'], p=1.)
        ], p=1.),
        Transpose(p=0.5),
        HorizontalFlip(p=0.5),
        VerticalFlip(p=0.5),
        Resize(CFG['img_size'], CFG['img_size']),
        Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
        ToTensorV2(p=1.0),
    ], p=1.)
</code></pre> <ul> <li>only CenterCrop : 902 </li> <li>using RandomResized + CenterCrop : 904</li> <li>using Resize + CenterCrop + RandomResizedCrop : 905 </li> <li>Optimizer : Adam to AdamP (EfficientNet's performance decreased)<ul> <li>VIT : 901 to 902 </li> <li>RegNetY40 : 904 to 905 </li> <li>Eff : 905 to 904</li></ul></li> </ul> <p>Ensemble (Eff 905 / RegNetY 905 / VIT 902)</p> <ul> <li>EFF(2019+2020) + EFF(2020) + EFF(SEED 720) + RegNetY(2019+2020) + RegNetY(2020) + RegNetY(Distillation) : 908</li> <li>EFF(2019+2020) + EFF(2020) + RegNetY(2019+2020) + RegNetY(2020) : 907</li> <li>EFF + RegNetY : 907 </li> <li>EFF + VIT : 906 </li> <li>EFF + VIT + RegNetY : 906 </li> </ul> <h3>3. Trial and Error</h3> <ul> <li>Preprocessing : Noisy Label Delete (using oof score), Clean Lab, Merging 2019 Dataset </li> <li>Augmentation : Cutmix, Cutout, Fmix, 5 Crop, etc.</li> <li>Loss : Bi-Tempered Logistic Loss, Focal Loss, Taylor Loss, etc.</li> <li>Models : NFNet, BotNet, DeiT, Distillation, Fixmatch </li> <li>Optimizer : AdamW, AdamP (sometimes it works for a model, and sometimes it doesn't), RAdam, SAM </li> <li>Ensemble : SEED Ensemble, VIT + RegNetY40 + EfficientNet, etc. </li> <li>Others : Batch Normalization Freeze, Hyperparameter Tuning (lr, gradient accumulation, etc.), Global Average Pooling to concatenated Max Pooling + GAP, etc. </li> </ul> <p>You can see all our training code in the GitHub repo below. </p> <ul> <li>GitHub : <a href="https://github.com/choco9966/Cassava-Leaf-Disease-Classification" target="_blank">https://github.com/choco9966/Cassava-Leaf-Disease-Classification</a></li> </ul> <p>Lastly, I would like to say thank you to the team members who worked hard for 2 months: @Minyong Shin <a href="https://www.kaggle.com/bellwood" target="_blank">@bellwood</a> <a href="https://www.kaggle.com/hihunjin" target="_blank">@hihunjin</a> <a href="https://www.kaggle.com/cysohn" target="_blank">@cysohn</a> </p> <p>Statistics King, you also dropped a medal, sadly. Great work! Let's set aside some time later and catch up, haha.</p> <p>Great work~~!! 👍👍</p>
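<p>SWA gave the +0.001 bump noted above (LB 901 to 902); here is a minimal sketch (the one referenced earlier) using PyTorch's built-in <code>torch.optim.swa_utils</code>. The schedule values, <code>model</code>, <code>criterion</code>, and <code>train_loader</code> are illustrative assumptions, not the team's exact settings.</p> <pre><code>import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
swa_model = AveragedModel(model)            # running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=1e-5)
swa_start = 7                               # start averaging near the end

for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images.cuda()), labels.cuda())
        loss.backward()
        optimizer.step()
    if epoch &gt;= swa_start:
        swa_model.update_parameters(model)
        swa_scheduler.step()

# recompute BatchNorm statistics for the averaged weights before inference
update_bn(train_loader, swa_model, device=torch.device("cuda"))
</code></pre>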