Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
1st and Future - Player Contact Detection | 14th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank our hosts for organizing the competition.<br>
It was a task I've never solved before, and it was both educational and a lot of fun trying different approaches!</p>
<h1>Summary</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1146523%2Fe323a35a9255e8e50131807aa19b56ef%2Fsolution.png?generation=1677715798541475&alt=media" alt=""></p>
<h1>Model Detail</h1>
<h2>3D-CNN (cv: 0.770)</h2>
<ul>
<li>backbone: r3d_18 (from torchvision: <a href="https://pytorch.org/vision/stable/models/generated/torchvision.models.video.r3d_18.html#torchvision.models.video.R3D_18_Weights" target="_blank">https://pytorch.org/vision/stable/models/generated/torchvision.models.video.r3d_18.html#torchvision.models.video.R3D_18_Weights</a>)</li>
<li>use 63 frames(20fps)</li>
<li>predict 19 steps</li>
<li>train every 9 steps</li>
<li>StepLR Scheduler(~2epochs: lr=1e-3/1e-4)</li>
</ul>
<h2>2.5D3D-CNN (cv: 0.768)</h2>
<ul>
<li>Almost same as DFL's 1st solution by Team Hydrogen (<a href="https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout/discussion/359932" target="_blank">https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout/discussion/359932</a>)</li>
<li>backbone: legacy_seresnet34</li>
<li>use 123 frames(20fps)</li>
<li>predict 3 frames</li>
<li>down sampling (g: 10%, contact: 30%)</li>
<li>label smoothing (0.1-0.9)</li>
</ul>
<h2>Both 3D, 2.5D3D</h2>
<ul>
<li>Linear layers for g and contact</li>
</ul>
<pre><code>
# route each sample through the ground (G) head or the player-contact head using the is_g flag
x_contact = model_contact(x)
x_g = model_g(x)
not_is_g = (is_g == 0)  # the compared value was lost in extraction; 0 assumed for a 0/1 is_g indicator
x = x_contact * not_is_g + x_g * is_g
</code></pre>
<ul>
<li>output 3 predictions and calculate a loss for each: sideline only, endzone only, and the concatenated sideline-endzone features.</li>
</ul>
<pre><code>
def forward(x_sideline_image, x_endzone_image):  # def line reconstructed; it was lost in extraction
    # a shared CNN encodes each view; three heads give the three supervised outputs
    x_sideline = cnn(x_sideline_image)
    x_endzone = cnn(x_endzone_image)
    return fc(torch.cat([x_sideline, x_endzone])), fc_sideline(x_sideline), fc_endzone(x_endzone)
</code></pre>
<h2>LGBM (cv: 0.740)</h2>
<ul>
<li>about 1100 features</li>
<li>features<ul>
<li>player's distance (tracking, helmet)</li>
<li>lag, diff</li>
<li>top_n nearest player's distance (n: parameters)</li>
<li>number of people within distance n (n: parameters); see the sketch after this list</li></ul></li>
<li>groupby<ul>
<li>game_play</li>
<li>is_g</li>
<li>is_same_team</li>
<li>number of people within distance n </li></ul></li>
</ul>
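<p>A minimal pandas/numpy sketch of how the two distance-based features above (top-n nearest distance, number of players within distance n) could be computed per step; the column names <code>x_position</code>/<code>y_position</code> and the parameter values are assumptions, not the author's actual code:</p>
<pre><code>import numpy as np
import pandas as pd

def distance_features(step_df, n_list=(1, 2, 3), radius_list=(1.0, 2.0)):
    """step_df: tracking rows of one (game_play, step) with x_position / y_position columns."""
    xy = step_df[["x_position", "y_position"]].to_numpy()
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # pairwise player distances
    np.fill_diagonal(dist, np.inf)                                   # ignore self-distance
    feats = {}
    sorted_dist = np.sort(dist, axis=1)
    for n in n_list:
        feats[f"dist_to_nearest_{n}"] = sorted_dist[:, n - 1]
    for r in radius_list:
        feats[f"n_players_within_{r}"] = (dist < r).sum(axis=1)
    return pd.DataFrame(feats, index=step_df.index)

# usage: tracking.groupby(["game_play", "step"], group_keys=False).apply(distance_features)
</code></pre>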
<h2>ensemble</h2>
<p>Weighted ensemble, G and contact respectively.</p>
<h2>What worked for me</h2>
<ul>
<li>image preprocessing<ul>
<li>draw bbox -> draw bbox and paint out</li>
<li>use 2 colors(g, contact) -> use 3 colors(g, same team contact, different team contact)</li>
<li>crop the image while keeping the aspect ratio</li></ul></li>
</ul>
<pre><code># NOTE: the constant ratio values, column names and resize size were lost from this
# snippet during extraction; the structure of the aspect-ratio-preserving crop is kept
# below with "..." standing in for the missing pieces.
bbox_left_ratio = ...
bbox_right_ratio = ...
bbox_top_ratio = ...
bbox_down_ratio = ...
for col in [...]:
    df[col] = df[[..., ...]].mean(axis=1)
df[...] = df[[..., ...]].mean(axis=1)
df[...] = df.groupby([..., ..., ...])[...].transform(...)
series = df.iloc[...]
left = int(series[...] - series[...] * bbox_left_ratio)   # int() casts assumed for slicing
right = int(series[...] + series[...] * bbox_right_ratio)
top = int(series[...] + series[...] * bbox_top_ratio)
down = int(series[...] - series[...] * bbox_down_ratio)
img = img[down:top, left:right]
img = cv2.resize(img, (..., ...))
</code></pre>
<ul>
<li>StepLR with warmup scheduler (see the sketch after this list)</li>
<li>label smoothing (worked for 2.5D3D, but not worked for 3D)</li>
</ul>
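<p>A minimal PyTorch sketch of a StepLR schedule with a linear warmup; the warmup length and decay points below are placeholders, not the values used in the actual training:</p>
<pre><code>import torch

model = torch.nn.Linear(10, 1)                       # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=1)  # 1 warmup epoch
decay = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.1)              # 1e-3 to 1e-4 after ~2 epochs
scheduler = torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers=[warmup, decay], milestones=[1])

for epoch in range(4):
    # ... one training epoch here ...
    optimizer.step()      # placeholder for the per-batch updates
    scheduler.step()      # stepped once per epoch in this sketch
</code></pre>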
<h2>What did not work for me</h2>
<ul>
<li>Transformers<ul>
<li>use top 100~400 features of lgbm feature importances</li>
<li>tuned hard, but the CV was 0.02 lower than LGBM.</li></ul></li>
<li>2D->1D CNN<ul>
<li>the contact score was the same as the 2.5D3D and 3D models, but the G score was very poor in my experiments.</li></ul></li>
<li>interpolate bbox</li>
</ul>
<h2>Other</h2>
<ul>
<li>tools: I made tools to investigate wrong predictions and form hypotheses to improve the score.<br>
<a href="https://github.com/kurupical/nfl_contact_detection/blob/master/58218_003210_contact_0.506591796875_score0.0_H23_V10.gif" target="_blank">https://github.com/kurupical/nfl_contact_detection/blob/master/58218_003210_contact_0.506591796875_score0.0_H23_V10.gif</a></li>
</ul> |
1st and Future - Player Contact Detection | 16th place solution - Team : Deimon Devil Bats | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comment.<br>
The implementation is as follows</p>
<pre><code># NOTE: the group keys, new column names and shift offsets were lost from this snippet
# during extraction; "..." marks the missing pieces (typically game_play and the player
# pair as keys, shifting the column by one step in each direction).
gb = test.groupby([..., ..., ...])
test[...] = gb[col_name].shift(-1)   # shift amount assumed
test[...] = gb[col_name].shift(1)    # shift amount assumed
</code></pre>
<p>Thanks to the organizers and the kaggle team for organizing the contest. EDA(match watching) was a lot of fun. Thanks to all participants for their hard work. I'll be reading and learning from your solutions!</p>
<p>Also, thanks to the team, I could do best until the finish. Thanks <a href="https://www.kaggle.com/yokuyama" target="_blank">@yokuyama</a> <a href="https://www.kaggle.com/shimishige" target="_blank">@shimishige</a> !</p>
<p>During the first half of the competition, each team member tried to create models in their own way (3D segmentation, CenterNet, etc.), but unfortunately, the scores did not increase at all (LB score < 0.7). With 3 weeks remaining, we changed our policy and proceeded on the basis of public notebooks.</p>
<h2>Summary</h2>
<p>This is a 2-stage model of Deep Learning (2.5D CNN, Transformer) and GBDT. Each is based on two public notebooks. Thanks <a href="https://www.kaggle.com/zzy990106" target="_blank">@zzy990106</a> (<a href="https://www.kaggle.com/code/zzy990106/nfl-2-5d-cnn-baseline-inference" target="_blank">2.5DCNN</a>) , <a href="https://www.kaggle.com/columbia2131" target="_blank">@columbia2131</a> (<a href="https://www.kaggle.com/code/columbia2131/nfl-player-contact-detection-simple-xgb-baseline" target="_blank">GBDT</a> ).</p>
<p>The deep models' CV was shaky enough that we worried about the correctness of the CV calculation, but they seem to have been sufficient as features for the GBDT. The deep models' CV is calculated on pairs with dist<2 only.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3381868%2F6d49493565d812fb14f36fded4e4e005%2Fnfl_summary.png?generation=1677776470936082&alt=media" alt=""></p>
<h2>1st stage</h2>
<p><strong>2.5D CNN</strong><br>
We created a 1-class output model that predicts player contact and G in the same class, and a 2-class output model that predicts them separately. Two models were created for Endzone and Sideline, respectively, for a total of 4 models.</p>
<ul>
<li>Common settings<ul>
<li>input : Image (±4frame), Tracking data</li>
<li>backbone : tf_efficientnet_b0_ns</li>
<li>Image cropping based on predicted player helmet size (max(width, height)*5)</li>
<li>Prediction only for distance<2</li>
<li>mixup</li></ul></li>
<li>1class<ul>
<li>Train data downsampling (negative samples reduced to 40,000 samples)</li></ul></li>
<li>2class<ul>
<li>Helmet position heatmap for player 1 and 2 (<a href="https://www.kaggle.com/competitions/nfl-impact-detection/discussion/208947" target="_blank">reference</a>)</li>
<li>Temporal Shift Module (<a href="https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout/discussion/360236#2003353" target="_blank">reference code</a> Thanks <a href="https://www.kaggle.com/bamps53" target="_blank">@bamps53</a>)</li></ul></li>
</ul>
<p><strong>Transformer + LSTM</strong></p>
<ul>
<li>30% skip connection Transformer (<a href="https://www.kaggle.com/code/cdeotte/tensorflow-transformer-0-112?scriptVersionId=79039122&cellId=21" target="_blank">reference code</a> Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> )</li>
<li>LSTM in last layer</li>
<li>25 features based on tracking data</li>
<li>Scaling with RobustScaler</li>
</ul>
<h2>2nd stage</h2>
<ul>
<li>catboost was a little better than XGB</li>
<li>Features (public notebook +)<ul>
<li>Tracking data : diff, shift, product</li>
<li>Deep model prob : shift, cummax, cumsum</li>
<li>helmet size, etc.</li></ul></li>
</ul>
<p>↓Adding Deep model predictions (especially CNN) improves the score</p>
<table>
<thead>
<tr>
<th></th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>Table only</td>
<td>0.7030</td>
</tr>
<tr>
<td>+ 2.5D CNN 1class</td>
<td>0.7540</td>
</tr>
<tr>
<td>+ 2.5D CNN 2class</td>
<td>0.7681</td>
</tr>
<tr>
<td>+ Transformer+LSTM</td>
<td>0.7683</td>
</tr>
</tbody>
</table>
<hr>
<p><strong>What did not work</strong></p>
<ul>
<li>Interpolation of undetected helmet boxes by homography transformation. There was probably a lot of noise, and the score worsened when the boxes interpolated by the transformation were used as-is.</li>
<li>Using embedding of CNN</li>
<li>Batch prediction of players in images (3D segmentation, CenterNet, etc.)</li>
</ul> |
1st and Future - Player Contact Detection | 18th place solution : 2d-cnn / 1d-cnn / XGB / 1d-cnn | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for the feedback: indeed, for me stage 1 was already very effective in highlighting the players in contact; stage 2 just had the effect of smoothing its values. However, in stage 3 the association of the players was much more noisy, so the gain from using time after this stage was much greater. For training I used 64-step windows (for batch optimization) and then in inference mode I used the whole game as input.</p>
<p>First of all, I want to thank the hosts of this competition and my team: <a href="https://www.kaggle.com/chenlin1999" target="_blank">@chenlin1999</a> and <a href="https://www.kaggle.com/hanzhou0315" target="_blank">@hanzhou0315</a></p>
<h1>Summary</h1>
<p>Our solution is made of 4 stages :</p>
<ol>
<li><strong>2d-cnn</strong> : The model predicts for each player if the player is in contact as well as if the player is on the ground</li>
<li><strong>1d-cnn</strong> : This stage is intended to smooth the prediction of 1. using the temporality</li>
<li><strong>XGB</strong> : It is at this moment that we associate the contacts between players</li>
<li><strong>1d-cnn</strong> : This stage is intended to smooth the prediction of 3. using the temporality</li>
</ol>
<h1>Validation methodology</h1>
<p>We opted for a Stratified Group 5-fold cross-validation by <code>game_play</code>: this strategy seemed the most correlated with the LB and the most natural. Our final solution reaches a CV score of 0.77174, a private LB of 0.77004 and a public LB of 0.76219.</p>
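<p>A minimal sketch of this split, assuming a pair-level dataframe with a <code>contact</code> label and a <code>game_play</code> group column (the toy data below only illustrates the API):</p>
<pre><code>import pandas as pd
from sklearn.model_selection import StratifiedGroupKFold

# toy frame standing in for the pair-level training table
df = pd.DataFrame({
    "game_play": [f"gp_{i % 10}" for i in range(1000)],
    "contact": [int(i % 7 == 0) for i in range(1000)],
})

sgkf = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, valid_idx) in enumerate(
        sgkf.split(df, y=df["contact"], groups=df["game_play"])):
    # no game_play appears in both the train and validation part of a fold
    print(fold, len(train_idx), len(valid_idx))
</code></pre>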
<h1>Stage1 : 2d-cnn</h1>
<p>At this stage it is very easy to overfit on the data so we only trained for 2 epochs. We used the timm models: efficientnetv2_rw_s and convnext_base_in22k for the final submission.</p>
<p>The input is composed of 2 RGB images for the Endzone and the Sideline, then we concatenate the features to make a prediction. To add supervision to this model we used features created from the tabular data. It's a bit similar to <a href="https://www.kaggle.com/competitions/petfinder-pawpularity-score/discussion/301015" target="_blank">this</a><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6012866%2Fccac7fba20baef4c47053060a748ff75%2Fimage_2023-03-03_164244573.png?generation=1677879764555718&alt=media" alt=""></p>
<h1>Stage2 : 1d-cnn</h1>
<p>It is a simple CNN with 5 layers and kernels of 3. To ensure temporal consistency we sorted by <code>["game_play","nfl_player_id","step"]</code></p>
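<p>A minimal PyTorch sketch of the kind of 5-layer, kernel-3 1D CNN smoother described here; the channel widths and input layout are placeholders (one channel of stage-1 probabilities per sorted step):</p>
<pre><code>import torch
import torch.nn as nn

class Smoother1D(nn.Module):
    def __init__(self, in_ch=1, hidden=64):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(4):                                       # 4 hidden conv layers
            layers += [nn.Conv1d(ch, hidden, kernel_size=3, padding=1), nn.ReLU()]
            ch = hidden
        layers += [nn.Conv1d(ch, 1, kernel_size=3, padding=1)]   # 5th layer: per-step logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):        # x: (batch, in_ch, steps)
        return self.net(x)       # (batch, 1, steps)

probs = torch.rand(8, 1, 64)     # stage-1 probabilities over 64 steps
logits = Smoother1D()(probs)
</code></pre>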
<h1>Stage3 : XGB</h1>
<p>It's XGB like <a href="https://www.kaggle.com/code/columbia2131/nfl-player-contact-detection-simple-xgb-baseline" target="_blank">this</a> by <a href="https://www.kaggle.com/columbia2131" target="_blank">@columbia2131</a> and we added the features from the previous stage</p>
<h1>Stage4 : 1d-cnn</h1>
<p>It is a simple CNN with 5 layers and kernels of 3. To ensure temporal consistency we sorted by <code>["game_play","nfl_player_id_1","nfl_player_id_2","step"]</code></p>
<h1>Final results</h1>
<table>
<thead>
<tr>
<th>Stage1</th>
<th>Stage2</th>
<th>Stage3</th>
<th>Stage4</th>
<th>CV</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td>✓</td>
<td></td>
<td>0.65</td>
<td>0.645</td>
<td>0.645</td>
</tr>
<tr>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td>0.718</td>
<td>0.715</td>
<td>0.716</td>
</tr>
<tr>
<td>✓</td>
<td></td>
<td>✓</td>
<td>✓</td>
<td>0.731</td>
<td>0.729</td>
<td>0.725</td>
</tr>
<tr>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>0.771</td>
<td>0.762</td>
<td>0.770</td>
</tr>
</tbody>
</table>
<p>Yes I think you are right : I have the same thing in mind</p> |
1st and Future - Player Contact Detection | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ul>
<li>dropout 0.5 is natural for me.</li>
<li>At times, I utilize a larger dropout rate, such as 0.6 or 0.7. If my model appears to overfit easily, one of the initial experiments I aim to conduct is to increase the dropout rate and adjust the intensity of data augmentation.</li>
</ul>
<p>To be honest, I'm unsure. From what I've observed, several teams have achieved a high public score within a short inference time, so I guess that they might be unintentionally filtering out too many true positives from their private test data based on their CV score and public leaderboard feedback. In contrast, I used a filtering threshold that was five times lower than in my CV score to ensure that I didn't miss out on too many true positives in my submission.</p>
<p>Thanks to NFL and Kaggle for hosting this interesting competition.<br>
My approach comprises three main components</p>
<ul>
<li>A weak xgb model to remove easy negative samples</li>
<li>A CNN to classify contact</li>
<li>A xgb model to post-process the output.</li>
</ul>
<p>Since my xgb preprocessing was not really good compared to other teams' (CV ~0.72), I will only elaborate on my CNN and post-processing method in this write-up.<br>
<strong>1. 3D CNN for Video Classification</strong><br>
<strong><em>1.1 Input generator</em></strong></p>
<p>I separate the modeling and training of player-player (PP) and player-ground (PG) contacts.<br>
The PP model is trained using input from three sources, namely endzone video, sideline video, and tracking data. On the other hand, the PG model is trained using input from only two sources, namely endzone video and sideline video. Notably, including tracking data does not result in improved performance for the PG model.</p>
<p><strong><em>1.1.1 Input generator for PP model</em></strong></p>
<p>The endzone and sideline videos are processed similarly. Firstly, I extract 18 images from neighboring frames, namely {frame[-44], -37, -30, -24, -18, -13, -8, -4, -2, 0, 2, 4, 8, 13, 18, 24, 30, frame[37]}. The frame[-44] represents 44 frames prior to the current sample's estimated frame. This sampling technique enables the model to observe more frames close to the estimated frame. </p>
<p>Next, I mask the players' heads in contact with a black or white circle to guide the model's attention to the relevant players. Rather than using an additional channel, I mark the players' heads directly into the image. I made this decision to maintain the input's 3-channel format, which maximizes the utilization of the pretrained weight file. Finally, I crop each image around the players' contact area using a crop size of 10 times the mean helmet box size within the specified frame range.</p>
<p>To enable the tracking data to be stacked with images from the endzone and sideline, I simulate the tracking data as images. To accomplish this, I use the OpenCV cv2.circle function to plot each player's position in a specific step on a black background. I assign two different colors to represent the two teams, and players in contact are depicted with bigger and brighter circles (radius is 5, and pixel value is 255), while background players are depicted with smaller and darker circles (radius is 3, and pixel value is 127). By integrating this information into the input, the model can learn the interaction of all players from a bird's eye view. The input to the PP model is displayed in the GIF below.</p>
<p><img src="https://i.ibb.co/sKJ5zHP/output.gif" alt=""></p>
<p><strong><em>1.1.2 Input generator for PG model</em></strong><br>
The endzone and sideline videos are processed similarly to the PP model, with the exception that the PG model uses a longer input sequence of 23 neighboring frames, ranging from [-54, -48, -42, -36, -30, -24, -18, -13, -8, -4, -2, 0, 2, 4, 8, 13, 18, 24, 30, 36, 42, 48, 54]. </p>
<p>The PG model does not include simulated tracking images as they do not improve the PG CV score. </p>
<p>Unlike the PP model, I can use a longer sequence of images in the PG model because the tracking images are not included. In the PP model, the maximum sequence length that can fit into my GPU is 18 images.</p>
<p><strong><em>1.2 Model</em></strong><br>
Given that the input appears to resemble an action classification task rather than a standard 3D classification, I opted to use an action recognition model to address this problem. After reviewing the <a href="https://github.com/open-mmlab/mmaction2" target="_blank">mmaction2 repository</a>, it became clear that the CSN series delivered the most impressive results in the Kinetics-400 dataset. As a result, I selected resnet50-irCSN and fine-tuned it for this particular task.</p>
<p><strong><em>1.3 Training</em></strong></p>
<p>During training, I apply the following augmentations to the endzone and sideline images and randomly swap them. As for the tracking images, I only use horizontal and vertical flips as augmentations.</p>
<pre><code>base_aug = [
A.RandomResizedCrop(always_apply=False, p=1.0, height=cfg.img_size, width=cfg.img_size, scale=(0.7, 1.2), ratio=(0.75, 1.3), interpolation=1),
A.OneOf([
A.RandomGamma(gamma_limit=(30, 150), p=1),
A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.3, p=1),
A.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2, p=1),
A.HueSaturationValue(hue_shift_limit=20, sat_shift_limit=30, val_shift_limit=20, p=1),
A.CLAHE(clip_limit=5.0, tile_grid_size=(5, 5), p=1),
], p=0.6),
A.HorizontalFlip(p=0.5),
A.ShiftScaleRotate(shift_limit=0.0, scale_limit=0.1, rotate_limit=15,
interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_CONSTANT, p=0.8),
A.Cutout(max_h_size=int(50), max_w_size=int(50), num_holes=2, p=0.5),
]
cfg.train_transform = A.ReplayCompose(base_aug)
</code></pre>
<p>I used a linear scheduler for the learning rate and trained the model for one epoch. In the final submission, I trained the model using all available data with 4 seeds.</p>
<p><strong>2. XGB Postprocessing</strong><br>
I employed a simple xgb model to combine the predictions of pre-xgb and cnn. Through experimentation, I discovered that the optimal feature for post-processing in PP and PG models slightly differs.</p>
<p><strong><em>2.1 PP postprocessing</em></strong><br>
First, I calculated an ensemble probability from the CNN and preprocessing xgb model as follows: prob = 0.2 * pre_xgb_prob + 0.8 * cnn_prob. <br>
Then, I used the probability from the 20 neighboring steps as features for the xgb model, i.e., {prob(-10), prob(-9), …, prob(0), prob(1), …, prob(9)}, where prob(-10) represents the probability of the same pair of players in the prior 10 steps.<br>
This postprocessing method improved my PP CV score by approximately 0.005.</p>
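<p>A minimal pandas sketch of building these neighboring-step features, where <code>prob</code> is the 0.2/0.8 weighted ensemble above and the key columns are assumptions about the pair-level table layout:</p>
<pre><code>import pandas as pd

def add_neighbor_probs(df, lags=range(-10, 10)):
    """df: one row per (game_play, nfl_player_id_1, nfl_player_id_2, step) with a 'prob' column."""
    df = df.sort_values(["game_play", "nfl_player_id_1", "nfl_player_id_2", "step"])
    gb = df.groupby(["game_play", "nfl_player_id_1", "nfl_player_id_2"])["prob"]
    for lag in lags:
        if lag != 0:
            df[f"prob_shift_{lag}"] = gb.shift(lag)   # prob of the same pair, lag steps away
    return df
</code></pre>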
<p><strong><em>2.2 PG postprocessing</em></strong><br>
I calculated an ensemble probability from the CNN and preprocessing xgb model as follows: prob = 0.15 * pre_xgb_prob + 0.85 * cnn_prob. <br>
The features to the xgb model are:</p>
<ul>
<li>The ensemble probability from the 30 neighboring steps {prob(-15), prob(-14), …, prob(0), prob(1), …, prob(14)}, </li>
<li>The pre_xgb_prob and cnn_prob from the 20 neighboring steps.<br>
This postprocessing method improved my PG CV score by approximately 0.04.</li>
</ul>
<p>P/S. Thanks chatGPT for making my explanation better!!</p>
<p>I added it when my cv were still low (~0.73) and it boosted my cv to 0.76. I don't know how much it contributed to my final model (cv 0.79+).</p>
<p>thanks. I used provided baseline helmet boxes</p>
<p>thanks. pp boost my cv around +0.015. I didn't check LB </p>
<p>it is a full oof. ~0.788 for 1 seed, and 0.79+ if ensemble multiple seeds.</p>
<p>wow. is that your final CV or only video base cv?? my CV with post processing also 805+</p>
<p>1x RTX 3090 24Gb VRAM</p>
<p>Thank you. Congratulations on your strong finish and becoming GM. well done</p>
<p>thanks. After filtering, around 1.4 million samples remained for the PP model, with a recall rate of 99.3%. For the PG model, around 130k samples remained with a recall rate of over 97%.</p>
<p>Thanks Rob.<br>
I have just added more detail to my writeup to make it more clear based on your questions. </p>
<ul>
<li><em>How did you determine the frames you used {frame[-44], -37, -30, -24, -18, -13, -8, -4, -2, 0, 2, 4, 8, 13, 18, 24, 30, frame[37]} - was this decided through experimentation or intution?</em><br>
The decision on the frame sampling was based on both intuition and experiments. Initially, I used an equal gap between frames such as […, 8, 4, 0, 4, 8, …]. However, I realized that the model should see more images near the estimated frame to improve performance, so I changed the sampling frames accordingly. It may seem strange that there is no frame[44], but this is because I pre-generated all inputs and saved them to disk for faster data loading (frame[44] is in my pre-generated data). However, a sequence of 19 images caused my GPU to run OOM, so I simply removed the last image (frame[44]) to avoid this issue.</li>
<li><em>What made you select the resnet50-irCSN as your backbone? Did you have any succsess with other architectures?</em><br>
Given that the input appears to resemble an action classification task rather than a standard 3D classification, I opted to use an action recognition model to address this problem. After reviewing the mmaction2 repository, it became clear that the CSN series delivered the most impressive results in the Kinetics-400 dataset. As a result, I selected resnet50-irCSN and fine-tuned it for this particular task. I did try 2.5D models, 3D models and other action recognition models such as SlowFast, but CSN gave me the best CV score.</li>
<li><em>How did you handle cases where helmet boxes are not be seen for both players in sideline/endzone views? Did you only predict if both players were seen in both views?</em><br>
I use a black image for those frames.</li>
<li><em>In your postprocessing step, you say you combined the 1st stage XGB and CNN outputs like this: prob = 0.2 * pre_xgb_prob + 0.8 * cnn_prob. Is there any reason you did not use pre_xgb_prob and cnn_prob directly as features to the postprocessing XGB model?</em><br>
Thank you for the question. I have updated my post-processing part in the write up to explain this.</li>
</ul>
<p>Thank you once again for organizing this fascinating NFL competition series. I had the opportunity to participate in all three challenges, and I thoroughly enjoyed the experience.</p> |
1st and Future - Player Contact Detection | 2nd place solution - Team Hydrogen | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>With "shift the image frame", we refer to an augmentation where we shift all input frames by X frames from the closest match that was calculated based on labels that were given in 10 Hz. We have video data in 60 Hz, so about 6 frames (±3 frames) can get the same label (nearest match from the given labels in 10 Hz). </p>
<p>Thank you for another great NFL challenge! As the previous NFL competitions it was well prepared and had quick feedback cycles anytime that the community had questions. We would like to highlight <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>, one of the hosts, who even supplied a strong tabular baseline to get started. </p>
<h2>Validation</h2>
<p>The test data is rather small compared to the large training set and only consists of 61 plays. Thus, local validation becomes even more important than usual. To evaluate our models, we used Stratified Group KFold cross validation on the <code>game_key</code> and public LB usually followed any local CV improvements with only a small random range of a few points and with blends being a bit more stable than single models (5 folds or a handful of fullfits). Our best local CV was 0.807 for the blend including 2nd stage and about 0.802 for a single model including 2nd stage. </p>
<h2>Models and architecture</h2>
<p>The core ideas and central building blocks of our models are based on our concepts of the previous DFL competition (<a href="https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout/discussion/359932" target="_blank">https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout/discussion/359932</a>) utilizing 2D/3D CNNs capturing temporal aspects of videos. This architecture has already served us well in multiple video sports projects and competitions and also turned out to be highly competitive here.</p>
<p>In this competition we found longer time steps to work better and we got our best single model results using a time step of 24 frames, two times in both directions. We crop the region of interest for each potential contact based on helmet box information. In most models, we resize the crop, so that all boxes have about the same size. We concatenate endzone and sideline views horizontally to enable early fusion. Additionally, we encode tracking data directly into the CNN models. This has the main advantage that we can mostly rely on a single stage solution, and are less prone to overfitting on a 2-stage approach with out-of-fold CNN predictions. We step-wise encoded tracking features based on their importance in tabular models.</p>
<p>The main architecture of our approach looks like the following:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2675447%2F81c89d45e01bd4e3c444b8d64ddcd091%2Farch2.png?generation=1677767191925623&alt=media" alt="model-architecture"></p>
<p>We will now explain in detail each of the channels. We have slight variations of these channels across models in our ensemble, but the core concept is the same. Please note that the order of channels is always the same, and the order is only changed for visual clarity in the above architecture visualization, which also shows only three channels, while our models mostly use five. Frames 552, 600 and 648 show the first channel in the foreground, while frame 576 shows the second channel, and 624 the fifth channel.</p>
<p><strong>First channel</strong><br>
The first channel depicts the region of interest of the potential contact only using the grayscale image. For each view, we take the center of the two (or one) boxes and then crop a total rectangle of width 128 and height 256. We then put both views next to each other resulting in a 256x256 input size. For most of our models we try to keep the aspect ratio based on box information and crop more information downwards than upwards to better capture the full body of players.</p>
<p><strong>Second channel</strong><br>
Here we put a mask of the boxes to allow the model to clearly learn which players it should try to predict the contact for. We mask the two boxes with a value of 255. If there is only one box, or if there is a ground contact, we only mask this one box. We additionally mask all other boxes in this crop with 128.</p>
<p><strong>Third channel</strong><br>
The most important feature is the distance between two players. The CNN model itself can only learn the distance between players to some degree. So in this channel we directly decode the distance as derived from tracking information. Conveniently, there is a nice cutoff at around 2 yards where basically no contacts are present any longer. So we just multiply the distance by 128, giving us values between 0 and 255 that we encode in this channel.</p>
<p><strong>Fourth channel</strong><br>
A very important feature was whether both players are from the same team. So here we just encode 255 if both are from the same team, and 128 otherwise.</p>
<p><strong>Fifth channel</strong><br>
Finally, we saw that distance traveled of players from the last time point is helpful in tabular models. So similar to distance between players, we encode this feature separately for both players, or one in case of ground attack.</p>
<p>For all tracking feature channels, we stick to uint8 encoding, which means we lose some precision for the features, but it helps prevent overfitting to them and can be seen as a binning into 256 bins, similar to what GBM models do. The great benefit of encoding these features is that the CNNs can learn all the spatial and temporal information of such tracking features directly.</p>
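<p>A minimal sketch of how such uint8 feature channels could be built, e.g. the pair-distance channel (distance * 128, exploiting the ~2 yard cutoff) and the same-team channel; the canvas size is a placeholder:</p>
<pre><code>import numpy as np

def distance_channel(distance_yards, height=256, width=256):
    """Constant-valued uint8 channel encoding the pair distance."""
    value = np.uint8(np.clip(distance_yards * 128.0, 0, 255))
    return np.full((height, width), value, dtype=np.uint8)

def same_team_channel(is_same_team, height=256, width=256):
    """255 if both players are on the same team, 128 otherwise."""
    return np.full((height, width), 255 if is_same_team else 128, dtype=np.uint8)
</code></pre>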
<p>As the 2D backbone, we used <code>tf_efficientnetv2_s.in21k_ft_in1k</code> and <code>tf_efficientnetv2_b3</code> architecture and pre-trained weights from the timm library. We train all our models for 4 epochs and cosine schedule decay and AdamW optimizer. Checkpoints are always on last epoch.</p>
<h2>Augmentations</h2>
<p>Specifically mixup proved to be very useful in preventing quick overfitting. While it may appear counterintuitive to work well with the encoded feature channels, it likely acted as a good regularization. <br>
During training, we randomly shifted the image frame within a range of +-3 frames to the closest matching frame calculated from the current step. Furthermore, we used a small shift of +-1 for a subset of the model as test time augmentation in the ensemble. </p>
<h2>Tracking and helmet interpolation</h2>
<p>For the random frame shift augmentation, it was helpful to interpolate the tracking information from 10 Hz to 60 Hz. We tried a few different methods, but simple linear interpolation proved to be sufficient and is robust. We also added missing helmet box information using linear interpolation. While this definitely added some noise and false positives, overall it seemed to have helped catching a few more contacts in very crowded situations. We also use this interpolation for inference in our submissions.</p>
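<p>A minimal numpy sketch of the linear interpolation from 10 Hz tracking to 60 Hz video time, assuming the two clocks are already aligned:</p>
<pre><code>import numpy as np

def upsample_tracking(t_10hz, values_10hz, t_60hz):
    """Linearly interpolate one tracking column (e.g. x position) onto video frame times."""
    return np.interp(t_60hz, t_10hz, values_10hz)

t_10hz = np.arange(0.0, 5.0, 0.1)            # 10 Hz tracking steps
x_10hz = np.cumsum(np.random.randn(len(t_10hz)) * 0.1)
t_60hz = np.arange(0.0, 4.9, 1 / 60)         # 60 Hz video frames
x_60hz = upsample_tracking(t_10hz, x_10hz, t_60hz)
</code></pre>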
<h2>Ensemble & Inference</h2>
<p>Our final ensemble consists of 6 models, and 3 seeds for each of them. All final models were retrained on the full data. We tried to add some diversity by different crop strategies and step sizes.</p>
<table>
<thead>
<tr>
<th>Backbone</th>
<th>Description</th>
<th>Step size</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>tf_efficientnetv2_s.in21k_ft_in1k</td>
<td>No scaling of the crops</td>
<td>24</td>
<td>0.7899</td>
</tr>
<tr>
<td>tf_efficientnetv2_b3</td>
<td>Slightly zoomed-in crops</td>
<td>24</td>
<td>0.7953</td>
</tr>
<tr>
<td>tf_efficientnetv2_b3</td>
<td>Inverted feature channel encoding</td>
<td>24</td>
<td>0.7987</td>
</tr>
<tr>
<td>tf_efficientnetv2_b3</td>
<td>No interpolation for boxes of other players</td>
<td>24</td>
<td>0.7989</td>
</tr>
<tr>
<td>tf_efficientnetv2_b3</td>
<td>Inverted feature channel encoding</td>
<td>12 (4 times)</td>
<td>0.7988</td>
</tr>
<tr>
<td>tf_efficientnetv2_s.in21k_ft_in1k</td>
<td>Smaller step size</td>
<td>6</td>
<td>0.7890</td>
</tr>
</tbody>
</table>
<p> </p>
<p>We made full use of the recently added kernel with 2 T4 GPUs by parallelizing the pipeline and spawning two threads (1 CPU core for each to preprocess) each covering one half of the plays. All model predictions were averaged and subsequently fed to a stage 2 LGBM model. The final blend has a CV score of around 0.805 before the second stage.</p>
<h3>Stage 2</h3>
<p>We use a LGBM model with only a few carefully selected features including stage 1 ensemble probabilities, <code>nfl_player_id_1</code> to <code>nfl_player_id_2</code> distance and their lags. Other notable features are "step_pct", encoding the current step based on the play length and normalized X and Y positions on the field. Basically, using the average position of the two players and normalizing to one quarter of the field to prevent overfitting to single plays. </p>
<p>In the early stages of the competition, our 2nd stage model gave a great boost in score, specifically after adding the extra tracking features, while in the end the stage 1 predictions were almost on-par, showcasing how the stage 1 CNNs already efficiently learn from the encoded tracking feature channels.</p>
<p>Finally, we blend the LGB predictions with the smoothed raw predictions (window of 3) from the ensemble in a 50:50 ratio.</p>
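<p>A minimal pandas sketch of this final blend: smooth the raw stage-1 ensemble with a window of 3 along steps, then average 50:50 with the LGBM output (column and key names are assumptions about the table layout):</p>
<pre><code>import pandas as pd

def blend(df):
    """df: one row per (game_play, nfl_player_id_1, nfl_player_id_2, step) with 'cnn_prob' and 'lgbm_prob'."""
    df = df.sort_values(["game_play", "nfl_player_id_1", "nfl_player_id_2", "step"])
    smoothed = (
        df.groupby(["game_play", "nfl_player_id_1", "nfl_player_id_2"])["cnn_prob"]
          .transform(lambda s: s.rolling(3, center=True, min_periods=1).mean())
    )
    df["final_prob"] = 0.5 * df["lgbm_prob"] + 0.5 * smoothed
    return df
</code></pre>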
<p>Our final solution has a CV score of 0.807, a public LB of 0.796, and a private LB of 0.796, exhibiting strong consistency and generalizability.</p>
<p>Huge shoutout to my teammates <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> and <a href="https://www.kaggle.com/ybabakhin" target="_blank">@ybabakhin</a>!</p> |
1st and Future - Player Contact Detection | 3rd place solution, single stage approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!</p>
<p>I should probably clarify, 96 frames is the slice length/duration, I used only every second frame (or even 3rd frame for the last 2 models).</p>
<p>With 2D+3D approach in addition I converted 3 frames to monochrome and used it as an input to 2d CNN, so it was actually 96/(3*2) = 16 combined frames/runs of 224x224 convnext large. With the batch size of 2, it used 19GB of VRAM for ConvNext Large and ~13GB for ConvNext Base during training.</p>
<p>Hi, thank you again for organizing a very interesting competition, it was a pleasure to participate.</p>
<blockquote>
<p>It's really interesting how your model predicted per player instead of per pair. Did you decide that using up to the 7th closest player was sufficient to capture any contact? Thats honestly slightly more than I'd expect.</p>
</blockquote>
<p>I checked the distribution of Nth nearest player with contact (calculated for both players in the contact pair):</p>
<table>
<thead>
<tr>
<th>Nearest player num</th>
<th>number of contacts</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>69207</td>
</tr>
<tr>
<td>2</td>
<td>17584</td>
</tr>
<tr>
<td>3</td>
<td>5244</td>
</tr>
<tr>
<td>4</td>
<td>2000</td>
</tr>
<tr>
<td>5</td>
<td>785</td>
</tr>
<tr>
<td>6</td>
<td>271</td>
</tr>
<tr>
<td>7</td>
<td>135</td>
</tr>
<tr>
<td>8</td>
<td>59</td>
</tr>
<tr>
<td>9</td>
<td>22</td>
</tr>
<tr>
<td>10</td>
<td>23</td>
</tr>
<tr>
<td>11</td>
<td>15</td>
</tr>
<tr>
<td>12</td>
<td>14</td>
</tr>
<tr>
<td>13</td>
<td>8</td>
</tr>
<tr>
<td>14</td>
<td>12</td>
</tr>
<tr>
<td>15</td>
<td>37</td>
</tr>
</tbody>
</table>
<p>When I checked contacts only within the distance of 2.4 (edited/fixed):</p>
<table>
<thead>
<tr>
<th>Nearest player num</th>
<th>number of contacts</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>69177</td>
</tr>
<tr>
<td>2</td>
<td>17569</td>
</tr>
<tr>
<td>3</td>
<td>5222</td>
</tr>
<tr>
<td>4</td>
<td>1992</td>
</tr>
<tr>
<td>5</td>
<td>768</td>
</tr>
<tr>
<td>6</td>
<td>263</td>
</tr>
<tr>
<td>7</td>
<td>119</td>
</tr>
<tr>
<td>8</td>
<td>48</td>
</tr>
<tr>
<td>9</td>
<td>15</td>
</tr>
<tr>
<td>10</td>
<td>12</td>
</tr>
<tr>
<td>11</td>
<td>10</td>
</tr>
<tr>
<td>12</td>
<td>4</td>
</tr>
<tr>
<td>13</td>
<td>1</td>
</tr>
</tbody>
</table>
<p>Since the contact prediction is averaged when evaluated from both players' points of view, some (likely most or even all)<br>
contacts would still be checked. For example if player2 is 8th nearest player for player1 in contact, player1 may be the 5th nearest player for player 2, so the contact would still be evaluated from player2 point of view.</p>
<p>I have not tested the model score with different numbers of nearest players, but since the model can accept variable-size input, I tried one of the models on one of the folds:</p>
<table>
<thead>
<tr>
<th>Number of nearest players</th>
<th>threshold for the best score</th>
<th>score</th>
</tr>
</thead>
<tbody>
<tr>
<td>15</td>
<td>0.5800</td>
<td>0.7626</td>
</tr>
<tr>
<td>13</td>
<td>0.5400</td>
<td>0.7708</td>
</tr>
<tr>
<td>11</td>
<td>0.5200</td>
<td>0.7784</td>
</tr>
<tr>
<td>9</td>
<td>0.4400</td>
<td>0.7881</td>
</tr>
<tr>
<td>8</td>
<td>0.4000</td>
<td>0.7910</td>
</tr>
<tr>
<td>7</td>
<td>0.3400</td>
<td>0.7926</td>
</tr>
<tr>
<td>6</td>
<td>0.3000</td>
<td>0.7938</td>
</tr>
<tr>
<td>5</td>
<td>0.2200</td>
<td>0.7921</td>
</tr>
<tr>
<td>4</td>
<td>0.1800</td>
<td>0.7900</td>
</tr>
<tr>
<td>3</td>
<td>0.1200</td>
<td>0.7873</td>
</tr>
<tr>
<td>2</td>
<td>0.1000</td>
<td>0.7824</td>
</tr>
<tr>
<td>1</td>
<td>0.0600</td>
<td>0.7499</td>
</tr>
</tbody>
</table>
<p>So it looks like the choice of 7 nearest players was reasonable; 6 players worked slightly better with a score of 0.7938. Maybe if trained on a 15-player input the model would learn better how such messy cases are annotated.</p>
<blockquote>
<p>I'm not clear on how the model was able to identify which of surrounding players in the video were associated with the player tracking (NGS) features that you provided the decoder. Did you add any additional masking to the images or did the model learn these relationships on it's own?</p>
</blockquote>
<p>I added the position encoding (grid of sin/cos values at different frequencies, like used with NLP) to 7x7 grid of video encoders activations (starting from -128, -128 pix to encode positions around the visible area) and I also added similar position encoding for the helmet position on the sideline and endzone views (with different linear projections to allow models to query both views).</p>
<p>This way the same position encoding is used for both the key and query parts of the transformer decoder attention and allows the model to associate and query the parts of the images relevant to the player's visible position. I allowed positions within 128 px of the visible area to be encoded, to be able to query players in contact whose helmet is not visible in the current step.</p>
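<p>A minimal numpy sketch of a 2D sinusoidal position encoding of the kind described (NLP-style sin/cos at multiple frequencies, evaluated at helmet or grid-cell pixel coordinates); the dimension and frequency range are placeholders:</p>
<pre><code>import numpy as np

def pos_encoding_2d(x_pix, y_pix, dim=128, max_wavelength=1000.0):
    """Return a (dim,) encoding for one 2D position; half the dims encode x, half encode y."""
    half = dim // 2
    freqs = max_wavelength ** (-np.arange(0, half, 2) / half)
    def encode(v):
        return np.concatenate([np.sin(v * freqs), np.cos(v * freqs)])
    return np.concatenate([encode(x_pix), encode(y_pix)])

enc = pos_encoding_2d(-128.0, 64.0)   # positions may extend up to 128 px outside the crop
</code></pre>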
<p>I accidentally introduced a bug in the dataset class and provided the main player video position to all nearest players and this caused the significant degradation of model performance. I also tried to supply one of the activations from 7x7 grid with the player helmet directly to players features, but I have not noticed the significant difference, looks like the model is able to use the supplied position encodings.</p>
<blockquote>
<p>Did you use any of the helmet bounding box data in the model itself other than identifying the player's helmet to predict for. Also, how did you handle when helmets were not visible in either camera?</p>
</blockquote>
<p>I only used the position of the helmet on views (if visible). If the helmet is outside of [-128pix..crop+128pix] box, the pos encoding for corresponding view values are set to zero. </p>
<p>I run prediction for the current player only for steps when the player is visible on at least one view, but since the prediction is done for a number of steps (for example 16 steps, or +-0.8s from the current timestamp, with the current timestamp sampled at 0.5s steps), it's possible the player will be not visible on the previous or next timestamp. But the model would still predict contacts for steps around the visible interval, using the previously visible frames and tracking information (the self attention part of the encoder which uses attention over all players and all time steps).</p>
<p>If the nearest player is not visible on either view, I think it's still included but model would have access to only tracking information or images of this player from surrounding steps if he was visible (it may be hard to associate players only using the tracking info).</p>
<p>I'd like to thank the organisers for a very interesting challenge (especially <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> for providing very useful answers and helping teams). It was interesting to participate.</p>
<h2>Overview</h2>
<p>The approach is single-stage, trained end-to-end, with a single model executed per player and step interval (instead of per pair of players), predicting over the whole input step range the ground contact for the current player and the contacts with the 7 nearest players. The model has a video encoder part to process input video frames and a transformer decoder to combine tracking features and video activations.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743064%2Fa93573968664b0e0af1534e3819e9d49%2FKaggle%20model_1.png?generation=1677986296204472&alt=media" alt=""></p>
<h3>Video encoders</h3>
<p>The video encoders used a number of input video frames around requested steps and produced activations at corresponding steps at downsampled resolution, usually for 16 steps with corresponding 96 frames using every second frame for input.</p>
<p>I used a few different models for video encoders:</p>
<ul>
<li>2d imagenet pretrained models + 3d Conv layer (credits to the Team Hydrogen solution of one of previous competitions). 3 input frames around the current step are converted to grayscale and used as an input to 2d model, with the results combined using 3d conv. Usually larger models performed better for me, with the best performing model based on the convnext large backbone. Other Convnext based models or DPN92 also worked ok.</li>
<li>2d imagenet pretrained models + TSM, with the color inputs for every 2nd or 3rd frame and TSM like activation exchange between frames before every convolution. Worked better with smaller models like convnext pico or resnet 34 (would probably work better with larger models if the TSM converted model were pretrained on video tasks).</li>
<li>3D/Video models like CLIP-X (X-CLIP-B/16 was the second best performing model) or the Video Swin Transformer (performed okeish but not included in the final submission).</li>
</ul>
<p>Video frames were cropped to 224x224 resolution with the current player's helmet placed at the center/top part of the frame and scaled so the average size of helmets in surrounding frames would be scaled to 34 pixels.<br>
I applied augmentations to randomly shift, scale and rotate images, shift HSV, and add blur and noise.</p>
<p>For video model activations (at the 32x downsampled 7x7 resolution) I added the positional encoding and learnable separate sideline / endzone markers.<br>
Optionally the video activations may be encoded using transformers per frame in a similar way as done in DETR but I found it has little to no impact on the result.</p>
<h3>Transformer player features / video activations decoder</h3>
<p>The idea is to use attention mechanisms to combine the players features with other surrounding players information and to query the relevant parts of the images.</p>
<p>For a particular player and step, I selected the current player's features for the surrounding -7..+8 steps, and for every step I selected up to 7 nearest players within 2.4 yards, so in total 16 steps * (7+1) player inputs.</p>
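<p>A minimal numpy sketch of selecting up to 7 nearest players within 2.4 yards for one (player, step), as described above:</p>
<pre><code>import numpy as np

def nearest_players(xy, player_idx, max_players=7, max_dist=2.4):
    """xy: (P, 2) positions of all players at one step; returns indices of nearby players."""
    d = np.linalg.norm(xy - xy[player_idx], axis=1)
    d[player_idx] = np.inf                       # exclude the player himself
    order = np.argsort(d)
    order = order[d[order] <= max_dist]          # keep only players within 2.4 yards
    return order[:max_players]

xy = np.random.rand(22, 2) * 20                  # toy positions in yards
print(nearest_players(xy, player_idx=0))
</code></pre>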
<p>For every player/step input I used the following features, added together using per feature linear transformation to match the transformer features dim:</p>
<ul>
<li>position encoding for the helmet pos on the sideline and endzone video, if within 128 pixels from the crop.</li>
<li>is it visible on sideline and endzone frames</li>
<li>pos encoding for the step number</li>
<li>is player the current selected player</li>
<li>is player from the same team as the current player or not</li>
<li>player position (not xy but the role from the tracking metadata)</li>
<li>speed over +- 2 frames</li>
<li>signed acceleration over +- 2 frames</li>
<li>distance to the current player, both values and one hot encoding over +- 2 frames</li>
<li>relative orientation, of the player relative to player-player0 and of player0 relative to player, encoded as sin and cos over +- 2 frames</li>
<li>for visible helmets, I also added the activations from the video at the helmet position directly to player features. The idea was - it's most likely relevant and may help to avoid using the attention heads for the same task, but I found no difference in the final result.</li>
</ul>
<p>Player/step features are used as inputs/targets for a few iterations of transformer layers:</p>
<ul>
<li>For all step/player input, I applied the transformer decoder layer with the query over video activations from the same step. </li>
<li>For all step/player inputs I applied the transformer encoder with the self attention over all players/steps:</li>
</ul>
<pre><code> # video shape is HW*2 x T*B x C
# player_features shape is P, T, B, C
# where P - players, T - time_steps, B - batch, C - features, HW - video activations dims
x = player_features
for step in range(self.num_decoder_layers):
x = x.reshape(P, T*B, C) # reshape to move time steps to batch to use attention only over the current step
x = self.video_decoders[step](x, video)
x = x.reshape(P*T, B, C) # attention over all players/steps
x = self.player_decoders[step](x)
</code></pre>
<p>I tested with the number of iterations between 2 and 8 and the results were comparable, so I used 2 iterations for most of models.</p>
<h2>Data pre-processing</h2>
<p>Mostly to smooth the predicted helmet trajectories: I smoothed the predictions to find and remove outliers, then interpolated/extrapolated.<br>
During early tests the impact on performance was not very large, so this is not conclusive.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743064%2Feb690ac7195ad8d03205681175d3c979%2Fplayers_trajectory_pp.png?generation=1677916456068367&alt=media" alt=""></p>
<h2>Training</h2>
<p>For training I selected all players and steps with helmet detected on at least one video (so model would have the tracking features for a few steps before or after the player was visible for the first/last time). I have not excluded any samples using other rules.</p>
<p>I used the AdamW optimiser with quite a small batch size of 1 to 4 and CosineAnnealingWarmRestarts scheduler with the epoch size of 1024-2048 samples, trained for 68 epochs. It takes about 6-10 hours to train a single model on 3090 GPU.<br>
I evaluated model every time the scheduler reaches the min rate at epochs 14, 36 and 68.</p>
<p>I used the BCE loss with slight label smoothing of 0.001..0.999 (it was a guess, I have not tuned hyperparameters much).</p>
<p>I added aux outputs to the video models to predict if the current player has contact with other players or ground and heatmap of other player helmets with contacts, but the impact on the score was not very large.</p>
<h2>Prediction</h2>
<p>The prediction is very straightforward: for a model with an input interval of 11 or 16 steps, I run it with a smaller offset of 5 steps to predict over overlapping intervals for every player.</p>
<pre><code>predictions = defaultdict(list)  # key is (game, step, player1, player2)</code></pre>
<p>Every prediction between the current player and another player is added to the list at the dictionary key (gameplay, step, min(player0, player), max(player0, player)),<br>
and all predictions are averaged. Usually the predictions for a pair of players at a certain step include predictions with each player as the current one, and from a few step intervals where the current step is closer to the beginning, middle and end of the interval.</p>
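<p>A minimal sketch of this averaging scheme, assuming each model/offset pass yields (game_play, step, player0, player, prob) predictions:</p>
<pre><code>from collections import defaultdict
import numpy as np

predictions = defaultdict(list)   # key: (game_play, step, player_a, player_b)

def add_prediction(game_play, step, player0, player, prob, weight=1):
    key = (game_play, step, min(player0, player), max(player0, player))
    predictions[key].extend([prob] * weight)   # better models can be added 2-3 times

# after accumulating all models and step offsets:
final = {key: float(np.mean(probs)) for key, probs in predictions.items()}
</code></pre>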
<p>When ensembling multiple models, their predictions are added to the same predictions dictionary, with better models added 2-3 times to increase their weight.<br>
In total, I used 7 models for the best submission.</p>
<h2>Individual models performance</h2>
<table>
<thead>
<tr>
<th>Video model type, backbone</th>
<th>Notes</th>
<th>Private LB score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Convnext large, 2D + 3D conv</td>
<td>16 steps/96 frames, skip 1 frame.</td>
<td>0.7915</td>
</tr>
<tr>
<td>Convnext base, 2D + 3D conv</td>
<td>16 steps/96 frames, skip 1 frame.</td>
<td>0.786</td>
</tr>
<tr>
<td>DPN92, 2D + 3D conv</td>
<td>16 steps/96 frames, skip 1 frame.</td>
<td>0.784</td>
</tr>
<tr>
<td>X-CLIP-B/16</td>
<td>11 steps/64 frames, skip 1 frame.</td>
<td>0.791</td>
</tr>
<tr>
<td>X-CLIP-B/32</td>
<td>11 steps/64 frames, skip 1 frame.</td>
<td>0.784</td>
</tr>
<tr>
<td>Convnext pico, TSM</td>
<td>63 steps/384 frames, skip 2 frames.</td>
<td>0.788</td>
</tr>
<tr>
<td>Convnext pico, 2D + 3D conv</td>
<td>64 steps/384 frames, skip 2 frames.</td>
<td>Local CV slightly worse than TSM</td>
</tr>
<tr>
<td>2 best models ensemble</td>
<td>Convnext large and X-CLIP-B/16,</td>
<td>0.7925</td>
</tr>
<tr>
<td>6 models ensemble</td>
<td>Without DPN92, re-trained on full data with original helmets</td>
<td>0.7932</td>
</tr>
<tr>
<td>6 models ensemble</td>
<td>Without DPN92, re-trained on full data with fixed helmets</td>
<td>0.7934</td>
</tr>
<tr>
<td>7 models ensemble</td>
<td>Convnext large added with weight 3 and X-CLIP-B/16 with weight 2. Models trained on different folds.</td>
<td>0.7956</td>
</tr>
</tbody>
</table>
<h2>What did not work</h2>
<ul>
<li>Training Video Encoder model using aux losses before training transformer decoders. Video Encoder overfits.</li>
<li>Adding much more tracking features to player transformer inputs. When added the history over larger number of steps for each player input, the transformer encoder overfits.</li>
<li>Larger models with TSM</li>
<li>Fix players/helmets assignment in the provided baseline helmets prediction. On some folds the impact was negligible, on some the score has improved by ~ 0.005 even without re-training models. On the private LB the score was similar with and without helmets fixed. One submitted model was using the original data pre-processing, another using more complex pipeline with helmets re-assigned.</li>
</ul>
<h2>Local CV challenges</h2>
<p>To check for possible issues with model generalisation, I decided to split into folds using the list of games sorted by game play, with the first 25% of games assigned as the fold 0 validation set, and so on.</p>
<p>I found not only differences in score between folds, but also that models/ideas performing well on one fold may work much worse on another.<br>
For example, on fold 2 the models with a very large receptive field over time/steps (384 steps, over 6 seconds; the convnext pico based models in the submission) performed about 0.008 better than the best larger models, while the score of such models was about 0.007 worse on fold 3.</p>
<p>All this made the local validation much more challenging and harder to trust. Taking into account the private dataset is even smaller than every fold, I expected to see a significant shakeup.</p>
<h2>Player helmets re-assignment</h2>
<p>Since it was not part of the best submission, added as a separate post: <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/392392" target="_blank">https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/392392</a></p>
<p>Instead of the data pre-processing described above, I used the estimated tracking -> video transformation to interpolate/extrapolate missing helmets information. The best result was when I discarded the first or the last predicted helmet position and extrapolated by 8 steps maintaining the difference with the position predicted from tracking and tracking->view transformation.</p>
<p>The submission source is available at <a href="https://www.kaggle.com/dmytropoplavskiy/nfl-sub-place3" target="_blank">https://www.kaggle.com/dmytropoplavskiy/nfl-sub-place3</a></p> |
1st and Future - Player Contact Detection | 41st solution(2D-CNN, 1D-CNN, Stacking) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to all the organizers and participants for this amazing competition!<br>
I really enjoyed this competition!</p>
<h1>Overview</h1>
<ul>
<li>2D-CNN prediction [zoom out, zoom in, masked] <br>
input single frame, predict Endzone & Sideline by same model</li>
<li>1D-CNN prediction [tracking, helmet]<br>
input tracking and helmet position data shift(-6~6)<br>
(pos, speed, acc,distance, orientation, direction, sa, helmet position)</li>
<li>Stacking<br>
predictions and 5 features by 2D-CNN (zoom out, zoom in, masked = 3 models)<br>
3 tracking features by 1D-CNN <br>
table features</li>
<li>Moving average post-processing<br>
applied after concatenating back the distance > 2 data</li>
</ul>
<h1>Score</h1>
<ul>
<li>CV(Group K fold by game_play) : 0.740</li>
<li>Public LB : 0.73699</li>
<li>Private LB : 0.7302</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8492034%2Fa09fc34dbc377600c780166cae5bfc22%2FNFL_Solutions.png?generation=1677920953965547&alt=media" alt=""></p> |
1st and Future - Player Contact Detection | 45th place solution (the simplest method to get a silver medal) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all thanks to our teammates.<br>
Our method is made up of a tabular model and a CNN model.<br>
For the tabular model, we started from <a href="https://www.kaggle.com/code/ahmedelfazouan/nfl-player-contact-detection-helmet-track-ftrs" target="_blank">this model</a>.<br>
Second, we added (mean, std, max, min) features from the trajectory data (train_player_tracking.csv) to increase the LB from 0.650 to 0.684.<br>
Third, we added step_rate (0.684 -> 0.693) from the video.<br>
Then we applied TTA to the <a href="https://www.kaggle.com/code/zzy990106/nfl-2-5d-cnn-baseline-inference" target="_blank">CNN model</a> to increase its LB from 0.667 to 0.671.<br>
Finally, we added the CNN model's prediction to the tabular model.<br>
After doing this, we got 0.724 (public score) and 0.728 (private score).<br>
Our <a href="https://www.kaggle.com/code/yoyobar/cnn-with-feature/notebook" target="_blank">code</a>.<br>
Cheers!</p>
<p>We thought about this approach too, and I think it would be better. We did not have enough time to debug our CNN model (a lot of bugs), so we gave up on this approach.</p>
1st and Future - Player Contact Detection | 4th place solution Overall pipeline & tabular part - Osaka Tigers | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comment. The camaro model details are written in a comment.</p>
<p>We really appreciated the hosts and the kaggle team for organizing the competition. Moreover, we would also like to thank all the participants who joined. We could enjoy this competition and write up our solutions. </p>
<p>I would like to thank team members, <a href="https://www.kaggle.com/bamps53" target="_blank">@bamps53</a>, <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a> and <a href="https://www.kaggle.com/kmat2019" target="_blank">@kmat2019</a>, who have the top talent to analyze the task. I could discuss and enjoy the competition. </p>
<h1>Overview</h1>
<p>A simple outline of the solution is shown in the attached picture.<br>
<a href="https://postimg.cc/VJ6Rkh2p" target="_blank"><img src="https://i.postimg.cc/pLQ1qbCW/pipeline.png" alt="pipeline.png"></a></p>
<p>In the 1st stage we predict contact with multiple CNNs. In the 2nd stage, the CNN predictions, tracking and helmet data are aggregated into features that are fed to GBDT models. Lastly we compute the average of the 5 models and optimize thresholds for player-player and player-ground contact separately.</p>
<h1>1st stage CNN</h1>
<h2>k mat model</h2>
<p>Details are written in <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/391719" target="_blank">https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/391719</a>.<br>
We can obtain both Endzone and Sideline prediction values. </p>
<h2>camaro model</h2>
<p>will come up soon</p>
<h1>2nd stage aggregation & binary classification models</h1>
<p>We excluded player-player pairs with distance > 3, and the remaining ~880K rows were used to train 2nd stage models. During inference time, we assigned 0 to pair with distance > 3 and predicted only the remaining data.</p>
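<p>A small sketch of this distance-based pruning with illustrative names (<code>model</code> and <code>feature_cols</code> stand for any fitted 2nd stage model and its feature list):</p>
<pre><code>import pandas as pd

DIST_TH = 3.0

# training: keep candidate player-player pairs that come close enough;
# player-ground rows (distance is NaN) are kept as they are
keep = df["distance"].isna() | (df["distance"] <= DIST_TH)
train_rows = df[keep].copy()

# inference: far-apart pairs are assigned 0 directly, the rest go through the model
df["pred"] = 0.0
df.loc[keep, "pred"] = model.predict_proba(df.loc[keep, feature_cols])[:, 1]
</code></pre>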
<h2>Created features</h2>
<p>Because our CNN predictions are so strong, more than 90% of the top 30 important features were CNN-related features. Below are some of the features we created.</p>
<h3>Tracking</h3>
<ul>
<li>distance between two players</li>
<li>distance/x_position/y_position from step0</li>
<li>distance from around player (full/same team/different team )</li>
<li>distance between team center</li>
<li>distance to second nearest player</li>
<li>current step / max step</li>
<li>lag / lead of acc, speed, sa etc</li>
<li>max/min/mean of x, y, speed, acc, sa, distance grouped by (play, step), (play, step, team) and (play, player1, player2); x/y position diff from step=0</li>
<li>”interceptor” features (a numpy sketch follows this list)<ul>
<li>find playerC who meet the following conditions and add distance(A, C) and ∠BAC to the features of playerA-playerB (to detect that C intercepts between A-B)<ul>
<li>∠BAC < 30deg</li>
<li>distance(A, C) < distance(A, B) and distance(B, C) < distance(A, B)</li></ul></li></ul></li>
</ul>
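<p>A hedged numpy sketch of the ”interceptor” check for a single step (player coordinates are hypothetical 2D points; this is not the team's exact feature code):</p>
<pre><code>import numpy as np

def interceptor_features(xy_a, xy_b, xy_others, max_angle_deg=30.0):
    """Return (distance(A, C), angle BAC) for candidates C lying between A and B."""
    ab = xy_b - xy_a
    d_ab = np.linalg.norm(ab)
    feats = []
    for xy_c in xy_others:
        ac = xy_c - xy_a
        d_ac = np.linalg.norm(ac)
        d_bc = np.linalg.norm(xy_c - xy_b)
        cos_bac = np.dot(ab, ac) / (d_ab * d_ac + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos_bac, -1.0, 1.0)))
        if angle < max_angle_deg and d_ac < d_ab and d_bc < d_ab:
            feats.append((d_ac, angle))     # C intercepts the A-B line of sight
    return feats
</code></pre>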
<h3>Helmet</h3>
<ul>
<li>bbox aspect ratio</li>
<li>bbox overlap</li>
<li>lag / lead of bbox coordinates</li>
<li>bbox center x,y std/shift/diff</li>
<li>distance of bbox centers</li>
</ul>
<h3>CNN prediction and meta-features</h3>
<ul>
<li>oof predictions of 1st stage CNNs</li>
<li>max/min/std of predictions group by (play, step) and (play, player1, player2)</li>
<li>5/11/21 rolling features<ul>
<li>to complement CNN predictions on frames without helmets</li></ul></li>
<li>lag / diff</li>
<li>around players’ player-ground prediction value</li>
</ul>
<h4>Combinations</h4>
<ul>
<li>registration errors from helmet-tracking coordinate transform (similar to 6th place solution, and previous NFL’s 1st place solution by K_mat)</li>
</ul>
<h3>Models</h3>
<p>We trained four GBDT models with different combinations of 1st stage CNNs. We also added one NN model ("camaro2" in the figure above) and calculated the simple average of these 5 models. Predictions were binarized with separate thresholds optimized for player-player and player-ground respectively (a threshold-search sketch follows the model list).</p>
<ul>
<li>LightGBM<ul>
<li>K_mat A + Camaro1 Public 0.795/Private 0.792</li>
<li>K_mat B + Camaro 1</li>
<li>K_mat B</li></ul></li>
<li>xgboost<ul>
<li>K_mat B + Camaro 1</li></ul></li>
<li>Camaro 2</li>
</ul>
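<p>A minimal sketch of the per-type threshold search on out-of-fold predictions, maximizing MCC (<code>is_pp</code> is an assumed boolean mask separating player-player rows from player-ground rows):</p>
<pre><code>import numpy as np
from sklearn.metrics import matthews_corrcoef

def best_threshold(y_true, y_prob):
    grid = np.arange(0.05, 0.95, 0.01)
    scores = [matthews_corrcoef(y_true, y_prob > t) for t in grid]
    return grid[int(np.argmax(scores))], max(scores)

th_pp, mcc_pp = best_threshold(y_true[is_pp], y_prob[is_pp])     # player-player
th_pg, mcc_pg = best_threshold(y_true[~is_pp], y_prob[~is_pp])   # player-ground
</code></pre>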
<h3>tips</h3>
<ul>
<li>rolling features for CNN prediction values are the most important in our models.</li>
<li>judging from permutation feature importance, ‘minimum distance between players in the game_play’, ‘distance between away team mean and home team mean’ and ‘player-player distance’ are important tracking features for increasing the score.</li>
<li>We did not use early stopping to train the GBDTs because the optimal number of rounds for MCC is always larger than for AUC.</li>
</ul>
<h3>not worked for models</h3>
<ul>
<li>Catboost</li>
<li>Residual fit</li>
<li>Meta Features by non CNN (e.g. logistic regression prediction values/ k-means clustering feature)</li>
<li>Separate player-player and player-ground model</li>
<li>1DCNN</li>
<li>External NFL data</li>
<li>Focal loss</li>
</ul>
<h1>not worked overall</h1>
<ul>
<li>Adding previous competition pseudo labeling data</li>
<li>Removing noisy label</li>
<li>all29 assignment and its prediction</li>
<li>2.5D or 3D CNN, but should have dug more..</li>
<li>Aggregate near frame information</li>
</ul> |
1st and Future - Player Contact Detection | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the host and kaggle for hosting such an interesting competition.<br>
I would also like to thank all of the participants and teammates( <a href="https://www.kaggle.com/takashisomeya" target="_blank">@takashisomeya</a> <a href="https://www.kaggle.com/nomorevotch" target="_blank">@nomorevotch</a> <a href="https://www.kaggle.com/fuumin621" target="_blank">@fuumin621</a> ) for a great time.</p>
<p>Our solution consists of two stages: NN and GBDT. We describe each in detail below.</p>
<h2>■stage1 NN part overview</h2>
<ul>
<li>tracking data and images as input (player-player distance < 2 and player-ground) </li>
<li>inference of sequential frames at once </li>
<li>CNN + LSTM </li>
</ul>
<h2>Input to NN</h2>
<h3>[1]tracking data</h3>
<p>Use the following tracking data.</p>
<ul>
<li>distance</li>
<li>distance_1(player1)</li>
<li>distance_2(player2)</li>
<li>speed_1</li>
<li>speed_2</li>
<li>acceleration_1</li>
<li>acceleration_2</li>
<li>same_team(bool)</li>
<li>different_team(bool)</li>
<li>G_flag(bool)</li>
</ul>
<p>If the pair is player-ground (G), fill distance and XXXX_2 values with -1.<br>
same_team and different_team are flags for whether the two players belong to the same/different team.<br>
G_flag is the flag for a player-ground pair.</p>
<h3>[2]Images + Bbox</h3>
<ul>
<li>Concat the following three in the channel direction<ul>
<li>video frames of +-1 frame cropped around the helmet. </li>
<li>helmet bbox mask</li></ul></li>
<li>Image size<ul>
<li>player-player pair: crop size = max(average bbox width, average bbox height) * 3</li>
<li>player-ground pair: crop size = max(bbox width, bbox height) * 3</li>
<li>Resize the cropped image to 128x128.</li></ul></li>
</ul>
<p>We used sequences of frames containing at least one frame with a distance < 2.
(Such a sequence may therefore also contain frames with distance > 2.)</p>
<ul>
<li>[1]:B x N x 10 </li>
<li>[2]:B x N x 3 x 128 x 128 <br>
(B:batch_size, N:Sequential frames (e,g. 16,32,48,64)) </li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3584397%2F4e835e68beeb3243da667319ac771c14%2Fcnn_input.jpg?generation=1677942599176145&alt=media" alt=""></p>
<p>Sequences of N frames are cut out with different strides during training and inference. <br>
training: no duplicated frames (stride == N) <br>
inference: duplicated frames (stride < N; results for duplicated frames are averaged) </p>
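<p>A simplified sketch of the overlapping-window inference with averaging over duplicated frames (<code>predict_window</code> stands for the NN forward pass returning one score per frame of a window):</p>
<pre><code>import numpy as np

def predict_sequence(frames, predict_window, n=32, stride=16):
    total = len(frames)
    scores = np.zeros(total)
    counts = np.zeros(total)
    start = 0
    while start < total:
        end = min(start + n, total)
        scores[start:end] += predict_window(frames[start:end])  # shape (end - start,)
        counts[start:end] += 1
        if end == total:
            break
        start += stride
    return scores / np.maximum(counts, 1)   # average results of duplicated frames
</code></pre>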
<h2>Augmentations during training</h2>
<p>Use the following augmentations.</p>
<ul>
<li>HorizontalFlip</li>
<li>RandomBrightnessContrast</li>
<li>OneOf<ul>
<li>MotionBlur</li>
<li>Blur</li>
<li>GaussianBlur </li></ul></li>
<li>Random frame dropout (40-60% for images and 20-60% for tracking data)</li>
</ul>
<h2>NN Model</h2>
<p>The overall NN model architecture is as follows </p>
<ul>
<li>Endzone/sideline images go through a shared CNN backbone. </li>
<li>The CNN backbone uses the TSM module. <br>
<a href="https://www.kaggle.com/competitions/nfl-impact-detection/discussion/209403" target="_blank">https://www.kaggle.com/competitions/nfl-impact-detection/discussion/209403</a> </li>
<li>Concatenate features extracted by CNN with tracking features </li>
<li>BiLSTM layers + FC layer infer sequential frames at once </li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3584397%2Fcc3cb2c68d98704faeb57d48d11ecea4%2Fcnn_model.jpg?generation=1677942632702738&alt=media" alt=""></p>
<h2>■stage2 GBDT part overview</h2>
<p>The key feature in this model is the logit from stage1.<br>
The goal is to further improve the score by combining logit with tracking data and other data to create a binary classification model.</p>
<h2>Data</h2>
<ul>
<li>distance <= 2</li>
<li>swap player1 and player2 features then concatenate them vertically to the original data.</li>
<li>average swap and original features for final prediction</li>
</ul>
<h2>Features</h2>
<h3>Raw value</h3>
<ul>
<li>x_position, y_position, speed, distance, orientation, acceleration, direction, sa, jersey_number of each player</li>
<li>distance between players</li>
<li>frame number</li>
<li>nn_pred</li>
</ul>
<h3>Helmet</h3>
<p><a href="https://www.kaggle.com/code/ahmedelfazouan/nfl-player-contact-detection-helmet-track-ftrs" target="_blank">https://www.kaggle.com/code/ahmedelfazouan/nfl-player-contact-detection-helmet-track-ftrs</a></p>
<h3>Simple computational features</h3>
<p>The following are calculated for x_position, y_position, speed, distance, orientation, acceleration, direction, sa</p>
<ul>
<li>Absolute difference between the two players' values, and their product</li>
<li>Difference from the average of all players in each frame</li>
</ul>
<h3>Aggregate features</h3>
<p>For distance, nn_pred, sa, distance, speed</p>
<ul>
<li>Aggregate features for (game_play, position), (game_play, player), (game_play, team), (game_play, step)</li>
<li>Aggregate features for each (game_play, player_1, player_2)</li>
<li>shift, diff (-3~3) for each (game_play, player_1, player_2) (see the sketch after this list).</li>
</ul>
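<p>A compact sketch of the per-pair shift/diff and aggregate features with pandas (column names are illustrative):</p>
<pre><code>import pandas as pd

keys = ["game_play", "nfl_player_id_1", "nfl_player_id_2"]
df = df.sort_values(keys + ["step"])

for col in ["distance", "nn_pred", "sa", "speed"]:
    grp = df.groupby(keys)[col]
    for k in range(1, 4):
        df[f"{col}_shift_p{k}"] = grp.shift(k)          # past values
        df[f"{col}_shift_m{k}"] = grp.shift(-k)         # future values
        df[f"{col}_diff_p{k}"] = df[col] - grp.shift(k)
    df[f"{col}_pair_mean"] = grp.transform("mean")
    df[f"{col}_pair_std"] = grp.transform("std")
</code></pre>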
<h2>model</h2>
<ul>
<li>lgbm</li>
<li>xgboost</li>
</ul>
<h2>■Ensemble</h2>
<h3>stage1 (NN part)</h3>
<p>Created models on different backbones and different sequence lengths as follows</p>
<ul>
<li>backbone<ul>
<li>resnet18,34,50</li>
<li>resnext50</li>
<li>efficientnet b0,b1</li></ul></li>
<li>sequence length<ul>
<li>16,32,48,64</li></ul></li>
</ul>
<h3>stage2 (GBDT part)</h3>
<p>Two models were created with the same features</p>
<ul>
<li>LightGBM</li>
<li>XGBoost</li>
</ul>
<h3>Forward Selection</h3>
<p>Created models for (almost) all combinations of the above, and use Forward Selection </p>
<ul>
<li>Forward Selection was based on the excellent kernel by chris here.<br>
<a href="https://www.kaggle.com/code/cdeotte/forward-selection-oof-ensemble-0-942-private/notebook" target="_blank">https://www.kaggle.com/code/cdeotte/forward-selection-oof-ensemble-0-942-private/notebook</a></li>
<li>It is a simple method. so we expected to avoid overfit.</li>
<li>The following models were finally selected by Forward Selection</li>
</ul>
<table>
<thead>
<tr>
<th>sequence length</th>
<th>backbone</th>
<th>gbdt</th>
<th>cv</th>
</tr>
</thead>
<tbody>
<tr>
<td>64</td>
<td>resnext50</td>
<td>xgb</td>
<td>0.7918</td>
</tr>
<tr>
<td>64</td>
<td>resnext50</td>
<td>lgb</td>
<td>0.7906</td>
</tr>
<tr>
<td>64</td>
<td>effib0</td>
<td>lgb</td>
<td>0.79</td>
</tr>
<tr>
<td>32</td>
<td>resnext50</td>
<td>lgb</td>
<td>0.7935</td>
</tr>
<tr>
<td>32</td>
<td>effib0</td>
<td>lgb</td>
<td>0.7881</td>
</tr>
<tr>
<td>16</td>
<td>resnext50</td>
<td>xgb</td>
<td>0.7906</td>
</tr>
</tbody>
</table>
<ul>
<li>Final submission: CV 0.8016, LB 0.7902, PB 0.7913</li>
</ul>
<h2>Threshold</h2>
<p>We simply blended the predictions of the selected models (x 5 folds) and binarized them with a single threshold (a sketch follows the list below).</p>
<ul>
<li>We tried two thresholding variants: <ul>
<li>predictions themselves</li>
<li>percentile of the predictions</li></ul></li>
<li>We also tried a voting ensemble, but decided not to use it because the LB score was better with a single threshold.</li>
</ul>
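<p>A small sketch contrasting the two thresholding variants on the blended predictions (<code>model_preds</code> and the numeric values are illustrative):</p>
<pre><code>import numpy as np

blend = np.mean(np.stack(model_preds), axis=0)    # simple average of the selected models

# (1) threshold on the prediction values themselves
contact_by_value = blend > 0.29

# (2) threshold on the percentile of the predictions,
#     i.e. keep a fixed fraction of the highest-scored candidates
contact_by_percentile = blend > np.percentile(blend, 97.0)
</code></pre>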
<h2>Other tips</h2>
<p>In the inference notebook, the following were introduced to avoid OOM and timeout.</p>
<ul>
<li>using lru_cache to read images at high speed (see the sketch after this list)</li>
<li>PyTurboJPEG loads images faster than OpenCV</li>
<li>Polars helps reduce submission time.</li>
</ul>
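<p>A minimal sketch of the cached frame loading (cache size and loader are illustrative; PyTurboJPEG can be substituted for OpenCV):</p>
<pre><code>from functools import lru_cache

import cv2

@lru_cache(maxsize=4096)
def load_frame(path: str):
    # repeated crops taken from the same frame hit the cache instead of the disk
    return cv2.imread(path)
</code></pre>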
<h2>Acknowledgments</h2>
<p>zzy's excellent kernel is very helpful in our pipeline. <br>
<a href="https://www.kaggle.com/code/zzy990106/nfl-2-5d-cnn-baseline-inference" target="_blank">https://www.kaggle.com/code/zzy990106/nfl-2-5d-cnn-baseline-inference</a></p>
<p>Thanks, <a href="https://www.kaggle.com/yururoi" target="_blank">@yururoi</a> <br>
The input to the CNN is the latter, that is (BxN)xCxHxW.</p>
<p>Thank you, <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a></p>
<p>The stage 2 gain depends mainly on the sequence length. <br>
If the sequence length is small, the gain is large, but if the sequence length is large, there is almost no gain.</p>
<table>
<thead>
<tr>
<th>sequence length</th>
<th>backbone</th>
<th>stage1 cv</th>
<th>stage2 cv</th>
</tr>
</thead>
<tbody>
<tr>
<td>16</td>
<td>resnext50</td>
<td>0.7868</td>
<td>0.7906(xgb)</td>
</tr>
<tr>
<td>32</td>
<td>resnext50</td>
<td>0.7929</td>
<td>0.7935(lgb)</td>
</tr>
<tr>
<td>32</td>
<td>effib0</td>
<td>0.785</td>
<td>0.7881(lgb)</td>
</tr>
<tr>
<td>64</td>
<td>resnext50</td>
<td>0.7923</td>
<td>0.7918(xgb)</td>
</tr>
<tr>
<td>64</td>
<td>resnext50</td>
<td>0.7923</td>
<td>0.7906(lgb)</td>
</tr>
<tr>
<td>64</td>
<td>effib0</td>
<td>0.7893</td>
<td>0.79(lgb)</td>
</tr>
</tbody>
</table> |
1st and Future - Player Contact Detection | 9th place solution - Team JK | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank the organizers for such an interesting competition! <br>
We share the Team JK's solution. <br>
Team Member: <a href="https://www.kaggle.com/vostankovich" target="_blank">@vostankovich</a>, <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>, <a href="https://www.kaggle.com/anonamename" target="_blank">@anonamename</a>, <a href="https://www.kaggle.com/yururoi" target="_blank">@yururoi</a>, <a href="https://www.kaggle.com/tomo20180402" target="_blank">@tomo20180402</a><br>
<br></p>
<h1>Overview</h1>
<hr>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2126876%2F4daeeaa70bf25eda3c4bb2519bcac346%2Fjk_solution_image.png?generation=1677991277683737&alt=media" alt=""></p>
<h1>1st stage</h1>
<hr>
<h2>yuki part</h2>
<ul>
<li>(1) of fig.</li>
<li>See <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/392046" target="_blank">yuki's post</a>.</li>
</ul>
<h2>Vladislav part</h2>
<ul>
<li>(2) of fig.</li>
<li>Features are mainly created from sensor data, but helmets bboxes information is also used.</li>
<li>Trained XGB and LGBM models for P2P and P2G individually.</li>
<li>P2P and P2G have different features. There are 133 features for pair contact and 119 features for ground contact.</li>
<li>Here is an explanation of some features:<ul>
<li>Excluded speed, since it correlates to distance.</li>
<li>Step (or frame_number), it boosts score a lot.</li>
<li>Player position on field (defense, offense or special)</li>
<li>Twist feature (direction-orientation)</li>
<li>Same team feature</li>
<li>Is home team feature</li>
<li>Number of players/opponents within (1, 3, 5 meters) is quite a good feature</li>
<li>Number of players in opposite orientation</li>
<li>Acceleration of player ratio to mean acceleration of all players per step</li>
<li>Diff of features of same player (in time domain)</li>
<li>Time features (just copy of previous and future steps features)</li>
<li>Difference of features between two players</li>
<li>Euclidean distance is the main feature, and other features are based on it as well</li>
<li>Features from helmets dataframe (bboxes coordinates, bboxes height & width for each view and perimeter)</li>
<li>IoU helmets features</li></ul></li>
<li>XGB/LGBM models were trained with common hyperparameters that can be seen in public notebooks. Only added reg_alpha = 0.1 for both models.</li>
</ul>
<h2>anonamename part - combined knowledge of team members</h2>
<ul>
<li>(3) of fig.</li>
<li>2-stage model of 2.5D/3D CNN and GBDT (5fold CV:0.778/Public:0.775/Private:0.773)</li>
<li>2.5D/3D CNN<ul>
<li>based <a href="https://www.kaggle.com/code/zzy990106/nfl-2-5d-cnn-baseline-inference" target="_blank">public notebook</a>.</li>
<li>input<ul>
<li>image<ul>
<li>15frames (±7frame, skip_frame=1)</li>
<li>use both view (Endzone and Sideline)</li></ul></li>
<li>tracking data<ul>
<li>64 features (created by <a href="https://www.kaggle.com/vostankovich" target="_blank">@vostankovich</a>)</li></ul></li></ul></li>
<li>model<ul>
<li>based <a href="https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout/discussion/359932" target="_blank">DFL competition 1st solution</a>.</li>
<li>pipeline : 15frames 2.5D -> Residual3DBlock -> GeM (created by <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>; a rough sketch follows this list)</li>
<li>2.5D backbone : tf_mobilenetv3_small_minimal_100.in1k</li>
<li>multi-label classification (created by <a href="https://www.kaggle.com/anonamename" target="_blank">@anonamename</a>)<ul>
<li>num_classes=2(Player-Player contact(P2P) and Player-Ground contact(P2G)) + nn.BCEWithLogitsLoss</li></ul></li>
<li>fold : StratifiedGroupKFold(n_splits=5).split(y="contact_org", groups="game_id") (created by <a href="https://www.kaggle.com/tomo20180402" target="_blank">@tomo20180402</a>)<ul>
<li>Set different labels for contacts between same team, different teams and ground.</li>
<li>train data under sampling : positive:negative = 1:5 (change under sampling data for each epoch)</li></ul></li></ul></li>
<li>optimizer : AdamW(lr=1e-3->1e-5 CosineAnnealingLR, weight_decay=1e-5)</li>
<li>epoch : 15</li>
<li>augmentation<ul>
<li>HorizontalFlip, ShiftScaleRotate, MotionBlur, OpticalDistortion, CoarseDropout</li>
<li>Mixup at the last layer (like a <a href="https://arxiv.org/abs/1806.05236" target="_blank">Manifold mixup</a>. created by <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>)</li></ul></li>
<li>TTA : HorizontalFlip</li></ul></li>
<li>GBDT<ul>
<li>Create xgboost and lightgbm for P2P and P2G individually.</li>
<li>tracking feature (created by <a href="https://www.kaggle.com/vostankovich" target="_blank">@vostankovich</a>)</li>
<li>2.5D/3D CNN prob feature<ul>
<li>groupby(["game_play", "nfl_player_id_1", "nfl_player_id_2"]) : shift(), diff(), mean(), max(), min(), std()</li></ul></li></ul></li>
</ul>
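<p>The "15frames 2.5D -> Residual3DBlock -> GeM" pipeline could look roughly like the PyTorch sketch below; the backbone call, the 3D block and the head are simplified stand-ins, not the team's exact implementation.</p>
<pre><code>import timm
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-mean pooling over the temporal and spatial dimensions."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps

    def forward(self, x):                        # x: (B, C, T, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool3d(x, 1).pow(1.0 / self.p).flatten(1)

class Contact25D3D(nn.Module):
    def __init__(self, backbone="tf_mobilenetv3_small_minimal_100", n_classes=2):
        super().__init__()
        self.cnn2d = timm.create_model(backbone, pretrained=False, features_only=True)
        c = self.cnn2d.feature_info.channels()[-1]
        self.block3d = nn.Sequential(            # stand-in for the Residual3DBlock
            nn.Conv3d(c, c, kernel_size=3, padding=1), nn.BatchNorm3d(c), nn.ReLU(),
        )
        self.pool = GeM()
        self.head = nn.Linear(c, n_classes)      # P2P and P2G logits (BCEWithLogitsLoss)

    def forward(self, x):                        # x: (B, T, 3, H, W), T = 15 frames
        b, t, ch, h, w = x.shape
        feat = self.cnn2d(x.view(b * t, ch, h, w))[-1]                  # per-frame 2D CNN
        feat = feat.view(b, t, *feat.shape[1:]).permute(0, 2, 1, 3, 4)  # (B, C, T, h', w')
        feat = feat + self.block3d(feat)                                # residual 3D refinement
        return self.head(self.pool(feat))
</code></pre>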
<h2>tomo part</h2>
<ul>
<li>(4) of fig.</li>
<li>single-stage NN model (3fold CV:0.771/Public:0.759/Private:0.760)</li>
<li>multi-class classification : P2P (same team), P2P (different team), P2G<ul>
<li>output is 6 labels which are used as features of team's 2nd stage</li></ul></li>
<li>execution time : 2h</li>
<li>validation : StratifiedGroupKFold(n_splits=3).split(y="contact_org", groups="game_id")<ul>
<li>same as anonamename part </li></ul></li>
<li>dataset<ul>
<li>train data under sampling : Reduce negative sample of P2P contact (same team) by one-third.</li></ul></li>
<li>feature<ul>
<li>table feature : 54<ul>
<li>3 types distance : euclidean, chebyshev, cityblock</li>
<li>3 types distance rank : among all, same team, different team</li>
<li>median of helmet width and height</li>
<li>normalized distance by mean of helmet width and height<ul>
<li>The mean of helmet width and height are calculated from all players.</li></ul></li>
<li>total rank from the center coordinates of 2player's helmets</li>
<li>ratio of helmet detection exist : both players, each player</li>
<li>cosine similarity : direction, orientation</li>
<li>predicted euclidean distance</li>
<li>other simple features : step, is_same_team, ground_flag, etc.</li></ul></li>
<li>image feature<ul>
<li>10 images in 2.5D CNN<ul>
<li>5frames each for Sideline and Endline (n-4, n-2, n, n+2, n+4)</li>
<li>image_size = (256, 256)</li></ul></li>
<li>cropping method<ul>
<li>Change the cropping method depending on whether both players’ helmets exist.<ul>
<li>both players exist : Make sure both players are visible.</li>
<li>one player exist : Make sure the player is in the center.</li></ul></li>
<li>Crop the image with the mean of helmet width and height as a variable.</li>
<li>Give priority to the downward direction.</li></ul></li>
<li>mean of image exist : 4<ul>
<li>each for Sideline and Endline</li></ul></li></ul></li></ul></li>
<li>TTA<ul>
<li>flip sensor and image in one of three models inferences<ul>
<li>sensor : exchange player1,2</li>
<li>image : HorizontalFlip</li></ul></li></ul></li>
</ul>
<h1>2nd stage</h1>
<hr>
<ul>
<li>model : lgbm × 4</li>
<li>feature : shift features of each models’ predictions and sensor data (-13~+13)</li>
<li>postprocessing : 4 predictions by lgbm -> simple average -> moving average -> final prediction</li>
</ul> |
1st and Future - Player Contact Detection | Result of Late Submission: 2D-CNN + XGB + 1D-CNN (Private LB: 0.78703) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. I decided to share my solution because I thought my approach of using a strict cross-validation fold split was unique in this competition. I wanted to show how effective this approach is. Hope it helps you.</p>
<p>I will share experiment result after competition (LB: 0.78703) for future reference. My original solution is available <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/391607" target="_blank">here</a> (Private LB: 0.76741).</p>
<p>First of all, I appreciate all the solution write-ups by others. I got a lot of insight from their solutions. For example, giving numeric features to the CNN as isolated channels is from Team Hydrogen's solution, and using player-anyone & player-ground contact information is adopted from Qishen and Bo's solution (and from the 18th place team).</p>
<h2>What I Did on the Late Submissions</h2>
<ol>
<li>use <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/393400" target="_blank">less-player-duplicated fold split</a></li>
<li>5-channel 2D-CNN (channel design is the same as <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/391740" target="_blank">Team Hydrogen's solution</a> except for not using 2.5D)</li>
<li>add group feature & lag feature of 1st/2nd stage prediction score (+group +lag)</li>
<li>add player-anyone contact feature (+group +lag)</li>
<li>add player-ground contact feature on player-player model (+group +lag)</li>
<li>apply player-player sequence-level pruning</li>
<li>add 4th-stage 1D-CNN</li>
</ol>
<h2>Discussion</h2>
<p>The 5-channel 2D-CNN gave me a notable boost over my original architecture, a 3-channel 2.5D-CNN (+0.54%); it also greatly reduces training/scoring time.</p>
<p>One of the largest gains came from the strictly split CV. As I already wrote in <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/393570" target="_blank">the post</a>, the fold split with less player duplication drastically improves CV/LB correlation. Thanks to this fold split, I could boost LB further with additional features in the 3rd stage (+0.87%).</p>
<p>Sequence-level pruning (+0.16%) and the 1D-CNN (+0.26%) boosted the score further although there was no gain on CV. One possible reason is that the CV I used is too strict, while the test data may contain a few players that also appeared in the train data.</p>
<h2>Tricks to speed up scoring time</h2>
<p>I also reduced scoring time by the following tricks.</p>
<ul>
<li>use 2D CNN instead of 2.5D (3-4h -> 2h)</li>
<li>use <code>@lru_cache</code> when loading image (2h -> 1h)</li>
<li>use numpy array instead of JPEG (1h -> 45 min)</li>
</ul>
<h2>What didn't work</h2>
<ul>
<li>using the prediction score of an additional CNN trained on player-anyone contact labels (it only scored on par with the group features extracted from the player-player contact CNN)</li>
<li>2.5D-CNN (it only scored on par with the 2D-CNN)</li>
</ul>
<h2>Score Results</h2>
<table>
<thead>
<tr>
<th>Submissions</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
<th>architecture</th>
<th>description</th>
<th>#features(p2g)</th>
<th>#features(p2p)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0.7950</td>
<td>0.7701</td>
<td>0.7738</td>
<td>XGB + 2D-CNN(5-channel) + XGB</td>
<td>stage 1 feats. + stage-2 pred score</td>
<td>1032</td>
<td>1032</td>
</tr>
<tr>
<td>2</td>
<td>0.8038</td>
<td>0.7806</td>
<td>0.7773</td>
<td>XGB + 2D-CNN(5-channel) + XGB</td>
<td>#1 + lag & group feats. of stage-2 pred score</td>
<td>1057</td>
<td>1057</td>
</tr>
<tr>
<td>3</td>
<td>0.8035</td>
<td>0.7788</td>
<td>0.7799</td>
<td>XGB + 2D-CNN(5-channel) + XGB</td>
<td>#2 + lag & group feats. of stage-1 pred score</td>
<td>1083</td>
<td>1083</td>
</tr>
<tr>
<td>4</td>
<td>0.8039</td>
<td>0.7799</td>
<td>0.7782</td>
<td>XGB + 2D-CNN(5-channel) + XGB</td>
<td>#3 + stage 2 p2anyone feats (+group)</td>
<td>1087</td>
<td>1087</td>
</tr>
<tr>
<td>5</td>
<td>0.8040</td>
<td>0.7800</td>
<td>0.7808</td>
<td>XGB + 2D-CNN(5-channel) + XGB</td>
<td>#3 + stage 2 p2anyone feats (+group +lag)</td>
<td>1104</td>
<td>1104</td>
</tr>
<tr>
<td>6</td>
<td>0.8055</td>
<td>0.7815</td>
<td>0.7820</td>
<td>XGB + 2D-CNN(5-channel) + XGB</td>
<td>#5 + stage 1 p2anyone feats (+group)</td>
<td>1108</td>
<td>1108</td>
</tr>
<tr>
<td>7</td>
<td>0.8053</td>
<td>0.7808</td>
<td>0.7819</td>
<td>XGB + 2D-CNN(5-channel) + XGB</td>
<td>#5 + stage 1 p2anyone feats (+group +lag)</td>
<td>1125</td>
<td>1125</td>
</tr>
<tr>
<td>8</td>
<td>0.8064</td>
<td>0.7827</td>
<td>0.7825</td>
<td>XGB + 2D-CNN(5-channel) + XGB</td>
<td>#6 + p2g feat on p2p model (+group +lag)</td>
<td>1108</td>
<td>1130</td>
</tr>
<tr>
<td>9</td>
<td>0.8051</td>
<td>0.78368</td>
<td>0.78412</td>
<td>XGB + 2D-CNN(5-channel) + XGB</td>
<td>#8 + sequence level pruning</td>
<td>1108</td>
<td>1130</td>
</tr>
<tr>
<td>10</td>
<td>0.8053</td>
<td>0.78687</td>
<td>0.78672</td>
<td>XGB + 2D-CNN(5-channel) + XGB + 1D-CNN</td>
<td>#9 + 4th stage (1D-CNN; input stages 1-3 output)</td>
<td>1108</td>
<td>1130</td>
</tr>
<tr>
<td>11</td>
<td>0.80577</td>
<td>0.78713</td>
<td>0.78703</td>
<td>XGB + 2D-CNN(5-channel) + XGB + 1D-CNN</td>
<td>#9 + 4th stage (1D-CNN; input only stage 3 output)</td>
<td>1108</td>
<td>1130</td>
</tr>
</tbody>
</table> |
2018 Data Science Bowl | 10th place Code+Datasets (LB: 0.591) Mask R-CNN single model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Note: (currently 7th place while leaderboard is being updated) </p>
<h2>TLDR: Resources related to this project</h2>
<ul>
<li><a href="https://www.kaggle.com/gangadhar/nuclei-segmentation-in-microscope-cell-images">Data used to build the models</a></li>
<li><a href="https://github.com/gangadhar-p/NucleiDetectron">Code</a></li>
<li><a href="https://github.com/gangadhar-p/NucleiDetectron">Detailed Writeup</a></li>
<li><a href="https://www.kaggle.com/gangadhar/nuclei-detectron-models-for-2018-data-science-bowl">Pretrained model, predictions and visualization of submission</a></li>
</ul>
<h2>Preview of training data</h2>
<img src="https://i.imgur.com/YBdPS79.jpg">
<p></p>
<h2>Dataset preparation</h2>
<ul>
<li>There were several nuclei datasets with outlines as annotations.
<ul><li>Applied classical computer vision techniques to convert ground truth from outlines to masks.</li>
<li>This involved adding boundary pixels to the image so all contours are closed.</li>
<li>Given outlines of cells with overlaps/touching or at border,
<ul><li>Mark an outer contour to encompass contours that are at image edges.</li>
<li>then do cv2.findContours to get the polygons of mask.</li>
<li>Ref <a href="https://github.com/gangadhar-p/NucleiDetectron/blob/master/lib/datasets/nuclei/mask_encoding.py#L184">parse_segments_from_outlines</a></li></ul></li></ul></li>
<li>Standardized all datasets into COCO mask RLE JSON file format.
<ul><li>You can use <a href="https://github.com/cocodataset/cocoapi">cocoapi</a> to load the annotations (a short loading sketch follows this list).</li></ul></li>
<li>Cut image into tiles when images are bigger than 1000 pixels
<ul><li>This was necessary since large image features did not fit in GPU memory.</li></ul></li>
</ul>
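<p>For the COCO-style annotations, loading and decoding can be done with pycocotools roughly as follows (the annotation file name is a placeholder):</p>
<pre><code>import numpy as np
from pycocotools.coco import COCO

coco = COCO("nuclei_train.json")                 # placeholder annotation file
img_ids = coco.getImgIds()
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0]))

# one binary mask per nucleus, decoded from the RLE/polygon segmentation
masks = np.stack([coco.annToMask(a) for a in anns])
</code></pre>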
<h2>Preprocessing</h2>
<ul>
<li>Cluster images into classes based on the color statistics.</li>
<li>Normalize classes size
<ul><li>Oversample/undersample images from clusters to a constant number of images per class in each epoch.</li></ul></li>
<li>Fill holes in masks</li>
<li>Split nuclei masks that are fused
<ul><li>Applied morphological Erosion and Dilation to separate fused cells</li>
<li>Use statistics of nuclei sizes in an image to find outliers</li></ul></li>
<li><a href="http://ufldl.stanford.edu/wiki/index.php/Whitening">ZCA whitening of images</a></li>
<li>Zero mean unit variance normalization</li>
<li>Grey scale: <a href="http://tdlc.ucsd.edu/SV2013/Kanan_Cottrell_PLOS_Color_2012.pdf">Color-to-Grayscale: Does the Method Matter in Image Recognition</a>.
<ul><li>It is very important how you convert to greyscale: there are many conversion algorithms, and a poor choice loses potentially useful information.</li>
<li>Luminous</li>
<li>Intensity</li>
<li>Value: This is the method I used.</li></ul></li>
<li><a href="https://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html">Contrast Limited Adaptive Histogram Equalization</a></li>
</ul>
<h2>Augmentation</h2>
<p>Data augmentation is one of the key to achieve good generalization in this challenge.</p>
<h3>Training time augmentation</h3>
<ul>
<li>Invert
<ul><li>This augmentation helped in reducing generalization error significantly</li>
<li>Randomly choosing to invert caused the models to generalize across all kids of backgrounds in the local validation set.</li></ul></li>
<li>Geometric
<ul><li>PerspectiveTransform</li>
<li>This is very useful to make the circular looking cells to look stretched</li>
<li>PiecewiseAffine</li>
<li>Flip</li>
<li>Rotate (0, 90, 180, 270)</li>
<li>Crop</li></ul></li>
<li>Alpha blending
<ul><li>Create geometrical blur by affine operation</li>
<li>Shear, rotate, translate, scale</li></ul></li>
<li>Pixel
<ul><li>AddToHueAndSaturation</li>
<li>Multiply</li>
<li>Dropout, CoarseDropout</li>
<li>ContrastNormalization</li></ul></li>
<li>Noise
<ul><li>AdditiveGaussianNoise</li>
<li>SimplexNoiseAlpha</li>
<li>FrequencyNoiseAlpha</li></ul></li>
<li>Blur
<ul><li>GaussianBlur</li>
<li>AverageBlur</li>
<li>MedianBlur</li>
<li>BilateralBlur</li></ul></li>
<li>Texture
<ul><li>Superpixels</li>
<li>Sharpen</li>
<li>Emboss</li>
<li>EdgeDetect</li>
<li>DirectedEdgeDetect</li>
<li>ElasticTransformation</li></ul></li>
</ul>
<h3>Test time augmentation</h3>
<ol>
<li>Invert: Have improved the performance a lot</li>
<li>Multiple Scales 900, 1000, 1100</li>
<li>Flip left right</li>
</ol>
<h2>Architecture changes to baseline Detectron</h2>
<p>Detectron network configuration changes from the baseline e2e_mask_rcnn_X-152-32x8d-FPN-IN5k_1.44x.yaml are:</p>
<ol>
<li>Create small anchor sizes for small nuclei. RPN_ANCHOR_START_SIZE: 8 # default 32</li>
<li>Add more aspect ratios for nuclei that are close together or form elongated, cylinder-like structures. RPN_ASPECT_RATIOS: (0.2, 0.5, 1, 2, 5)</li>
<li>Increase the ROI resolution. ROI_XFORM_RESOLUTION: 14</li>
<li>Increase the number of detections per image from default 100. DETECTIONS_PER_IM: 500</li>
</ol>
<h2>Training</h2>
<ol>
<li>Decreased warmup fraction to 0.01</li>
<li>Increased warmup iterations to 10,000</li>
<li>Gave mask loss more weight WEIGHT_LOSS_MASK: 1.2</li>
</ol>
<h2>Segmentation Post processing</h2>
<ul>
<li>Threshold on area to remove masks below area of 15 pixels</li>
<li>Threshold on BBox confidence of 0.9</li>
<li>Mask NMS
<ul><li>In decreasing order of confidence, a simple union-mask strategy removes overlapping segments, or cuts segments at overlaps if the overlap is below 30% of the mask (a sketch follows this list).</li></ul></li>
</ul>
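<p>A hedged sketch of that union/suppression step over masks sorted by confidence (the 30% threshold mirrors the description above; this is not the exact competition code):</p>
<pre><code>import numpy as np

def mask_nms(masks, scores, overlap_frac=0.30):
    """masks: (N, H, W) boolean array, scores: (N,). Returns the cleaned masks."""
    order = np.argsort(scores)[::-1]             # highest confidence first
    occupied = np.zeros(masks.shape[1:], dtype=bool)
    kept = []
    for i in order:
        area = masks[i].sum()
        if area == 0:
            continue
        cut = masks[i] & ~occupied               # remove pixels already claimed
        overlap = area - cut.sum()
        if overlap / area > overlap_frac or cut.sum() == 0:
            continue                             # mostly overlapped -> drop the mask
        kept.append(cut)                         # otherwise keep the trimmed mask
        occupied |= cut
    return kept
</code></pre>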
<h2>What worked most</h2>
<ol>
<li>Inversion in augmentation</li>
<li>Blurring and frequency noise</li>
<li>Additional datasets, even though they caused a drop on the public leaderboard, I noticed no drop in local validation set.</li>
</ol>
<h2>What did not work</h2>
<ol>
<li>Mask dilations and erosions
<ul><li>This did not have any improvement in the segmentation in my experiments</li></ul></li>
<li>Use contour approximations in place of original masks
<ul><li>This did not have any improvement either. Maybe this could add a boost if using light augmentations.</li></ul></li>
<li>Randomly apply structuring like open-close</li>
<li>Soft NMS thresh
<ul><li>Did not improve accuracy</li></ul></li>
<li>Color images
<ul><li>Did not perform as well as grey images after augmentations</li></ul></li>
<li>Color style transfer. Take a source image and apply the color style to target image.</li>
<li>Style transfer: Was losing a lot of details on some nuclei but looked good on very few images.</li>
<li>Dilation of masks in post processing, this drastically increased error because the model masks are already good.</li>
<li>Distance transform and split masks during training.</li>
</ol>
<h2>Things I didn't have time to try</h2>
<ol>
<li>Ensemble multiple Mask R-CNN's</li>
<li>Two stage predictions with U-Net after box proposals.</li>
<li>Augmentation smoothing during training
<ul><li>Increase the noise and augmentation slowly during the training phase, like from 10% to 50%</li>
<li>Reduce the augmentation from 90% to 20% during training, for generalization and fitting.</li></ul></li>
<li>Experiment with different levels of augmentation individually across, noise, blur, texture, alpha blending.</li>
<li>Different layer normalization techniques, with batch size more than one image at a time. Need bigger GPU.</li>
<li>Little bit of hyperparameter search on thresholds and network architecture.</li>
</ol>
<h2>Things I did not think of</h2>
<p>U-Net with watershed, did not think this approach would outperform Mask R-CNN</p>
<h2>Acknowledgements:</h2>
<p>Kaggle community was a great source of inspiration and the discussions are very useful. Special thanks to Discussion Gradmaster <a href="https://www.kaggle.com/hengck23">hengck23</a>.</p>
<h2>Code References</h2>
<ul>
<li><p><a href="https://github.com/facebookresearch/detectron">Detectron</a>.
Ross Girshick and Ilija Radosavovic. Georgia Gkioxari. Piotr Doll\'{a}r. Kaiming He.
Github, Jan. 2018.</p></li>
<li><p><a href="https://github.com/aleju/imgaug">Image augmentation for machine learning experiments</a>.
Alexander Jung.
Github, Jan. 2015.</p></li>
<li><p><a href="https://www.kaggle.com/kmader/normalizing-brightfield-stained-and-fluorescence">Normalizing brightfield, stained and fluorescence</a>.
Kevin Mader.
Kaggle Notebook, Apr. 2018.</p></li>
<li><p><a href="https://www.kaggle.com/stainsby/fast-tested-rle-and-input-routines">Fast, tested RLE and input routines</a>.
Sam Stainsby.
Kaggle Notebook, Apr. 2018.</p></li>
<li><p><a href="https://www.kaggle.com/wcukierski/example-metric-implementation">Example Metric Implementation</a>.
William Cukierski.
Kaggle Notebook, Apr. 2018.</p></li>
</ul> |
2018 Data Science Bowl | 14th place insight (LB 0.578) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I will try to add it as a PR as soon as I get a chance. Maybe you should also add the "seed" used. The same parameters with different seeds can lead to different results.</p>
<p>Here is an overview of the model that reached place #14 on stage 2 with LB 0.578.
Basically, I trained a few Mask-RCNN models on different stratified folds derived from image clustering. Then I performed ensembling on the masks predicted by each model. </p>
<p><strong>Training</strong>
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/316192/9211/Train.png" alt="enter image description here"></p>
<p>Additional external data (links available in <a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/47572">related</a> thread):</p>
<ul>
<li>Dataset from TNBC (Breast Cancer) </li>
<li>Dataset from nuclei segmentation benchmark</li>
</ul>
<p>Balanced K-Folds:
Training images split in stratified folds with balanced modalities (see <a href="https://www.kaggle.com/mpware/stage1-eda-microscope-image-types-clustering">kernel</a> published)</p>
<p>Data cleaning: "data quality issues" <a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/47770">thread</a> was a great source.</p>
<ul>
<li>Bad masks and images excluded. </li>
<li>Images available in both training and testing dataset dropped as it may hurt training.</li>
<li>Fix small holes in masks with morphology.</li>
</ul>
<p>Normalization: </p>
<ul>
<li>Contrast/luminosity enhancements with dark background. </li>
<li>Grayscale conversion: For generalization purpose, to avoid stain specifics in model.</li>
</ul>
<p>Architecture:
Mask-RCNN models with regular augmentations (rotations, flips, scale ...).
ResNet101 was tried first as the backbone, but ResNet50 was finally selected as it was enough for only 2 classes.
The SGD optimizer was chosen even though convergence was slower.
120 epochs minimum before saving the best model.</p>
<p><strong>Execution</strong>
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/316192/9212/Execution.png" alt="enter image description here"></p>
<p>For each MRCNN model:
Simple TTA (flip up/down and left/right only) followed by ensembling based on IoU.
The rule applied: if all models agree then take the union of the masks; if one disagrees then discard the mask. This rule provided really stable results during stage#1. Also, the IoU criterion was a bit better than NMS on boxes (the other published <a href="https://www.kaggle.com/mpware/ensembling-on-instance-segmentation-lb-0-419">kernel</a> attempt).
<p>Final ensemble and cleaning:
Keep predictions (union of masks) of each fold based on majority vote.
Fill small holes in predicted masks</p>
<p>Resources:</p>
<ul>
<li>1 to 3 GPU (Tesla K80 12GB)</li>
<li>Google VM: n1-standard-8 (8vCPU, 50GB RAM)</li>
</ul>
<p>Results:
I think this model can score beyond 0.578 by including BBBC images.
BTW: I've just started a quick run with only 100 additional images from BBBC6 and limited epochs and I'm already at 0.565.</p>
<p>Et voila, I had a lot of fun with this competition. I was a total novice in the biology domain, which might be a reason why I did not overfit too much :-).
I tried a few approaches such as UNet and LinkNet with post-processing before moving to Mask-RCNN, but the 1st place solution used UNet, so congratulations to the topcoders team.
Thanks to Heng CherKeng too for all the references, thoughts and code sharing that kept the forum animated.</p>
<p>I used both MatterPort (based on Keras/TF) and the one provided by Heng (based on PyTorch) during stage#1. For stage#2 I just used the one from Heng because it was faster.</p>
<p>See below for the code. It computes IoU for 3 sets of masks:</p>
<pre><code>import numpy as np

def compute_iou_3d(masks0, masks1, masks2):
    # masks0/1/2 are label images: 0 is background, 1..N are the mask indices
    num_masks0 = len(np.unique(masks0))
    num_masks1 = len(np.unique(masks1))
    num_masks2 = len(np.unique(masks2))
    masks = np.array([masks0.flatten(), masks1.flatten(), masks2.flatten()])
    masks = np.moveaxis(masks, [0, 1], [1, 0])
    # Compute intersection between all objects
    intersection_all = np.histogramdd(masks, bins=(num_masks0, num_masks1, num_masks2))[0]
    intersection01 = np.histogram2d(masks0.flatten(), masks1.flatten(), bins=(num_masks0, num_masks1))[0]
    intersection02 = np.histogram2d(masks0.flatten(), masks2.flatten(), bins=(num_masks0, num_masks2))[0]
    intersection12 = np.histogram2d(masks1.flatten(), masks2.flatten(), bins=(num_masks1, num_masks2))[0]
    intersection01 = np.expand_dims(intersection01, -1)
    intersection02 = np.expand_dims(intersection02, 1)
    intersection12 = np.expand_dims(intersection12, 0)
    area_masks0 = np.histogram(masks0, bins=num_masks0)[0]
    area_masks1 = np.histogram(masks1, bins=num_masks1)[0]
    area_masks2 = np.histogram(masks2, bins=num_masks2)[0]
    area_masks0 = np.expand_dims(area_masks0, -1)
    area_masks0 = np.expand_dims(area_masks0, -1)
    area_masks1 = np.expand_dims(area_masks1, -1)
    area_masks1 = np.expand_dims(area_masks1, 0)
    area_masks2 = np.expand_dims(area_masks2, 0)
    area_masks2 = np.expand_dims(area_masks2, 0)
    # Compute union (https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle)
    union = area_masks0 + area_masks1 + area_masks2 - intersection01 - intersection02 - intersection12 + intersection_all
    # Exclude background
    intersection_all = intersection_all[1:, 1:, 1:]
    union = union[1:, 1:, 1:]
    union[union == 0] = 1e-9
    # Compute the intersection over union
    iou = intersection_all / union
    return iou
</code></pre>
<p>Then you can filter on an IoU threshold and merge masks (here an union):</p>
<pre><code># Convert each list of masks to an index image (one different label per mask);
# masks_array_to_index_image and test_masks_arrayN come from the author's pipeline
masks0 = masks_array_to_index_image(test_masks_array0)
masks1 = masks_array_to_index_image(test_masks_array1)
masks2 = masks_array_to_index_image(test_masks_array2)
iou = compute_iou_3d(masks0, masks1, masks2)
iou_groups = np.where(iou > IoU_THRESHOLD)
results = []
for group_index in range(0, len(iou_groups[0])):
    m0_group_index = iou_groups[0][group_index]
    m1_group_index = iou_groups[1][group_index]
    m2_group_index = iou_groups[2][group_index]
    iou_group_index = iou[m0_group_index][m1_group_index][m2_group_index]
    result = (group_index, [m0_group_index, m1_group_index, m2_group_index], iou_group_index)
    results.append(result)
# Sort results by ascending IoU. Tuple is group_index + [matching mask index in each model] + IoU
results.sort(key=lambda tup: tup[2])
# Merge masks (shown here for one matched group)
idx = 0
result = results[idx]
result_iou = result[2]
mask0_group_index = test_masks_array0[result[1][0]]
mask1_group_index = test_masks_array1[result[1][1]]
mask2_group_index = test_masks_array2[result[1][2]]
merge_group_index = mask0_group_index + mask1_group_index + mask2_group_index
</code></pre>
<p>The compute_iou_3d can be generalized to N but I didn't have time to complete it.</p>
<p>Link is available in "Official External Data thread" at <a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/47572">https://www.kaggle.com/c/data-science-bowl-2018/discussion/47572</a></p>
<p>Just search for BBBC.</p> |
2018 Data Science Bowl | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Dear all, please refer to the link <a href="https://github.com/jacobkie/2018DSB">https://github.com/jacobkie/2018DSB</a> for my solution.</p> |
2018 Data Science Bowl | 3th place on private LB: Matterport's Mask-RCNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I have updated the description with a link to the source code where you can view the test time augmentation ensembling. As I mention in the README, this test time augmentation is quite tricky in that it has quite a few parameters to tune and a simple binary dilation operation performs better, although it is easy to say now that we have access to the private leaderboard scores..</p>
<p>I have modified the README with a link to the weights, the file is too large to upload to github</p>
<p>I actually used a bit of both... I started using the code from <a href="https://github.com/matterport/Mask_RCNN">https://github.com/matterport/Mask_RCNN</a> at the beginning of the competition and worked on a clone of that repository on my computer for most of the competition (which I guess is close to Type 1), but that version lacked the "pad64" for predicting on variable size inputs so a few days before the competition ended I moved to the latest version. I have organized all my code and I now have a working version which is a Type 2 with a few changes in the code. I hope to share that code during next week.</p>
<p>The third place solution, tied with #2 jacobkie at 0.614 on the Private Leaderboard, is based on a single Mask-RCNN model using Matterport's Mask-RCNN (<a href="https://github.com/matterport/Mask_RCNN">https://github.com/matterport/Mask_RCNN</a>) as the code base. </p>
<p><strong>Summary</strong></p>
<hr>
<p>I don't think I have done many different things to what others have reported and my solution is quite simple. I'll be happy to share the code once I have the approval by kaggle and sponsors. I tried many different things, but the main two contributions are:</p>
<p>1) <strong>Strong scaling augmentation</strong>, a lot of zooming in and out and aspect ratio changes before taking the 512x512 crops used as inputs to the model during training.</p>
<p>2) <strong>Test time augmentation</strong>, I used 15 different augmentations at test time with different rotations, scalings, channel color shifts, etc. This takes a loooong time (aprox. 2 days for the stage_2 test set) and a binary dilation post-processing actually gives a very similar score, so I would use the latter if asked now (although it is easy to tell now that we can see the PL scores..)</p>
<p><strong>Training data</strong></p>
<hr>
<p>No external data was used, only stage 1 training set. I used the corrected data-set compiled in: <a href="https://github.com/lopuhin/kaggle-dsbowl-2018-dataset-fixes">https://github.com/lopuhin/kaggle-dsbowl-2018-dataset-fixes</a>
I didn't have the time to include any of the external data; I was also afraid that the different annotation styles might introduce unwanted bias in the predictions.
No preprocessing.</p>
<p><strong>Augmentations</strong></p>
<hr>
<p>In addition to the scaling augmentation mentioned above I used left-right and up-down flips, random 90 degree rotations, random additional rotation on top of those, random channel color shifts </p>
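<p>As an illustration only (the ranges below are guesses, not the exact training values), such an augmentation pipeline can be written with imgaug, which newer versions of the Matterport code accept through the <code>augmentation</code> argument of <code>train()</code>:</p>
<pre><code>import imgaug.augmenters as iaa

augmentation = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Flipud(0.5),
    iaa.Rot90((0, 3)),                                         # random 90-degree rotations
    iaa.Affine(scale={"x": (0.5, 2.0), "y": (0.5, 2.0)},       # heavy zoom / aspect jitter
               rotate=(-15, 15)),                              # additional small rotation
    iaa.Add((-20, 20), per_channel=True),                      # channel color shift
    iaa.CropToFixedSize(width=512, height=512),
])
</code></pre>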
<p><strong>Parameters</strong></p>
<hr>
<p>Here are some of the parameters for comparison (<a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/54920">https://www.kaggle.com/c/data-science-bowl-2018/discussion/54920</a>):</p>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr><td>CodeBase</td><td>Type-1 and 2</td></tr>
<tr><td>MEAN_PIXEL</td><td>[123.7, 116.8, 103.9]</td></tr>
<tr><td>LEARNING_RATE</td><td>Start 0.001 and down to 3*10^-5</td></tr>
<tr><td>LEARNING_SCHEDULE</td><td>~120, always "all"</td></tr>
<tr><td>RPN_ANCHOR_RATIOS</td><td>[0.5, 1, 2]</td></tr>
<tr><td>USE_MINI_MASK</td><td>True</td></tr>
<tr><td>MINI_MASK_SHAPE</td><td>(56, 56)</td></tr>
<tr><td>GPU_COUNT</td><td>1</td></tr>
<tr><td>IMAGES_PER_GPU</td><td>2</td></tr>
<tr><td>STEPS_PER_EPOCH</td><td>332</td></tr>
<tr><td>VALIDATION_STEPS</td><td>0</td></tr>
<tr><td>BACKBONE</td><td>resnet101</td></tr>
<tr><td>NUM_CLASSES</td><td>1+1</td></tr>
<tr><td>IMAGE_MIN_DIM</td><td>512</td></tr>
<tr><td>IMAGE_MAX_DIM</td><td>Not used</td></tr>
<tr><td>IMAGE_PADDING</td><td>Not used</td></tr>
<tr><td>RPN_ANCHOR_SCALES</td><td>8, 16, 32, 64, 128</td></tr>
<tr><td>RPN_ANCHOR_STRIDE</td><td>1</td></tr>
<tr><td>BACKBONE_STRIDES</td><td>4, 8, 16, 32, 64</td></tr>
<tr><td>RPN_TRAIN_ANCHORS_PER_IMAGE</td><td>256</td></tr>
<tr><td>IMAGE_MIN_SCALE</td><td>Not used</td></tr>
<tr><td>IMAGE_RESIZE_MODE</td><td>crop at training, pad64 for inference</td></tr>
<tr><td>RPN_NMS_THRESHOLD</td><td>0.7</td></tr>
<tr><td>DETECTION_MIN_CONFIDENCE</td><td>0.9</td></tr>
<tr><td>DETECTION_NMS_THRESHOLD</td><td>0.2</td></tr>
<tr><td>TRAIN_ROIS_PER_IMAGE</td><td>600</td></tr>
<tr><td>DETECTION_MAX_INSTANCES</td><td>512</td></tr>
<tr><td>MAX_GT_INSTANCES</td><td>256</td></tr>
<tr><td>init_with</td><td>coco</td></tr>
<tr><td>DATA_AUGMENTATION</td><td>scaling, crop, flip-lr, flip-up, 90 rotation, rotation, channel_shift</td></tr>
</tbody>
</table>
<hr>
<p>Source code is now available <a href="https://github.com/Gelu74/DSB_2018">follow this link to github</a></p> |
2018 Data Science Bowl | 5th place solution (based only on Mask-RCNN) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, Bing! It's the first bullet of post-processing section. </p>
<ul>
<li>Combined predictions on actual image and horizontally flipped image: took unions of masks with maximum overlap and removed false positive masks with small overlap.</li>
</ul>
<p>Agh, I see what you guys are saying. Yes, <code>data_to_array</code> function should be executed before functions <code>preprocess_external_data</code> and <code>make_n_save_mosaic</code>. I pushed an update to github repo. Thanks for reporting that issue.</p>
<p>Hi Chrisk,</p>
<p>I would suggest dividing up the images into smaller pieces, make predictions for each piece and then patch the predictions into one piece. Easiest you can divide the image into 4 pieces and do the predictions. Hope it helps!</p>
<p>That sounds like a solid plan. For calculating connected domains and merging, I would suggest looking at making mosaics functions in the preprocessing module.</p>
<p><a href="/lvguofeng">@lvguofeng</a> and @JubaerHossain. Sorry for the late response. Glad that you guys have sorted out the issue. The .csv files are now available on the same repository.</p>
<p>I think U-net based solution could work too. A lot of participants have used U-net for this competition. For instance, see <a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/54426">this post</a>. Good luck with your MSc dissertation!</p>
<p>Hi! I think it is definitely possible. I would even split large .svs files into smaller ones to get more training data. </p>
<p>Codes with some comments are now available on GitHub: <a href="https://github.com/mirzaevinom/data_science_bowl_2018">https://github.com/mirzaevinom/data_science_bowl_2018</a>. Hope you find it helpful!</p>
<p>Fixed! Thanks for pointing out.</p>
<p>Thanks, Russ! Congrats on the 4th place too!</p>
<h2>Model overview</h2>
<p>Codes now available on GitHub: <a href="https://github.com/mirzaevinom/data_science_bowl_2018">https://github.com/mirzaevinom/data_science_bowl_2018</a></p>
<p>For this competition, I modified <a href="https://github.com/matterport/Mask_RCNN">Matterport's</a> implementation of <a href="https://arxiv.org/abs/1703.06870">Mask-RCNN</a> deep neural network for object instance segmentation. I adapted the existing model configurations to detect small nuclei in images with varying size and modality. To ensure that the model doesn't overfit, I used an <a href="https://www.kaggle.com/voglinio/external-h-e-data-with-mask-annotations">external dataset</a> and relied heavily on image augmentation. Moreover, generated mosaics from train images based on <a href="https://www.kaggle.com/bonlime/train-test-image-mosaic">this notebook</a>. To improve generalizability of the model, I split (using stratification) the <code>stage1_train</code> dataset into train and validation sets based on 5 image modalities provided by <a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/48130">Allen Goodman</a>. After training the model using Resnet101 as a backbone encoder and Adam as an optimizer, I improved prediction accuracy by test time augmentation and post-processing the masks.</p>
<p><em><strong>Huge thanks to Heng CherKeng!!! I learned a lot from your posts.</strong></em></p>
<h2>Training Method(s)</h2>
<h3>Pre-processing</h3>
<ul>
<li>I noticed some issues with the provided masks. Therefore, used the annotations and mask provided by <a href="https://github.com/lopuhin/kaggle-dsbowl-2018-dataset-fixes">Konstantin Lopuhin</a> in <a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/47572">data quality issues</a> thread.</li>
<li>Removed the alpha channel from the images.</li>
<li>Filled holes in the masks</li>
<li>Splitted (using stratification) the <code>stage1_train</code> dataset into 90% train and 10% validation sets based on 5 image modalities provided by <a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/48130">Allen Goodman</a>.</li>
<li>Used an <a href="https://www.kaggle.com/voglinio/external-h-e-data-with-mask-annotations">external dataset</a> provided in the forum. Divided the images and the masks into 4 pieces due their large sizes. External dataset <a href="https://nucleisegmentationbenchmark.weebly.com/dataset.html">download links</a>.</li>
<li>Generated mosaics from train images based on <a href="https://www.kaggle.com/bonlime/train-test-image-mosaic">Emil's</a> notebook.</li>
</ul>
<h3>Model and Training</h3>
<ul>
<li>Modified <a href="https://github.com/matterport/Mask_RCNN">Matterport's</a> implementation of <a href="https://arxiv.org/abs/1703.06870">Mask-RCNN</a> deep neural network for object instance segmentation.</li>
<li>Tuned hyperparameters to detect small nuclei from the images. (I found <a href="https://engineering.matterport.com/splash-of-color-instance-segmentation-with-mask-r-cnn-and-tensorflow-7c761e238b46">this tutorial</a> very useful for understanding the model hyperparameters)
<ul><li>The original Matterport implementation was validating on only one image, so I fixed this <a href="https://github.com/matterport/Mask_RCNN/issues/89">validation issue</a>.</li>
<li>Reduced RPN (region proposal network) anchor sizes since the nuclei are mostly small.</li>
<li>Increased number of anchors to be used since the nuclei are small and can be found anywhere on an image.</li>
<li>Increased maximum number of predicted objects since an image can contain 300 or more nuclei.</li>
<li>Increased <code>POST_NMS_ROIS_TRAINING</code> to get more region proposals during training.</li>
<li>Added extra parameter <code>DETECTION_MASK_THRESHOLD</code> to model configuration. Default was hardcoded in the model as 0.5 but setting it to 0.35 helped in detection of small nuclei boundaries.</li>
<li>Resized images and masks to 512x512</li></ul></li>
<li><p>Relied heavily on image augmentation due to small training set:</p>
<ul><li>Random horizontal or vertical flips</li>
<li>Random 90 or -90 degrees rotation</li>
<li><a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/49692">Random rotations</a> in the range of (-15, 15) degrees</li>
<li><a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/49692">Random cropping</a> of bigger images and masks to 256x256x3.</li>
<li><a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/49692">Random scaling</a> of images and masks in the range (0.5, 2.0)</li></ul></li>
<li><p>Used Resnet101 architecture as a backbone encoder but initialized the first 50 layers of the model with pre-trained Resnet50 weights from <a href="https://github.com/fchollet/deep-learning-models/releases/">ImageNet competition</a>.</p></li>
<li>Trained the model with <a href="https://arxiv.org/abs/1412.6980">Adam</a> optimizer for 75 epochs:
<ul><li>25 epochs with learning rate 1e-4</li>
<li>25 epochs with learning rate 1e-5</li>
<li>25 epochs with learning rate 1e-6</li></ul></li>
<li>Did not retrain the model with stage1 test data during stage 2 as I was afraid of overfitting on black and white images.</li>
</ul>
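<p>As a rough illustration (not the exact tuned values), these tweaks map onto a Matterport-style config subclass roughly as follows; the import path assumes the repo is installed as the <code>mrcnn</code> package and the concrete numbers are assumptions:</p>
<pre><code>from mrcnn.config import Config

class NucleiConfig(Config):
    """Illustrative settings only - the concrete numbers here are assumptions."""
    NAME = "nuclei"
    IMAGE_MIN_DIM = 512
    IMAGE_MAX_DIM = 512
    RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128)   # smaller anchors for small nuclei
    RPN_TRAIN_ANCHORS_PER_IMAGE = 320          # more anchors per image
    TRAIN_ROIS_PER_IMAGE = 512
    POST_NMS_ROIS_TRAINING = 2000              # keep more region proposals during training
    MAX_GT_INSTANCES = 400                     # an image can contain 300+ nuclei
    DETECTION_MAX_INSTANCES = 400
    # extra parameter described in the write-up (the stock model hardcodes 0.5 in model.py)
    DETECTION_MASK_THRESHOLD = 0.35
</code></pre>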
<h3>Post-processing</h3>
<ul>
<li>Combined predictions on actual image and horizontally flipped image: took unions of masks with maximum overlap and removed false positive masks with small overlap.</li>
<li>Due to configured RPN anchor sizes, the model predicts small nuclei very well. However, it struggles at predicting large nuclei. Therefore, if a model predicts no masks for an image then I scale down the image and predict once again.</li>
<li>Removed overlaps between predicted nuclei based on their objectness score. In other words, removed intersections from the masks with lower scores (a rough sketch of this step follows the list).
<ul><li>If this intersection removal resulted in multiple objects in a mask, removed all the small pieces.</li></ul></li>
<li>Closed small holes inside the masks using morphological operations (dilation followed by erosion).</li>
</ul>
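<p>The overlap-removal and hole-closing steps can be sketched roughly as below; the helper name, arguments, and size threshold are hypothetical, not the exact submission code:</p>
<pre><code>import numpy as np
from skimage.morphology import binary_closing, disk, remove_small_objects

def resolve_overlaps(masks, scores, min_size=10):
    """Keep each contested pixel only in the highest-scoring mask (illustrative sketch)."""
    occupied = np.zeros(masks[0].shape, dtype=bool)
    cleaned = []
    for idx in np.argsort(scores)[::-1]:      # highest objectness score claims pixels first
        m = masks[idx] & ~occupied            # drop pixels already owned by a better mask
        m = binary_closing(m, disk(1))        # close small holes (dilation then erosion)
        m = m & ~occupied
        m = remove_small_objects(m, min_size=min_size)  # drop leftover small pieces
        occupied |= m
        if m.any():
            cleaned.append(m)
    return cleaned
</code></pre>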
<h2>Interesting findings</h2>
<ul>
<li>Mask-RCNN model overfits easily without image augmentation.</li>
<li>Removing false positive mask predictions improves the overall score significantly.</li>
<li>Since images are on different scales, predicting masks on scaled images helps with the model generalizability.</li>
<li>Dilating and then eroding individual masks helped me achieve slightly better result.</li>
<li>Matterport's original implementation was <a href="https://github.com/matterport/Mask_RCNN/issues/89">validating on only one image</a>. Fixing this issue made the training process reproducible.</li>
<li>I found that the model reaches a local minima faster when trained using Adam optimizer compared to default SGD optimizer.</li>
</ul>
<h2>Unsuccessful approaches tried</h2>
<ul>
<li>Trained the model with Dice coefficient loss instead of the default binary cross-entropy loss for the mask heads. Although it gave comparable results, it couldn't beat the prediction accuracy of binary cross-entropy on my validation set.</li>
<li>Trained with random Gaussian and Poisson (or shot) noise for image augmentation. It actually hurt overall model performance.</li>
<li>Tried ensembling actual image predictions with horizontal and vertical flip predictions. Used non-maximum suppression for removing overlaps. Did not improve prediction accuracy on the validation set.</li>
<li>Trained end-to-end without initializing with pre-trained ImageNet weights. Mostly got to Mean IoU score of 0.35 on stage1 test set.</li>
<li>Trained on preprocessed images with adaptive histogram equalization (CLAHE). The model performed way worse.</li>
</ul> |
2018 Data Science Bowl | Hybrid MaskRCNN + UNet solution LB 0.525 (including source code) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Architecture</strong></p>
<p>We used a combined MaskRCNN + Unet network that jointly predicted both instance and semantic segmentation masks. MaskRCNN was based off Matterport's implementation and it shared the convolutional feature backbone (resnet50 or resnet101) with the Unet encoder. We believe jointly training semantic and instance masks better conditioned the backbone network, although we have no strong proof. We used the semantic mask output for post-processing.</p>
<p><strong>Data normalization & augmentation</strong></p>
<p>We converted images to grayscale, normalized them to their max value (on a per-image basis), and applied threshold-based inversion. During training we used flips, 0/90/180/270-degree rotations, and Gaussian blurring.</p>
<p><strong>Computing resources</strong></p>
<p>We used roughly 4-6 x 1080 Ti. </p>
<p><strong>Ensembles / TTA</strong></p>
<p>Both ensembling different models and merging results from TTA followed the same approach: voting + dilation guided by semantic mask predictions. The steps taken were as follows:</p>
<ol>
<li>Consider the boundary box for each mask. If the overlap in boundary boxes between masks is > threshold, then merge the boxes (new boundary box encloses all overlapping boxes) and sum the masks.</li>
<li>Select masks from 1. based on a voting threshold. e.g. if more than 50% of cases designated a pixel as 1, then assign 1 to that pixel.</li>
<li>Average the semantic mask output and select pixels based on a voting threshold.</li>
<li>Dilate each mask within its boundary box limits using the semantic mask from 3. as a guide: if the dilated pixels fall within the boundary box and the semantic mask suggests that the pixel = 1, then assign 1 to that pixel.</li>
</ol>
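<p>A rough sketch of steps 2-4 for one merged instance (names and thresholds are assumptions, and the bounding-box constraint of step 4 is omitted for brevity):</p>
<pre><code>import numpy as np
from scipy.ndimage import binary_dilation

def vote_and_grow(mask_stack, semantic_stack, vote_frac=0.5, sem_frac=0.5):
    """mask_stack: (N, H, W) instance masks from N models/TTA runs,
    semantic_stack: (N, H, W) semantic foreground probabilities from the same runs."""
    voted = mask_stack.mean(axis=0) >= vote_frac          # step 2: pixel-level voting
    semantic = semantic_stack.mean(axis=0) >= sem_frac    # step 3: averaged semantic mask
    grown = binary_dilation(voted, iterations=2)          # step 4: dilate the voted mask ...
    return voted | (grown & semantic)                     # ... but only keep growth the semantic mask agrees with
</code></pre>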
<p><strong>Lessons learned</strong></p>
<p>Ciara:</p>
<ul>
<li>Teamed up earlier. The real restriction was time: too many ideas to test out within the limits of what we had (basically, it was too much to do in just 2 weeks)</li>
<li>Focused less on trying to optimise parameters (so many moving parts!) and spend that time on different post processing / ensembling approaches</li>
<li>Spend more time on generating external datasets</li>
</ul>
<p>Andres:</p>
<ul>
<li>Teamed up earlier to discuss and partition approaches: by the time we joined up, each member already had some implementation of the pipeline and we were already too focused on MaskRCNN. In retrospect, we should have devoted some time to dig deeper with Unets.</li>
<li>More data: Data is the fuel of deep learning. We should have spent more time getting more diverse data for training, either using existing synthetic datasets or generating new synthetic data. We tried a CycleGAN to generate pairs from one domain where we did have GT to a different domain where we didn't. While it looked promising, we ran out of time so we could not train with it:</li>
</ul>
<p><img src="http://i.imgur.com/vtgP8aL.jpg" alt="CycleGAN Images 1">
<img src="http://i.imgur.com/KWPwzA4.jpg" alt="CycleGAN Images 2"></p>
<p>(left images are originals, right images are images generated using CycleGAN)</p>
<ul>
<li>Systematic approach to augmentations. With so many gauges to dial, in some instances we added a few augmentations or changes at one and we were not sure which piece was contributing positively.</li>
</ul>
<p>Big thanks to all the organizers and participants! </p>
<p>Ciara, Ian, Javi and Andres</p>
<p>For the Unet head we only have pixel-wise segmentation at full scale (same as the input image): we just stacked all instance masks together into a single 512x512 map and had only one class (nuclei).</p>
<p>One related problem is that we tried (see <a href="https://github.com/ciararogerson/Kaggle_Shared/commit/5fd31827cd41437b462ef34af9508856843ddae8">https://github.com/ciararogerson/Kaggle_Shared/commit/5fd31827cd41437b462ef34af9508856843ddae8</a>) to give the Unet non-downsampled features and the image itself (image, preC1 => first 2x downsample, C2, etc.); however, the net didn't converge, so in essence the Unet we ended up using was upscaling 1:4 for predictions (not good).</p>
<p>Makes you wonder whether you can take the channels of the Unet close to the bottleneck (assuming they have the same number of dimensions at that point... or doing avg pooling to get there) and feed them to the RPN as well to help it propose regions (gradient backpropagation may be easier since at that point the graph is less deep).</p>
2018 Data Science Bowl | My Solution 11th place (LB 0.590) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Nicole. Thank you. See you in the next competition. ;)</p>
<p>(currently 8th place while leaderboard is being updated)</p>
<p>Hi all. It is my first time join this competition. All my methods are learned from open discussion. As return, I decide to share my solution to the community. Thanks a lot to those who have made many contributions to the community. </p>
<p>First of all, I thank Allen, John1231983, Keven Wang, Mehul Sampat, Heng CherKeng, CPMP, bsp2020, kamil, YaGana Sheriff-Hussaini, Yan Wang, Konstantin Maksimov, xiapidan, xuan, Waleed for your very impressive discussions. I learned a lot from those discussions. @Mehul Sampat hope we can team up together next time :) thanks again.</p>
<p>My score: Stage1 LB 0.489(87th), Stage2 LB 0.590(11th). I am using matterport mask-rcnn.</p>
<p><strong>Augmentations</strong></p>
<p>Since there are only hundreds of training images, we must find some useful augmentations to prevent our models from overfitting and make them generalizable. Here are some methods I tried that didn't work for me:</p>
<ul>
<li>add gaussian noise</li>
<li>color to gray</li>
<li>contrast and brightness</li>
<li>random crop 512x512 if image size is bigger than 512 otherwise resize the image to 512x512</li>
<li>mosaics</li>
<li>mosaics+random crop</li>
<li>mosaics+random crop+h&e</li>
<li>rotate 90 degrees </li>
<li>random rotate 90, 180, 270 degrees</li>
<li>rotate +-5 degrees on top of flip & 90 degree rotation</li>
<li>elastic transform</li>
</ul>
<p>I only use flip up&down&left&right</p>
<p><strong>Additional data</strong></p>
<p>No. I have tried to add h&e dataset to training data, but it didn't improve my performance. I am using <a href="https://github.com/lopuhin/kaggle-dsbowl-2018-dataset-fixes">https://github.com/lopuhin/kaggle-dsbowl-2018-dataset-fixes</a></p>
<p><strong>Ensembling</strong></p>
<p>No. I spent almost one week trying this method but it didn't work so well. I divided the training data into two categories: color & grey. I have seen someone get a high score (0.5+) in stage1 by using this method. I am very interested in this so I really hope that someone can share some solutions about this.</p>
<p><strong>Parameters</strong></p>
<p>train:</p>
<p>init_with = coco</p>
<p>RESNET_ARCHITECTURE = "resnet101"</p>
<p>MEAN_PIXEL = np.array([0., 0., 0.])</p>
<p>RPN_NMS_THRESHOLD = 0.7</p>
<p>DETECTION_MIN_CONFIDENCE = 0.7</p>
<p>DETECTION_NMS_THRESHOLD = 0.3</p>
<p>TRAIN_ROIS_PER_IMAGE = 600</p>
<p>RPN_TRAIN_ANCHORS_PER_IMAGE = 320</p>
<p>LEARNING_RATE = 1e-3</p>
<p>inference:</p>
<p>RPN_NMS_THRESHOLD = 0.6</p>
<p>DETECTION_NMS_THRESHOLD = 0.1</p>
<p><strong>Training</strong></p>
<p>Trained with lr=1e-3 for 20 epochs, chose the best point according to TensorBoard, then trained another 20 epochs with lr=1e-4 or 1e-5 (if the val loss stops decreasing, stop training).
Optimizer = Adam.</p>
<p><strong>Post processing</strong>
Use binary_dilation.</p>
<p><strong>Other method</strong>
general standardization.
img = (img - mean(img)) / std(img).</p>
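<p>A minimal sketch of these two steps, with illustrative function names:</p>
<pre><code>import numpy as np
from scipy.ndimage import binary_dilation

def standardize(img):
    # per-image standardization: img = (img - mean(img)) / std(img)
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)

def postprocess_mask(mask):
    # slightly grow each predicted instance mask with binary dilation
    return binary_dilation(mask.astype(bool))
</code></pre>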
<p><strong>Some methods might be useful</strong></p>
<p>1.Filter</p>
<p>> We use single mask rcnn, but with special post process to filter noises (i.e. small FPs) based on clustering and outlier detection of mask sizes, which boosted us a lot (kill lots of noises). Less FPs is the key to get high mAP as Heng said. I should thank him a lot</p>
<p>2.fill_holes</p>
<p><a href="https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.morphology.binary_fill_holes.html#scipy-ndimage-morphology-binary-fill-holes">https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.morphology.binary_fill_holes.html#scipy-ndimage-morphology-binary-fill-holes</a></p>
<p>3.wateshed</p>
<p><strong>Some interesting discussions&methods links</strong></p>
<ol>
<li><a href="https://github.com/matterport/Mask_RCNN/issues/230">https://github.com/matterport/Mask_RCNN/issues/230</a></li>
<li><a href="https://github.com/matterport/Mask_RCNN/issues/281">https://github.com/matterport/Mask_RCNN/issues/281</a></li>
<li><a href="https://github.com/killthekitten/kaggle-ds-bowl-2018-baseline/issues/5">https://github.com/killthekitten/kaggle-ds-bowl-2018-baseline/issues/5</a></li>
<li><a href="https://www.kaggle.com/bostjanm/overlapping-objects-separation-method/notebook">https://www.kaggle.com/bostjanm/overlapping-objects-separation-method/notebook</a></li>
<li><a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/52989#30758">https://www.kaggle.com/c/data-science-bowl-2018/discussion/52989#30758</a></li>
</ol>
<p><strong>Last</strong></p>
<p>I have tried a lot of experiments. Some of them i just don't remeber.....sorry :( So here is my email mdlszhengli@@gmail.com. I am very happy to discuss via email or wechat(removed).</p>
<p>Hi, Yueqi. Dilation is generally used to extend edges and fill some small holes. As Heng mentioned before, "Deep network is the weakest at the boundary image and strongest are the center." In this competition, it is very necessary to do some work on boundary detection. It actually improved my result. (2) I have not used SGD, so it is hard to tell which one is more suitable in this competition. As far as I know, Keven Wang and Panpan Zhou are using SGD, you can discuss with them. (3) batch_size=1. Thanks Zheng</p>
<p>Hi YUNFEI. 1. I am using 512*512.
2. Sorry there are some parameters I didn't mentioned above. When I was training, I set DETECTION_NMS_THRESHOLD = 0.3, DETECTION_MIN_CONFIDENCE = 0.7.
POST_NMS_ROIS_TRAINING = 2000, POST_NMS_ROIS_INFERENCE = 2000
3. general standardization can speed up the convergence of your algorithm.</p>
<p>@Mehul Sampat Hi Mehul. Thanks for your explanation. You are right. I only use flip.
Mask rcnn code is from here: <a href="https://github.com/killthekitten/kaggle-ds-bowl-2018-baseline">https://github.com/killthekitten/kaggle-ds-bowl-2018-baseline</a>
I am also very interested in other groups' solutions. I think training/validation data split is very important and it can tell us where is the best point( local optimum). Thanks Zheng</p>
<p>@Nicole Finnie
Hi Nicole. Thank you very much. :) Just left it 0. Here is my code. </p>
<pre><code>from imgaug import augmenters as iaa

# order=0 (nearest neighbour) keeps the augmented masks binary
AUGMENTATION_AFFINE = {
    "order": 0,
    "rotate": (-5, 5)}
config.AUGMENTATION_AFFINE = AUGMENTATION_AFFINE  # config is the existing Mask R-CNN config object

augmentations = []
if config.AUGMENTATION_AFFINE:
    augmentations.append(iaa.Affine(**config.AUGMENTATION_AFFINE))
seq = iaa.Sequential(augmentations)
# apply the same deterministic sequence to the image and its mask
seq_det = seq.to_deterministic()
image, mask = seq_det.augment_image(image), seq_det.augment_image(mask)
</code></pre>
<p>Hi steelrose. When I use general normalization, I guess this method doesn't work well together with other augmentation methods, so I just use flip. The 1st place solution uses many augmentation methods and it actually works very well. So always choose the best methods for your model. Good luck. Thanks Zheng</p>
<p>Hi. I only compared the model with pretrained coco and imagenet and i found that coco is better than imagenet in this competition.</p>
<p>Hi YL. Thank you very much.</p> |
2018 Data Science Bowl | Our solution, 4th place on the private LB | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We annotated a number of diverse external images, usually just a few images per type to validate how well model predicts the new types of images and add them to the training dataset</p>
<p>This is correct, I hoped the area would give another signal for the model to predict nuclei size, but the impact was negligible.</p>
<p>Source code for our solution is available at github: <a href="https://github.com/pdima/kaggle_2018_data_science_bowl_solution">https://github.com/pdima/kaggle_2018_data_science_bowl_solution</a></p>
<p>Hi Andy,</p>
<p>I predicted the per pixel area value as one of UNET output layers using mae as a loss, but the main reason for using area was to normalize the loss of vector predicted to nuclei center, otherwise the mistake of center vector prediction for large nuclei would be penalized much more comparing to small nuclei.</p>
<p>I have not seen significant improvements from making the model to predict the area alone.</p>
<p>Hi Liam,</p>
<p>I tried to use different thresholds using different watershed energy levels but overall it worked better to use predicted centers.</p>
<p>What is interesting, asking model to predict the vectors from the border like in the DWT paper improved the centers predictions. Allowing model to process such vectors further with a greater receptive field of the second unet, helped to improve predicted centers slightly more.</p>
<p>More detailed explanation of centers selection following post processing:</p>
<ol>
<li>For each peak in the centers prediction, I tried to find which radius circle has a sufficient sum of predicted values (this threshold as a function of radius was an important hyperparameter to tune). I then added the peak to the list of center seeds and cleared the circle, so it does not affect the calculation of the next centers.</li>
<li>Run watershed using the sum of centers predictions and watershed energy levels as an energy and the first energy level as a mask. Using the first energy level instead of actual mask here helped in a few cases to separate touching nuclei if center prediction failed for one of them.</li>
<li>As described above, I checked whether any large enough continuous mask in energy level 1 had no seed but a sufficient total sum of center predictions, and added its center of mass as a new seed.</li>
<li>Split some nuclei into two seeds using KNN if the total integral of the predicted center > ~15.0</li>
<li>Re run watershed using the actual predicted mask and all found center seeds</li>
</ol>
<p>This allowed to better detect nuclei separation, sometimes center prediction worked better and sometimes predicted watershed energy levels, this approach helped to combine both predictions.</p>
<p>The listed steps helped, but the improvement was not huge; I'm talking about local CV or public LB score changes from 0.565 to 0.570 or so.</p>
<p>My algorithm to select nuclei centers as seeds for watershed may have missed some nuclei of complex form or uncertain shape, even while the integral of the predicted center over the nucleus area was still close to the expected value of 9.0.</p>
<p>Such cases were easy to detect and fix by looking for connected areas of the mask (I used one of predicted energy levels) not included into resulting set of instances masks after the watershed transform.</p>
<p>In our team we have evaluated both UNet and Mask-RCNN based solution, but for us Unet worked significantly better so we used Unet based model for submission.</p>
<p>The UNet based solution is inspired by the Deep Watershed Transform paper: <a href="https://arxiv.org/pdf/1611.08303.pdf">https://arxiv.org/pdf/1611.08303.pdf</a></p>
<p>For each pixels, we predicted the x,y components of vector pointing from the instance border like described in the DWT paper and predicted the mask, watershed levels and nuclei centers using the second connected UNet. </p>
<p>Attached the diagram with our model</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/317711/9258/Vector%20unet.png" alt="Model description"></p>
<p>Overall Unet predict the mask pretty well but it was necessary to find a way to reliably segment nuclei.</p>
<h1>Approaches we tried</h1>
<p>The first approach was to predict using the single UNet model:</p>
<ul>
<li>Mask, BCE + DICE loss</li>
<li>Nuclei centers with 3x3 patches around the center of mass as a training label, BCE loss</li>
<li>Area of nuclei used to normalize loss from vectors for large and small nuclei</li>
<li>X,Y of vector to the center of nuclei, MSE loss normalized by nuclei area</li>
</ul>
<p>For touching nuclei the vector value to the centers changes sign, so it changes sharply and the loss is the biggest on the nuclei border which forces model to learn to separate instances. The postprocessing was quite straightforward:</p>
<ol>
<li>Find the centers of nuclei using the predicted centers output, expecting the area of each prediction to be approx 9.0 (matching the area of the 3x3 training patch)</li>
<li>For each pixel in the predicted mask, assign it to the cluster nearest to the position its predicted center vector points to.</li>
</ol>
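<p>A simplified sketch of this post-processing, with hypothetical names and thresholds, assigning each mask pixel to the center its predicted vector points at:</p>
<pre><code>import numpy as np
from scipy.ndimage import label, center_of_mass

def assign_pixels_to_centers(mask, center_pred, offset_yx, center_thresh=0.5):
    """mask: (H, W) bool union of nuclei, center_pred: (H, W) center heatmap,
    offset_yx: (2, H, W) predicted vector from each pixel to its nucleus center."""
    seeds, n = label(center_pred > center_thresh)      # blobs of ~area 9 become seeds
    instances = np.zeros(mask.shape, dtype=np.int32)
    if n == 0:
        return instances
    centers = np.array(center_of_mass(center_pred, seeds, np.arange(1, n + 1)))  # (n, 2)
    ys, xs = np.nonzero(mask)
    # position each mask pixel claims its center is at, according to the predicted vector
    pointed = np.stack([ys + offset_yx[0, ys, xs], xs + offset_yx[1, ys, xs]], axis=1)
    dists = np.linalg.norm(pointed[:, None, :] - centers[None, :, :], axis=2)
    instances[ys, xs] = dists.argmin(axis=1) + 1        # nearest center wins the pixel
    return instances
</code></pre>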
<p>This approach worked and scored over 0.5 on the public leaderboard.</p>
<h2>Improvement:</h2>
<p>The second approach was to try the deep watershed transform idea to predict watershed energy levels (mask eroded by different offsets) by predicting intermediate unit vector fields pointing from the nearest border pixel. Instead of training 3 independent models for segmentation, vector field and watershed energy predictions, I used a single UNet to predict everything. As the seeds we used the continuous areas after applying a threshold over the sum of energy levels, in a similar way to the DWT paper. The result was slightly better.</p>
<h2>Improvement:</h2>
<p>Use the predicted centers as seeds for watershed transform instead of the energy level with threshold. The score improvement was more significant comparing to switching to DWT.</p>
<h2>Improvement:</h2>
<p>Predict only vector fields with the first UNet, concatenated predicted fields with the last layer used to predictions of all the other fields using another UNet. This has little to no impact on predicted masks and the energy levels but helped to significantly improve the quality of predicted nuclei centers. </p>
<h2>Improvement:</h2>
<p>Simple mean ensembling of 8 TTA flips/rotations had very small but consistent improvement.</p>
<h2>Improvement:</h2>
<p>Better postprocessing. The idea: since for predicting the center values we are using a 3x3 patch of area 9, we can expect the total sum of each predicted center to be close to 9 as well, even for cases of complex connected nuclei that are hard to predict.
This allowed the following post-processing improvements:</p>
<ul>
<li>After watershed transform from detected centers, check for missing large masks with the total center prediction > ~5.0, and add them.</li>
<li>Instances with a total center value integral > 9.0 * 1.5 most likely have two connected nuclei whose centers are very hard to predict. Split the centers into two clusters using KNN and re-run watershed.</li>
<li>We tried a similar approach to the topcoder team's: estimating IoU using information about the shape and other predicted instance properties, and running an optimiser on the score to decide whether a particular instance should be included in the submission or not, but the quality of the IoU prediction was not sufficient for this to work reliably. Instead we calculated IoU between masks of the ensemble and the individual TTA variants. If the median IoU is low, it means different models predicted different shapes and it's better to exclude such a mask from the submission to avoid a penalty at high IoU thresholds (a sketch follows this list).</li>
</ul>
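<p>The consistency filter in the last point can be sketched like this; the function name and threshold are assumptions:</p>
<pre><code>import numpy as np

def iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def keep_consistent_masks(ensemble_masks, tta_mask_sets, min_median_iou=0.5):
    """ensemble_masks: masks from the averaged ensemble,
    tta_mask_sets: list of mask lists, one list per TTA variant."""
    kept = []
    for m in ensemble_masks:
        # best-matching mask in each TTA variant
        ious = [max((iou(m, t) for t in variant), default=0.0) for variant in tta_mask_sets]
        if np.median(ious) >= min_median_iou:    # variants agree on the shape -> keep it
            kept.append(m)
    return kept
</code></pre>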
<h2>Extra data used</h2>
<p>We added extra annotated datasets listed in the forum thread and annotated some images, mostly color histology images. We also used synthetic data to generate more cases of touching and overlapped nuclei.</p>
<h1>Final model details</h1>
<p>Attached an example of the early model predictions, including predicted vectors, centers and energy levels.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/317711/9259/nuclei_descr.png" alt="Example prediction"></p>
<p>We tried to use the imagenet pretrained models as UNet encoders but the result was the same or worse comparing to UNet trained from scratch.</p>
<p>For UNet encoder blocks we used Conv2d - BN - ReLU - Conv2d - ReLU; for decoder blocks: Upsample/concatenate - Conv2d - ReLU.</p>
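<p>In Keras-style pseudocode such blocks look roughly like this (the kernel size and other details are assumptions):</p>
<pre><code>from keras import layers

def encoder_block(x, filters):
    # Conv2d - BN - ReLU - Conv2d - ReLU
    x = layers.Conv2D(filters, 3, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.Conv2D(filters, 3, padding='same')(x)
    return layers.Activation('relu')(x)

def decoder_block(x, skip, filters):
    # Upsample/concatenate - Conv2d - ReLU
    x = layers.UpSampling2D()(x)
    x = layers.concatenate([x, skip])
    x = layers.Conv2D(filters, 3, padding='same')(x)
    return layers.Activation('relu')(x)
</code></pre>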
<p>We used quite a large number of filters (64/128) even on the high resolution levels as model had to predict many outputs but increased number of filters less than twice as we did not have as much complex semantic informations as with models trained on imagenet.</p>
<p>Unet1 was 6 levels deep (decreased the resolution 2x 6 times) while Unet2 was 4 levels deep. </p>
<p>As input we used B/W image or H channel of HED stain decomposition. H worked slightly better for histology images but slightly worse for some other stains. For the final submission we ensembled 4 models trained on B/W images and 2 on H channel.</p>
<p>We trained on 256x256 patches with significant level of augmentations and predicted on 1024 pixels tiles with 128 pix overlap on each side. We used the “SAME” padding and relied on the large tiles padding and overlap to avoid corner effects.</p>
<p>At the end of the competition we tried to train a model predicting only the data used for post-processing, without vector fields, and the result was significantly worse. Significantly reducing the loss weight of the vectors to the center of nuclei had little impact on the results, so most likely the vectors from borders are more important. Even while not directly used, they helped to better predict centers and watershed energy levels.</p>
<p><a href="https://github.com/pdima/kaggle_2018_data_science_bowl_solution">Solution source code</a></p>
<p>This is a very interesting idea, and a very significant boost to the result. It would be very interesting to read about it in more detail; do you plan to publish your approach (or maybe I have missed it)?</p>
<p>It would also be interesting to see whether the random walker approach can be combined with the information about the estimated energy levels; it may work better than either one separately.</p>
<p>[edited] I missed this bit in your team result description, certainly very interesting approach.</p>
<p>The model itself is written in keras, but to prepare vector labels I used scipy.ndimage.morphology</p>
<p>scipy.ndimage.morphology.distance_transform_edt() is very useful to find distance and direction to the border as it can return the index of the nearest pixel outside of mask:</p>
<pre><code>import numpy as np
import scipy.ndimage

# crop_smooth: binary mask crop, res_crop: output tensor being filled
rows, cols = crop_smooth.shape
# distance to the nearest background pixel plus the indices of that pixel
edt, inds = scipy.ndimage.morphology.distance_transform_edt(
    crop_smooth, return_distances=True, return_indices=True)
# vector from the nearest border pixel to the current pixel, normalized to unit length
border_vector = np.array([
    np.expand_dims(np.arange(0, rows), axis=1) - inds[0],
    np.expand_dims(np.arange(0, cols), axis=0) - inds[1]])
border_vector_norm = border_vector / (np.linalg.norm(border_vector, axis=0, keepdims=True) + 1e-5)
res_crop[:, :, 0] = border_vector_norm[0]
res_crop[:, :, 1] = border_vector_norm[1]
</code></pre>
<p>Vector to the center is easier to calculate, it's a difference of the pixel position and the mask center of mass:</p>
<pre><code># crop: binary mask of a single nucleus, CROP_SIZE: side length of the square crop
center_of_mass = scipy.ndimage.measurements.center_of_mass(crop)
current_offset_field = np.zeros((CROP_SIZE, CROP_SIZE, 2))
# per-pixel vector pointing from the pixel to the nucleus center of mass
current_offset_field[:, :, 0] = np.expand_dims(center_of_mass[0] - np.arange(0, CROP_SIZE), axis=1)
current_offset_field[:, :, 1] = np.expand_dims(center_of_mass[1] - np.arange(0, CROP_SIZE), axis=0)
</code></pre>
2018 Data Science Bowl | Team 42’s solution and food for thought [LB: 0.623 late submission] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We added padding to the input, and then cropped the output accordingly to get the final prediction. </p>
<h2>First Kaggle competition, and first Kaggle sharing</h2>
<p>First of all, congrats to the DSB 2018 winners! This is our first ever Kaggle competition, and we indeed learned so much through out the whole event. Although our result is not among the best, but we are satisfied with our top 1.5% finish as the first Kaggle attempt. (Team name: 42) Would like to conclude this journey with the sharing of our learning in hope that it might be useful for future novice contestants like us. We documented our journey in a sequential development log style. [Github Code] (<a href="https://github.com/samuelschen/DSB2018">https://github.com/samuelschen/DSB2018</a>) </p>
<p>In this competition, we struggled on lack of validation criteria during the stage 2, and ended up submitting worse results than we should. A few days after the competition deadline, we were able to achieve <strong>0.623</strong> LB score using the same model with the exception of changing to pre-trained ResNet and fixing our broken TTA codes. I honestly don't think we would have chosen these best results even if we had it before stage 2 ended. So our question is: how did the leading teams choose which 2 results to submit during stage 2 without knowing the real final test2 dataset? Would love to hear everyone's opinion.</p>
<h2>Motivation</h2>
<p>After finishing several machine learning online courses and trying out past Kaggle competitions, we are excited about the promise of deep learning. We later decided to participate DSB 2018 in order to benchmark our own capability in machine learning. It led us to participate Data Science Bowl 2018 as our 1st Kaggle competiton.</p>
<h2>The first couple weeks, skeleton and data pipeline</h2>
<p>It's emphasized that 'generalizability' is the key goal of this <a href="https://www.kaggle.com/c/data-science-bowl-2018/">competition</a>. Given the small amount of training set and its diversity, we decided that we'll need lots of data augmentation in data pipeline. Furthermore, easy debugging and integration with existing Python modules would also be important. PyTorch sounds to us a better choice than TensorFlow for this matter; we decided to use DSB 2018 as our first PyTorch experiment.</p>
<p>Both <a href="https://arxiv.org/abs/1703.06870">MaskRCNN</a> and <a href="https://arxiv.org/abs/1505.04597">UNet</a> appear to be reasonable approaches for the segmentation problem. In favor of its simplicity, we picked <strong>UNet</strong>, built our own code from scratch, and experimented with data augmentation (random crop, flip, rotate, resize, color jitter, elastic distortion, color invert, clahe, and gaussian noise). We believed our heavy data augmentation could be helpful in general, but we also learned that some effects did not yield the desired improvements, e.g. color invert, clahe, and gaussian noise.</p>
<p>This vanilla UNet produced Public LB score of <strong>0.34</strong>, trained without external dataset, used <strong>Binary Cross-Entropy (BCE)</strong> loss function. Then loss function was changed to <strong>Jaccard/IoU + BCE</strong>, it helped to improve Public LB score to <strong>0.36</strong>.</p>
<h2>One month on model exploration after the first two weeks</h2>
<p>Reviewed the output visualization of our vanilla UNet, single binary output obviously not very effective against separating the overlapping nuclei. We started to look for methods to improve vanilla UNet model.</p>
<p>The first idea was transfer learning. A well pre-trained model should mitigate data quantity issue and provide richer low level features as first part of UNet model. We tried VGG16, but unfortunately, no improvement was seen at that time. (note: we revisited transfer learning after competition with the adoption of ResNet)</p>
<p>In early experiments with computer vision morphology algorithms (watershed with peak local max), the score reached Public LB <strong>0.41</strong>. This confirmed that the competition was indeed an instance segmentation problem.</p>
<p>We found the model performed poorly on visualized predictions of histology images, so besides model improvements, external datasets were added to training. At this point the Public LB score reached <strong>0.44</strong>.</p>
<p>Inspired by <a href="https://arxiv.org/abs/1604.02677">DCAN</a>, we revised the UNet model to be <strong>multitasked</strong>. One head is for semantic prediction, another head is for contour prediction. However, a naive (semantic - contour) as instance segmentation prediction is not good, so we instead used (semantic - contour) as the marker of <a href="http://scikit-image.org/docs/dev/auto_examples/segmentation/plot_watershed.html">watershed algorithm</a>, and it brought the Public LB score to <strong>0.45</strong></p>
<p>Revisiting the UNet paper, we felt that using a 'weight map' to force the network to learn the border pixels could be helpful. We then implemented a weight map which emphasizes borders & centroids (especially for small cells), and it brought us to Public LB <strong>0.47</strong>.</p>
<p><img src="https://raw.githubusercontent.com/samuelschen/DSB2018/master/docs/weight_map.jpg" alt="weighted map"></p>
<p>Comparing the partitioning results of watershed and <a href="http://scikit-image.org/docs/dev/auto_examples/segmentation/plot_random_walker_segmentation.html">random walker</a> for touching objects, we felt that the result of "random walker" was more natural from our perspective. Changed the post-processing to "random walker", it improved Public LB score to <strong>0.49</strong>.</p>
<p>Reviewed aforementioned markers (semantic - contour) visually, there are always some poor performing cases and it's hard to design rules for them. Therefore, we decided to let the model learn and predict the markers by itself. The third head was added to predict markers, whose ground truths are derived from shrunk cell masks. We used ('markers head' - 'contour head') as the final markers for random walker to partition the 'semantic head'.</p>
<p><img src="https://raw.githubusercontent.com/samuelschen/DSB2018/master/docs/model_output.jpg" alt="3head model"></p>
<p>In order to address the data imbalance issue, we also implemented <a href="https://arxiv.org/abs/1708.02002">Focal loss</a> (sketched below) in addition to the aforementioned Jaccard/IoU loss function and weight map mechanism. It further improved our Public LB score to <strong>0.50</strong>.</p>
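<p>For reference, a standard binary focal loss in PyTorch looks roughly like this (a generic formulation, not necessarily the exact variant used here):</p>
<pre><code>import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # standard binary focal loss; gamma down-weights easy, well-classified pixels
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p_t = torch.exp(-bce)                    # probability assigned to the true class
    return (alpha * (1.0 - p_t) ** gamma * bce).mean()
</code></pre>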
<p>By adding synthesized images of touching/overlapping cells to training set, the Public LB improved to <strong>0.52</strong>, ranked #22 at that moment.</p>
<h2>Struggled in last month or so</h2>
<p>We analyzed error cases on the stage 1 test data; one of the major issues was large cells (scaling), the other was a lack of similar images in the Kaggle training dataset.</p>
<p>To address scaling issue, we tried to expand the receptive field of UNet's encoder part with "dilated convolution." We found dilated and non-dilated encoder seems to be complement each other well, and we then ensembled these two models by averaging their pixel-wise prediction of three heads.</p>
<p>Back and forth, we spent lots of effort collecting and validating external datasets to be included. With data manipulation, it brought the score to Public LB <strong>0.534</strong>, but it's tedious and required some biology knowledge which we didn't have. :-(</p>
<p>We also experimented with Test Time Augmentation (TTA) using horizontal flip, vertical flip, and horizontal plus vertical flip. However, an implementation error in the reflection padding & flip pipeline misled us into dropping TTA before the end of the competition (Ouch! we revisited and fixed TTA after the competition).</p>
<p><img src="https://raw.githubusercontent.com/samuelschen/DSB2018/master/docs/data_variation.jpg" alt="data variation"></p>
<h2>Stage 2 Test Set Released - PANIC mode</h2>
<p>Surprised by the huge differences between the stage 1 and stage 2 test datasets, we panicked! Several mistakes were made during the final few days, unfortunately. We added additional BBBC018, BBBC020, and stage 1 test data to further train the models. That was the 1st mistake: putting everything in training left no objective local cross validation to judge overfitting, so we relied on human evaluation of visualized predictions. (note: see how we learned after competition)</p>
<p>We thought that the longer our model was trained, the better the score would be, based on the learning curve observed in stage 1 (:facepalm: another mistake mentioned in post-competition), yet it turned out that we chose the worse ones for the final submission. Our major struggle in the final week was that we had no validation set similar to the test set (Andrew Ng had a great talk about <a href="https://www.youtube.com/watch?v=F1ka6a13S9I">Nuts and Bolts of Applying Deep Learning</a> in this regard), so we relied solely on humans to evaluate the visualized predictions (and many of them were poisoned on purpose!).</p>
<h2>Post-Competition Submissions and Reflections</h2>
<p>Reading some nice writeups of top winning UNet-based solutions when competition ended (hats off to all winners for sharing!) Indeed, we had many technical approaches in common, yet a few practices we did not do right or well. For example:</p>
<ul>
<li>Data Augmentation: channel shuffle and rgb/gray color space transformation (it didn't show improvement in our experiments somehow)</li>
<li>Implement TTA correctly, which turned out to help the score a lot (Private LB <strong>0.569</strong> -> <strong>0.580</strong>)</li>
<li>Transfer learning with a deep neural network (Resnet), which sped up the training and improved accuracy.</li>
<li>Try shared decoder and separated decoders for multitasks, and ensemble them for final prediction.</li>
</ul>
<p>Last but not least, we used stage 1 test set as validation set (totally isolated from training set), and saw the high variance of Private LB score (<strong>0.56</strong> ~ <strong>0.623</strong>) along the training epochs.</p>
<p><img src="https://raw.githubusercontent.com/samuelschen/DSB2018/master/docs/overfit-stage-2.jpg" alt="stage 2 learning curve"></p>
<p>The figure suggests that the data distribution of the stage 1 test set is likely very different from the stage 2 test set; it's somewhat like a lottery here, since we can't build a reasonable local validation set for the stage 2 test set (with tons of deliberately poisoned images). The fluctuation might also hint that the number of images used in stage 2 scoring is relatively small.</p>
<p>Unfortunately, this remains to be a puzzle for us. We felt a 'generalized' model should perform well on both stage 1 test set and stage 2 test set. Without an objective validation criteria, we don't believe that we could have chosen our best <strong>0.623</strong> result to be our final submission.</p>
<p>Love to hear from experienced Kagglers for any suggestions/practices in this kind of 2-stage competition! Good luck to everyone for future Kaggle competitions.</p>
<h2>Reference</h2>
<p>If you are interested in our code -
- <a href="https://github.com/samuelschen/DSB2018">Github Code</a></p> |
2018 Data Science Bowl | UNet Nuke Solution (#68 LB) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It is only active during training.</p>
<p>Thanks. Sure, now that I've tasted CNNs I'll be back in another image competition soon!</p>
<p>Thanks. Indeed, I used a decay near the borders of my crops; it helped but did not cure the problem entirely.</p>
<p>Thanks for sharing Nicole. Next time we'll team earlier for sure!</p>
<p>Some people asked for our solution and code, which surprises me a bit given there are so many people in front of us, some with way better scores than us. Watch for their write up as you'll learn more than from this one.</p>
<p>First of all, let me thank my team mates, al.exe, Sven Hafeneger, Nicole Finnie, and Liam Finnie. I enjoyed the last 3 or 4 days of the competition when we tried to combine our approaches like crazy. I also thank those who shared so much, Heng Cher Keng, Allen Goodman, Anne Carpenter, Kjetil Åmdal-Sævik who kickstarted us with his kernel, and many others. Sorry I can't name you all. </p>
<p>Last but not least, I thank Kaggle, Booz Allen Hamilton, and the organizers for setting this competition. Sure, there was some grumbling about stage 2 rules, but all in all the competition went pretty well. </p>
<p>I started this competition to learn about deep learning and images. After reading few papers it was clear that there were 2 avenues for this: UNet or Mask-rcnn. I selected the former because it was way simpler, and also because Kjetil shared an amazing starter kernel. My final submission is a modification of his kernel. </p>
<p>After 3 weeks or so I was in top 50 on the public LB with a 0.457 score, to my surprise. I interrupted my work on this to enter another competition (Toxic comment) and had a bad experience there as I got a gold medal then was removed because a team mate cheated. This is probably why I bugged many here with my post on 'cheating'. Net result however is that I lost motivation and did not resume work on DSB till I met some colleagues 4 days before competition end. They also had entered DSB and had worked on train time augmentation quite a bit. We decided to team, and spent 3 days cross pollinating each other.</p>
<p>Let me now describe what I did, Nicole Finnie, one of my team mates, described their side of the house <a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/54742#315594">in the comments below</a> . </p>
<p><strong>Preprocessing.</strong></p>
<p>Intensity is rescaled (min to 0 and max to 255) and all images are converted to grayscale. Then I used Otsu thresholding, and inverted the intensity of images where the majority of pixels were above the threshold. The goal was to get images as similar as possible. I also did not use any clustering, thinking that training a single model would lead to better generalization. I now think that clustering with a small number of clusters, as my colleagues were doing, is better.</p>
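<p>A minimal sketch of this preprocessing, assuming an RGB input image (the function name and epsilon are illustrative):</p>
<pre><code>import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu

def preprocess(img):
    # rescale intensity to [0, 255], convert to gray, invert bright-background images
    gray = rgb2gray(img).astype(np.float32)
    gray = 255.0 * (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)
    if (gray > threshold_otsu(gray)).mean() > 0.5:   # majority above threshold -> invert
        gray = 255.0 - gray
    return gray
</code></pre>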
<p>Next step in preprocessing is to feed the model. I basically resized images to fit the CNN input size (256x256). This works quite fine except for images with a large aspect ratio, as it distorts the shape of nuclei. I decided to split images in 2 overlapping pieces that are as close as possible to squares. A square image yields 2 copies of it, and a very thin rectangle leads to two, less thin, rectangles. This led to less distortion.</p>
<p>Issue with splitting images was that the boundary of predictions for the overlapping part was a bit wrong. I think it is because I used padding='same' instead of something more in line with the original UNet paper.</p>
<p>I briefly tried using 256x256 crops of the images as an overlapping tiling, but it did not work as well. I must have been doing something wrong here because it worked fine for my team mates.</p>
<p><strong>Model</strong></p>
<p>It is very close to Kjetil's model. I only modified the output layer and the loss function, and I added a Speckle noise layer on the input in some runs, thanks to a post from Heng. Here is the code for a Speckle noise layer. It is a multiplicative Gaussian noise. I merely modified the code of the built-in Keras additive Gaussian noise.</p>
<pre><code>from keras.engine import Layer
from keras import backend as K

class SpeckleNoise(Layer):
    def __init__(self, stddev, **kwargs):
        super(SpeckleNoise, self).__init__(**kwargs)
        self.supports_masking = True
        self.stddev = stddev

    def call(self, inputs, training=None):
        def noised():
            return K.clip(inputs * K.random_normal(shape=K.shape(inputs),
                                                   mean=1.,
                                                   stddev=self.stddev), 0.0, 1.0)
        return K.in_train_phase(noised, inputs, training=training)

    def get_config(self):
        config = {'stddev': self.stddev}
        base_config = super(SpeckleNoise, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def compute_output_shape(self, input_shape):
        return input_shape
</code></pre>
<p>I modified the output layer to predict 3 channels with sigmoid activation. The first channel predicts the common boundary between adjacent nuclei, the second channel predicts the union of nuclei, and the third channel predicts background. The second channel is the same as in the original UNet model. In order to construct the ground truth for training, all masks are expanded by one pixel, and the union of the pairwise intersections of these expanded masks is my first channel target. The second channel is the union of all masks, and the last one is the complement of the other two. It is important to notice that the first two channels are not mutually exclusive: a pixel can be both on a common boundary and in a mask.</p>
<p>The loss function is a pixelwise weighted loss function. I used weights of 30, 3 and 1 for each channel. Weights were selected via cross validation.</p>
<pre><code>import tensorflow as tf

def pixelwise_crossentropy(target, output):
    _epsilon = 10e-8
    output = tf.clip_by_value(output, _epsilon, 1. - _epsilon)
    weight = 30 * target[:, :, :, 0:1] + 3 * target[:, :, :, 1:2] + 1 * target[:, :, :, 2:3]
    return - tf.reduce_sum(target * weight * tf.log(output) +
                           (1 - target) * tf.log(1 - output),
                           len(output.get_shape()) - 1)
</code></pre>
<p>Given the large weights I sometimes got exploding gradients (loss function grows to unreasonable levels). The way I fixed it was to clip gradient norm in the Adam optimizer. Maybe a better way would have to use SGD with proper settings, but I did not have time to try it. Adam is just working out of the box reasonable well.</p>
<p><strong>Training</strong></p>
<p>I used 6 fold cross validation, with a limit of 20 epochs, and early stopping with a patience of 5. Each of the 6 fold model is used to make predictions on the test data. These predictions are averaged before post processing test images. I shuffled images but made sure that all images (or crops) coming from one original image were in the same fold in order to avoid overfitting. Training time augmentation was limited to flipping and 90° rotations.</p>
<p><strong>Postprocessing</strong></p>
<p>First, images or crops coming from the same original images are merged, and rescaled to the original image size. Then the problem is how to segment the output as usual with UNet.</p>
<p>I started with a watershed, like many, and explored different ways to seed it with good markers. Best results were obtained by defining markers as areas with a high probability to be in a mask (Second channel probability over 0.75), and far from being a common boundary (probability of first channel below 0.6). That use of the common boundary prediction was key in separating adjacent nuclei. Then watershed was expanded into the region where background probability was low (third channel probability below 0.4).</p>
<p>All thresholds were selected by cross validation.</p>
<p>Looking at results, I noticed that in some cases watershed was producing disconnected masks. I therefore tried another way to expand markers: random walker in scikit-image. And this was really better than watershed. When my team mates switched from watershed to random walker they also noticed a significant improvement. In some cases random walker was triggering an error. In such case I used watershed.</p>
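<p>Putting the seeding and expansion together, a rough sketch (channel order as described above: border, mask, background; names are assumptions):</p>
<pre><code>import numpy as np
from scipy.ndimage import label
from skimage.segmentation import random_walker, watershed

def segment(pred, t_mask=0.75, t_border=0.6, t_bg=0.4):
    """pred: (H, W, 3) output - [common border, mask, background] probabilities."""
    seeds = (pred[..., 1] > t_mask) & (pred[..., 0] < t_border)   # confident, non-border pixels
    region = pred[..., 2] < t_bg                                  # area the markers may grow into
    markers, _ = label(seeds)
    rw_markers = markers.copy()
    rw_markers[~region] = -1                 # random_walker ignores pixels marked -1
    try:
        labels = random_walker(pred[..., 1], rw_markers)
        labels[labels < 0] = 0
    except Exception:                        # random_walker occasionally errors -> watershed fallback
        labels = watershed(-pred[..., 1], markers, mask=region)
    return labels
</code></pre>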
<p>Finally, I tried some ensembling. I averaged the NN output over several variants (with or without noise, with various gradient clipping) then post processed it as before. This yields a CV improvement but didn't improve the Private LB score to my surprise.</p>
<p><strong>Things I didn't have time to try</strong></p>
<p>Pretrained weights. I wanted to reuse pretrained weights as the NN I use basically contains a VGG-11 model. Reusing pretrained weights and making these layers not learnable would help fight overfitting. I wanted to reuse <a href="https://github.com/ternaus/TernausNet">TernausNet</a>, but it was in Pytorch, and I did not have time to convert it to TF or Keras.</p>
<p>Mosaic. Many images were obtained by splitting larger images in 4. Working with the original images would help predict the nuclei that are split among two or more split images. Fortunately, my team mates implemented it after we merged, and this proved to be very useful</p>
<p>More training time augmentation. Color shift, blur, were on my plate. Fortunately, my team mates did a lot in that area.</p>
<p>Combine with Mask Rcnn. Thanks to Heng I tried using his output as markers for my postprocessing. It improved LB score by nearly 0.001. I wanted to include a mask rcnn output as input to my modified UNet, but did not have time.</p>
<p><strong>Things I did not think of</strong></p>
<p>Shrinking masks so that there are no adjacent masks, then expanding the predicted nuclei. This is a brilliant idea shared by Ildoo Kim. It makes UNet postprocessing way easier.</p>
<p>Test time augmentation (ensembling). I did not find a way to merge my team mates predictions with mine, and I am eager to know what others did in that area.</p>
<p>I'm sure Ill get many more from top performers write ups!</p>
<p><strong>Take away</strong></p>
<p>I learned a lot, and I now see there is a lot I have to learn still. One thing I had a hunch for, and that was confirmed here, is the importance of pre and post processing. Many proponents of deep learning say that there is no need to preprocess data. Well, in my case, stage1 score went from 0.263 to 0.457 via pre and post processing alone.</p>
<p>Hope the above is of interest to some.</p>
<p>I shared the code that led to my best stage 1 submission <a href="https://github.com/jfpuget/DSB_2018">on github</a>. My team mate shared the final submission code on <a href="https://github.com/nicolefinnie/kaggle-dsb2018">github</a> too.</p>
<p>We shared our code, links at the end of main post above.</p>
<p>Because I want to find common boundaries, i.e. overlapping areas coming from at least two different masks.</p>
<p>Hi, this is the code form my team mates, I'll let them have a look.</p> |
2018 Data Science Bowl | [ods.ai] topcoders, 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Global avergae pooling will produce different features. Even though it will not break anything that may degrade performance.</p>
<p>For that models, no. Because fully convolutional networks almost do not depend on the input size, unless there are some blocks like SE/SCSE</p>
<ol>
<li>yes, it is an ensemble of different models, each with 4 folds CV</li>
<li>we did not try it on private, but on public the score was quite high as well</li>
<li>it will take ~100 hours on a single 1080 Ti</li>
<li>don't remember exactly, some hours</li>
</ol>
<p>Yes, targets == ground truth masks as @DStjhb already described.
In general, Fully Convolutional Networks can predict and learn as many output channels as you wish. You just need to change the number of filters in the last convolution appropriately. </p>
<ol>
<li>Our networks are not plain UNets. We use pretrained and very deep encoders. That makes a huge difference when you don't have enough data, which is clearly the case. Watershed in that case is quite simple: you just use two thresholds, high for seeds, low for masks (something like 0.6 and 0.3 for a binary mask; a small sketch follows this list). Updated the description as well to make it more specific.</li>
<li>Cannot say anything about Mask-RCNN, have not tried it in this competition. But it is designed in such a way that you don't need borders to separate instances. Though you can read the MaskLab paper <a href="https://arxiv.org/abs/1712.04837">https://arxiv.org/abs/1712.04837</a> where they added a third branch to predict directions.</li>
</ol>
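<p>The two-threshold watershed mentioned above can be sketched as follows (names are assumptions):</p>
<pre><code>from scipy.ndimage import label
from skimage.segmentation import watershed

def two_threshold_watershed(prob, seed_thr=0.6, mask_thr=0.3):
    seeds, _ = label(prob > seed_thr)    # confident cores become seeds
    mask = prob > mask_thr               # looser threshold defines the full extent
    return watershed(-prob, seeds, mask=mask)
</code></pre>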
<p>Images are visible in Chrome/Firefox, though in mobile Safari I don't see them. You can open the image via the link <a href="https://www.dropbox.com/s/4igam47pqg0i82q/c43e356beedae15fec60ae3f8b06ea8e9036081951deb7e44f481b15b3acfc37_predict.png?dl=1">https://www.dropbox.com/s/4igam47pqg0i82q/c43e356beedae15fec60ae3f8b06ea8e9036081951deb7e44f481b15b3acfc37_predict.png?dl=1</a></p>
<p>Seems that TNBC dataset negatively affected performance. </p>
<p>Inverting - predictions for neuron tissue were really bad, looks like the models predicted inverted images. </p>
<p>It is quite hard to say how much exactly we gained from augmentations. But reducing them constantly led to better local validation/CV scores and much worse public scores.</p>
<p>Thanks! Good point! Added training description and TTA to the post.
During training random scale was in [0.55, 1.45]. Whenever we reduced augmentations we got better validation results and worse score on public leaderboard.</p>
<p>Actually different models. That gave a good profit from ensembling. </p>
<p>Hi @Heng,
Thanks!
It is hard to estimate the gain on stage2 data. On Public LB this 2nd level model with adaptive thresholds and FP rejection increased the score by ~15.</p>
<h2><strong>Main contributions</strong></h2>
<ul>
<li>targets - we predict touching borders along with the masks to solve
the problem as instance segmentation</li>
<li>loss function - that combines crossentropy and soft dice loss in such
a way that pixel imbalance doesn't affect the results</li>
<li>very deep encoder-decoder architectures that also achieve
state-of-the-art results in other binary segmentation problems
(SpaceNet, Inria and others)</li>
<li>tricky postprocessing that combines watershed, morphological features
and second-level model with Gradient Boosted Trees</li>
<li>task specific data augmentations</li>
</ul>
<p>Combined, these tricks allowed us to achieve 1st place on the public LB even with a single model.</p>
<h2><strong>U-Net vs Mask-RCNN?</strong></h2>
<p>We had experience in segmentation challenges before (Carvana, Urban 3D, Spacenet, Konica Minolta) where we actually competed against each other and therefore the choice was clear - <strong>UNet on steroids</strong>. </p>
<h2><strong>Target masks</strong></h2>
<p>First we tried the simplest approach and added a watershed line for binary masks (i.e. modified GT masks to always have gaps between nuclei).
This approach gave us around ~500 on Public LB. Clearly this is not enough to win. </p>
<p>Then we added a second channel with contours, where the width of the contours depended on the nucleus size.
These masks + simple watershed postprocessing gave us ~525, not a breakthrough but a hint in the right direction.</p>
<p>Looking at the mistakes it was clear that networks easily predict contours in non-ambiguous places and fail miserably in places where we actually need contours to separate the nuclei. Hence we decided to predict only the borders between the cells. This easily gave us 550+ on the LB with single-network results + watershed postprocessing.</p>
<p>Still if we have a full mask in one channel and a border in another, sometimes the seeds for watershed are not good enough. A better approach is to change nuclei masks and make pixels empty on the borders. This also allows to use softmax as the target activation instead of sigmoid. This better separates nuclei but actually decreases MaP because of high thresholds for IoU. We solved this issue with additional networks trained on full masks and combined the results in the postprocessing step.</p>
<p><strong>Final approach for targets</strong> (a construction sketch follows this list):</p>
<ul>
<li><p>2-channel masks for networks with sigmoid activation, i.e. (mask - border, border), or 3-channel masks for networks with softmax activation, i.e. (mask - border, border, 1 - mask - border)</p></li>
<li><p>2-channel full masks, i.e. (mask, border)</p></li>
</ul>
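<p>To make the target construction above concrete, here is a hedged sketch of how such channels could be derived from an instance-labeled image. The function name, the square footprint and the fixed border width are my own simplifications (in the earlier contour variant the width depended on nucleus size); the team's actual code is in the linked GitHub repo.</p>
<pre><code>import numpy as np
from skimage.morphology import dilation, square

def make_targets(labels, border_width=3):
    """labels: 2D int array, 0 = background, 1..N = nucleus instances."""
    mask = (labels > 0).astype(np.float32)
    # touching borders: pixels of neighbouring instances reached when an instance is dilated
    borders = np.zeros_like(mask)
    footprint = square(border_width)
    for lab in np.unique(labels):
        if lab == 0:
            continue
        inst = labels == lab
        grown = dilation(inst, footprint)
        borders[grown & (labels > 0) & ~inst] = 1.0
    body = np.clip(mask - borders, 0.0, 1.0)             # nuclei with border pixels removed
    background = np.clip(1.0 - body - borders, 0.0, 1.0)
    sigmoid_target = np.stack([body, borders], axis=-1)              # 2 channels
    softmax_target = np.stack([body, borders, background], axis=-1)  # 3 channels
    return sigmoid_target, softmax_target
</code></pre>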
<p>For c43e356beedae15fec60ae3f8b06ea8e9036081951deb7e44f481b15b3acfc37 it looks like this
<img src="https://www.dropbox.com/s/4igam47pqg0i82q/c43e356beedae15fec60ae3f8b06ea8e9036081951deb7e44f481b15b3acfc37_predict.png?dl=1" alt="Prediction"></p>
<p>Final result after post processing:
<img src="https://www.dropbox.com/s/isbils3jtelj297/c43e356beedae15fec60ae3f8b06ea8e9036081951deb7e44f481b15b3acfc37.png?dl=1" alt="Final result"></p>
<h2><strong>Augmentations</strong></h2>
<p>As there were just some hundreds of training images, we needed to come up with specific augmentations to prevent our models from overfitting and make them more or less generalizable.
We used a lot of heavy augmentations (maybe too heavy):</p>
<ul>
<li>Clahe, Sharpen, Emboss</li>
<li>Gaussian Noise</li>
<li>Color to Gray</li>
<li>Inverting - we should not have used it, some images were not
predicted correctly on stage2 because of this augmentation</li>
<li>Remapping grayscale images to random color images</li>
<li>Blur, Median Blur, Motion Blur</li>
<li>contrast and brightness</li>
<li>random scale, rotates and flips</li>
<li>Heavy geometric transformations: Elastic Transform, Perspective
Transform, Piecewise Affine transforms, pincushion distortion</li>
<li>Random HSV</li>
<li>Channel shuffle - I guess this one was very important due to the
nature of the data</li>
<li>Nucleus copying on images. That created a lot of overlapping nuclei and
seemed to help the networks learn better borders for overlapping nuclei
(see the sketch after this list).</li>
</ul>
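<p>The nucleus-copying augmentation could be implemented roughly as below. This is my own illustration and not the team's code: a random nucleus is cut out with its mask and pasted near another nucleus so that the instances overlap; a real implementation would also blend the edges and recompute the border targets afterwards.</p>
<pre><code>import numpy as np

def copy_paste_nucleus(image, labels, rng=np.random):
    """image: HxWxC array, labels: HxW int instance map (0 = background). Modified in place."""
    ids = np.unique(labels)[1:]
    if len(ids) < 2:
        return image, labels
    src = rng.choice(ids)
    ys, xs = np.where(labels == src)
    h, w = ys.max() - ys.min() + 1, xs.max() - xs.min() + 1
    patch = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1].copy()
    patch_mask = labels[ys.min():ys.max() + 1, xs.min():xs.max() + 1] == src
    # paste partially on top of another random nucleus to create an overlap
    dst = rng.choice(ids[ids != src])
    dy, dx = np.argwhere(labels == dst)[0]
    y0 = int(np.clip(dy - h // 2 + rng.randint(-h, h + 1), 0, labels.shape[0] - h))
    x0 = int(np.clip(dx - w // 2 + rng.randint(-w, w + 1), 0, labels.shape[1] - w))
    region_img = image[y0:y0 + h, x0:x0 + w]
    region_lab = labels[y0:y0 + h, x0:x0 + w]
    region_img[patch_mask] = patch[patch_mask]
    region_lab[patch_mask] = labels.max() + 1  # new instance id
    return image, labels
</code></pre>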
<h2><strong>Network architectures</strong></h2>
<p>We used UNet like encoder-decoder architectures with encoders pretrained on ImageNet.</p>
<p>Surprisingly, simple encoders like VGG16 did not work in this competition at all. They failed on the hard cases where the tissue looked like a nucleus but was not, especially on color images like 59b35151d4a7a5ffdd7ab7f171b142db8cfe40beeee67277fac6adca4d042c4</p>
<p>After these experiments we decided that <strong><em>we have to go deeper</em></strong>!!!
As a result, the top-performing encoders in this competition were: DPN-92, Resnet-152, InceptionResnetV2, Resnet101. </p>
<h2><strong>2nd Level model / Postprocessing</strong></h2>
<p>LightGBM models were trained on predicted nucleus candidates. Each base candidate was selected with the lowest separation threshold and then re-separated with a few higher thresholds and erosion. We used a few basic morphological features of each candidate, such as solidity, circularity, convexity, area, neighbours' median area, neighbour count, etc. The prediction target was the IoU with the ground truth (set to 0 if IoU < 0.5). </p>
<p>Then the best separation threshold was selected for each candidate according to the predicted IoU. Candidates with a small predicted IoU were simply removed (IoU < 0.3 and IoU < 0.2 for our two submissions; it was hard to tune this threshold using OOF predictions, because there is a small overfit to image types even with such heavy augmentation).</p>
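<p>A hedged sketch of the idea: compute simple morphological features per candidate with skimage's <code>regionprops</code> and regress the expected IoU with LightGBM. The feature set, hyperparameters and function names below are illustrative assumptions, not the authors' exact pipeline.</p>
<pre><code>import numpy as np
import lightgbm as lgb
from skimage.measure import regionprops

def candidate_features(region, all_regions):
    """Basic morphological features for one candidate (a skimage regionprops object)."""
    areas = np.array([r.area for r in all_regions])
    perim = max(region.perimeter, 1.0)
    return [
        region.area,
        region.solidity,                       # area / convex-hull area
        4 * np.pi * region.area / perim ** 2,  # circularity
        region.eccentricity,
        np.median(areas),                      # simple neighbourhood statistics
        len(all_regions),
    ]

# X: one feature row per (candidate, separation threshold) combination,
# y: IoU with the best-matching GT instance (0 if IoU < 0.5), gathered from OOF predictions
def train_candidate_model(X, y):
    return lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05).fit(X, y)
</code></pre>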
<h2><strong>Additional data:</strong></h2>
<p>We added some images from the janowczyk, nucleisegmentationbenchmark, isbi2009, BBBC020 and TNBC datasets. Some of them decreased our score on the Public LB by making predictions on color images less confident. Unfortunately we did not have time to find the culprits and left it as is. </p>
<h2><strong>Ensembling</strong></h2>
<p>We used a simple approach to ensembling: we just averaged our masks before postprocessing.</p>
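<p>In code terms this is essentially the following (a trivial sketch, variable names assumed):</p>
<pre><code>import numpy as np

def ensemble_masks(prob_maps):
    """prob_maps: list of HxWxC probability maps predicted by different models."""
    return np.mean(np.stack(prob_maps, axis=0), axis=0)

# averaged = ensemble_masks([model_a_probs, model_b_probs, model_c_probs])
# ...then run watershed and the 2nd-level model on `averaged`
</code></pre>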
<h2><strong>Training</strong></h2>
<ul>
<li><p>Random Crops: 256x256</p></li>
<li><p>Batch Size: 16</p></li>
<li><p>optimizer: Adam</p></li>
<li><p>learning rate: initial 1e-4 with decay (we had different LR policies, but mostly small LR no more than 1e-4)</p></li>
<li><p>preprocessing: same as on ImageNet depending on network</p></li>
</ul>
<h2><strong>Loss function</strong></h2>
<p>For networks with sigmoid activation and 2-channel masks we used a combination of binary_crossentropy with soft_dice per channel. For networks with softmax activation and 3-channel masks we used a combination of categorical_crossentropy with soft_dice per channel (soft dice was applied only to the mask and border channels).</p>
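<p>For the sigmoid/2-channel case, a minimal PyTorch-style sketch of that combination could look like the code below. The original implementation was in Keras, and the exact weighting and reduction details are assumptions on my side.</p>
<pre><code>import torch
import torch.nn.functional as F

def soft_dice(pred, target, eps=1e-6):
    """Soft dice computed over all pixels of one channel."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_soft_dice_loss(logits, targets):
    """logits/targets: (B, 2, H, W) tensors, channels = (mask - border, border)."""
    probs = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    dice = sum(soft_dice(probs[:, c], targets[:, c]) for c in range(probs.shape[1]))
    dice = dice / probs.shape[1]
    return bce + (1 - dice)
</code></pre>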
<h2><strong>Test Time Augmentations (TTA)</strong></h2>
<p>Standard Flips/Rotations(0,90,180,270). </p>
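<p>A sketch of how such TTA could be applied and averaged; the <code>predict</code> callable here is an assumed stand-in for a model forward pass on a single image.</p>
<pre><code>import numpy as np

def tta_predict(image, predict):
    """Average predictions over horizontal flip and 0/90/180/270 degree rotations."""
    preds = []
    for flip in (False, True):
        img_f = image[:, ::-1] if flip else image
        for k in range(4):
            pred = predict(np.rot90(img_f, k).copy())
            pred = np.rot90(pred, -k)                      # undo the rotation
            preds.append(pred[:, ::-1] if flip else pred)  # undo the flip
    return np.mean(preds, axis=0)
</code></pre>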
<h2><strong>GitHub repo</strong></h2>
<p><a href="https://github.com/selimsef/dsb2018_topcoders/">https://github.com/selimsef/dsb2018_topcoders/</a></p>
<p>Labels were generated from the original data, i.e. for each image there was a single grayscale tif file with labeled instances. AFAIR there should be some code on kernels/forums which does that.</p>
<p>Applying aggressive augmentations was enough to prevent overfitting. Spatial dropout or L2 regularisation only slowed down training speed. </p>
<p>Ordinary Dropout is usually harmful for convolutional layers. But SpatialDropout2D can be used for segmentation tasks and gives good results in my experience.
We did not use it here though, because we already had excessive augmentations and adding dropout would increase training time even more.</p>
<p>Informally:</p>
<ul>
<li>bce - has a class imbalance problem, and every pixel
is considered independently. This makes the predictions a bit fuzzy.</li>
<li>soft dice (and jaccard) is computed over all pixels in the image, so
the predictions have better shapes and are not fuzzy. The problem is that they are very
confident, i.e. the probability is close to 0 or 1 even for wrong pixels.</li>
</ul>
<p>Their combination is a workaround to alleviate the problems existing in pure bce or dice losses. </p>
<p>To combine them the simplest approach is</p>
<pre><code>loss = bce + (1 - dice)
</code></pre>
<p>Depending on the data you can play with the weights for the loss terms and see the outcome</p>
<pre><code>loss = w1 * bce + w2 * (1 - dice)
</code></pre>
<p>I used SpatialDropout2D just before the classification layer</p>
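<p>In PyTorch terms that would correspond to something like an <code>nn.Dropout2d</code> right before the 1x1 classification convolution (the channel counts and dropout rate below are assumptions; the original code used Keras SpatialDropout2D):</p>
<pre><code>import torch.nn as nn

# last decoder block -> spatial dropout -> 1x1 classification conv
head = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Dropout2d(p=0.3),               # drops whole feature maps, not single pixels
    nn.Conv2d(64, 2, kernel_size=1),   # 2 output channels: (mask - border, border)
)
</code></pre>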
<p>We actually thought about Mask-RCNN-like 2nd lvl model but did not have time to implement this. Most likely the results would be much better. The current approach with predicted masks' morphological features is very limited as it doesn't take image features into account.</p>
<p>Hi Heng,</p>
<ol>
<li>I don't exactly remember what network/s was used for that exact experiment. But yeah, we used at least 8 TTA everywhere.</li>
<li>From resnets we had good results with Resnet101, Resnet152. </li>
<li>a simple watershed postprocessing could be like this:</li>
</ol>
<pre><code>import numpy as np
from scipy import ndimage
from skimage.morphology import remove_small_objects
from skimage.segmentation import watershed

def label_mask(mask_img, border_img, seed_ths, threshold, seed_size=8, obj_size=10):
    # seeds: confident mask pixels that are not on a predicted border
    img_copy = np.copy(mask_img)
    m = img_copy * (1 - border_img)
    img_copy[m <= seed_ths] = 0
    img_copy[m > seed_ths] = 1
    img_copy = img_copy.astype(bool)
    img_copy = remove_small_objects(img_copy, seed_size).astype(np.uint8)
    # full mask: lower threshold, small objects removed
    mask_img[mask_img <= threshold] = 0
    mask_img[mask_img > threshold] = 1
    mask_img = mask_img.astype(bool)
    mask_img = remove_small_objects(mask_img, obj_size).astype(np.uint8)
    # grow the labeled seeds inside the mask, keeping a watershed line between instances
    markers = ndimage.label(img_copy, output=np.uint32)[0]
    labels = watershed(mask_img, markers, mask=mask_img, watershed_line=True)
    return labels
</code></pre>
<p>4. You can get an idea of how it works by looking at these predictions from stage 2 (single resnet152):
<img src="https://www.dropbox.com/s/jsnnsgwdsrp5wg3/00e67980783eaae30d41f6479beb9c04eb91de4608537673898039094800ddb0.png?dl=1" alt="00e6798">
<img src="https://www.dropbox.com/s/vpqy6iqbihzaihd/1edd67f41a030b1c04828e2f58581751b960b5941e6d39239b5cb57653f6b0a7.png?dl=1" alt="1edd67f41"></p>
<p>It is quite easy to do with non-vectorized code with loops, but it is quite slow.
A better approach is to use labels, dilation and watershed with watershed_line=True, etc. The watershed line will be the border between the nuclei.</p>
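<p>One hedged interpretation of that recipe (the dilation size and the distance-transform elevation are my own choices):</p>
<pre><code>import numpy as np
from scipy import ndimage
from skimage.morphology import dilation, square
from skimage.segmentation import watershed

def borders_from_labels(labels, grow=3):
    """Derive a border map between touching/nearby nuclei from an instance label image."""
    mask = labels > 0
    # let instances grow a little so that nearby nuclei merge, ...
    grown = dilation(mask, square(grow))
    # ...then re-separate them from the original markers; the watershed line is the border
    distance = ndimage.distance_transform_edt(grown)
    separated = watershed(-distance, labels, mask=grown, watershed_line=True)
    return (grown & (separated == 0)).astype(np.uint8)
</code></pre>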
<p>We did not change original training data. We just produced different masks from the original labels. And added a few external datasets as described in the post. </p>
<p>That's right. Encoders were initialized with pretrained weights from ImageNet. Then we trained models end to end.
From my experience with a frozen encoder it is usually not possible to achieve good segmentation results even on datasets that are more or less similar to ImageNet. </p>
<p>@theNickName I tried to train a plain UNet from scratch using the same pipeline. The results were much worse. </p>
<p>Our solution is quite simple: we just use RGB everywhere.</p>
<p>Hi, I simply changed the encoders and replaced valid padding with same padding. <br>
Decoders were of two flavours, and their performance was the same:
- U-Net-like with the standard upsampling-conv approach
- a custom FPN-like decoder</p>
<p>You should not use dice for the background channel, that's the problem.</p>
<p>If you use a channelwise dice loss then you will not have a border/body pixelwise loss imbalance. Resizing should be used only for train-time augmentations (or multiscale prediction during inference), i.e. a random-sized crop is resized to the same size, e.g. 224 or 256. </p>
<p>Training with random crops of a proper size usually works fine on all segmentation problems. For this problem a crop size of 224 or 256 was optimal in my experience. </p>
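<p>As a sketch, such a training-time crop could look like this. The scale range is taken from the augmentation description above; everything else is a simplification of the real pipeline, which also applied the other augmentations.</p>
<pre><code>import numpy as np
import cv2

def random_sized_crop(image, mask, out_size=256, scale=(0.55, 1.45), rng=np.random):
    """Take a random crop whose side is out_size * random scale, then resize back to out_size."""
    side = int(out_size * rng.uniform(*scale))
    side = min(side, image.shape[0], image.shape[1])
    y = rng.randint(0, image.shape[0] - side + 1)
    x = rng.randint(0, image.shape[1] - side + 1)
    crop_img = cv2.resize(image[y:y + side, x:x + side], (out_size, out_size))
    crop_msk = cv2.resize(mask[y:y + side, x:x + side], (out_size, out_size),
                          interpolation=cv2.INTER_NEAREST)
    return crop_img, crop_msk
</code></pre>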
2018 Data Science Bowl | repeating topcoders unet results | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2018 Data Science Bowl
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Updated results, please see "20180425". Below are results on training images. The next stage is to do evaluation on the stage1 test set. The border pixels are a minority class, so class balancing is important and tricky. </p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/318873/9292/8efed2e62c919e6d70a2ab548b1a33014877fe8a23f177ef25a9dee25ffe8842.png" alt="enter image description here"></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/318873/9294/2f929b067a59f88530b6bfa6f6889bc3a38adf88d594895973d1c8b2549fd93d.png" alt="enter image description here"></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/318873/9296/a102535b0e88374bea4a1cfd9ee7cb3822ff54f4ab2a9845d428ec22f9ee2288.png" alt="enter image description here"></p>
<p>Yes. They will open source later. But I would like to do an implementation myself first and then compare it with their version. It would be a great learning experience and this process will uncover a lots of implementation details.</p>
<p>Just a quick update: I am implementing resnext152 based on inplace BN, which will save 50% memory as claimed here:</p>
<p><a href="https://github.com/mapillary/inplace_abn">https://github.com/mapillary/inplace_abn</a></p>
<p>UNet seems to be an important approach for future kaggle challenges as well. Hence I have decided to repeat the topcoders results, making modifications if required. My target is 0.580 on the public LB on the stage1 dataset first. (There is evaluation code in my code base for scoring your csv file against the released stage1 test solution csv.)</p>
<p>My code base is at: <a href="https://drive.google.com/drive/folders/1EtBdlPK3BVahHBZtB103QQxNUdWaJ6di">https://drive.google.com/drive/folders/1EtBdlPK3BVahHBZtB103QQxNUdWaJ6di</a> </p>
<p>It is based on pytorch.</p>
<p>It is still in progress. you can follow this thread for update.</p>
<p>The attached ppt shows current progress.</p>
<hr>
<p>latest : 20180427 folder</p>
<ul>
<li><p>unet (fpn se-resnext152 backbone)</p></li>
<li><p>learning watershed line as border</p></li>
</ul>
<p>Hi, currently google drive is most convenient for me. I suggest that you take my code and put it on github?</p>
<p>20180420 version (early iterations):
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/317042/9224/unet.png" alt="enter image description here"></p>
<p>Results after overfitting the train data with more iterations:</p>
<ul>
<li><p>I start to understand why @topcoders want to focus on the overlapping boundary and also use a size-dependent boundary</p></li>
<li><p>One can also think of a cascade of UNets, where the next UNet learns the weak points of the previous one, and then the predictions are added up for watershed post-processing</p></li>
<li><p>The next plan is to check whether watershed can give close-to-zero error on this over-fitted train data. This will be the upper bound of the processing pipeline</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/317103/9225/0ea221716cf13710214dcd331a61cea48308c3940df1d28cfc7fd817c83714e1.png" alt="enter image description here"></p></li>
</ul>
<p>Maybe cascade is not the correct word. Because the intention is to use watershed as post-processing, one can think of the following design cycle for a solution:</p>
<ol>
<li><p>Using the ground truth, we find the best markers and distance transform that would give an error-free watershed transform. Assume the best distance transform is called T_best. We now must think of a way to predict T_best from the UNet.</p></li>
<li><p>Say we intend to train a UNet to output one channel, just the foreground mask. Let's call this C1.</p></li>
<li><p>Assume the results are not ideal. We want to improve the results with another channel C2. What should C2 be? If we are going to add up the channels, then we need T_best = C1 + C2, so C2 = T_best - C1.</p></li>
<li><p>Assume the results are still not good. We think we can learn another channel C3; then we could use C3 = T_best - C2 - C1.</p></li>
<li><p>By observation, C2 is the border. The mistakes of the C2 channel from the UNet are usually the borders at overlapping nuclei. So we add a channel C3 = mistakes of C2, to correct the mistakes of C2.</p></li>
</ol>
<p>Repeating top kagglers' results is the first step to becoming a top kaggler :)</p>
<p>thanks for the post. I will try to use it</p>
<p>Latest results on se-resnext152: here are results on the stage1 test set (not used in training). It looks good!</p>
<p>However, there are some things that are not correct and that I need to improve:</p>
<ol>
<li><p>the ground-truth border should have a varying thickness (I used a ground truth that is too thick, making small overlapping nuclei occluded) </p></li>
<li><p>loss balancing. I will try @Selim_Sef's suggestion of bce + soft-dice in the next update</p></li>
<li><p>Train data augmentation. I need to improve this.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/320117/9315/0a849e0eb15faa8a6d7329c3dd66aabe9a294cccb52ed30a90c8ca99092ae732.png" alt="enter image description here"></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/320117/9319/4f949bd8d914bbfa06f40d6a0e2b5b75c38bf53dbcbafc48c97f105bee4f8fac.png" alt="enter image description here"></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/320117/9320/432f367a4c5b5674de2e2977744d10289a064e5704b21af6607b4975be47c580.png" alt="enter image description here"></p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/320117/9317/1962d0c5faf3e85cda80e0578e0cb7aca50826d781620e5c1c4cc586bc69f81a.png" alt="enter image description here"></p></li>
</ol> |