tyriaa committed
Commit 8078d22
1 Parent(s): f2c64c9

Initial commit

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. .DS_Store +0 -0
  2. Dockerfile +17 -0
  3. README.md +4 -4
  4. app.log +0 -0
  5. app.py +824 -0
  6. dataset/.DS_Store +0 -0
  7. dataset/images/.DS_Store +0 -0
  8. dataset/images/train/02_JPG.rf.d6063f8ca200e543da7becc1bf260ed5.jpg +0 -0
  9. dataset/images/train/03_JPG.rf.2ca107348e11cdefab68044dba66388d.jpg +0 -0
  10. dataset/images/train/04_JPG.rf.b0b546ecbc6b70149b8932018e69fef0.jpg +0 -0
  11. dataset/images/train/05_jpg.rf.46241369ebb0749c40882400f82eb224.jpg +0 -0
  12. dataset/images/train/08_JPG.rf.1f81e954a3bbfc49dcd30e3ba0eb5e98.jpg +0 -0
  13. dataset/images/train/09_JPG.rf.9119efd8c174f968457a893669209835.jpg +0 -0
  14. dataset/images/train/10_JPG.rf.6745a7b3ea828239398b85182acba199.jpg +0 -0
  15. dataset/images/train/11_JPG.rf.3aa3109a1838549cf273cdbe8b2cafeb.jpg +0 -0
  16. dataset/images/train/12_jpg.rf.357643b374df92f81f9dee7c701b2315.jpg +0 -0
  17. dataset/images/train/14_jpg.rf.d91472c724e7c34da4d96ac5e504044c.jpg +0 -0
  18. dataset/images/train/15_jpg.rf.284413e4432b16253b4cd19f0c4f01e2.jpg +0 -0
  19. dataset/images/train/15r_jpg.rf.2da1990173346311d3a3555e23a9164a.jpg +0 -0
  20. dataset/images/train/16_jpg.rf.9fdb4f56ec8596ddcc31db5bbffc26a0.jpg +0 -0
  21. dataset/images/train/18_jpg.rf.4d241aab78af17171d83f3a50f1cf1aa.jpg +0 -0
  22. dataset/images/train/20_jpg.rf.4a45f799ba16b5ff81ab1929f12a12b1.jpg +0 -0
  23. dataset/images/train/21_jpg.rf.d1d6dd254d2e5f396589ccc68a3c8536.jpg +0 -0
  24. dataset/images/train/22_jpg.rf.a72964a78ea36c7bebe3a09cf27ef6ba.jpg +0 -0
  25. dataset/images/train/25_jpg.rf.893f4286e0c8a3cef2efb7612f248147.jpg +0 -0
  26. dataset/images/train/26_jpg.rf.a03c550707ff22cd50ff7f54bebda7ab.jpg +0 -0
  27. dataset/images/train/29_jpg.rf.931769b58ae20d18d1f09d042bc44176.jpg +0 -0
  28. dataset/images/train/31_jpg.rf.f31137f793efde0462ed560d426dcd24.jpg +0 -0
  29. dataset/images/train/7-Figure14-1_jpg.rf.1c6cb204ed1f37c8fed44178a02e9058.jpg +0 -0
  30. dataset/images/train/LU-F_mod_jpg.rf.fc594179772346639512f891960969bb.jpg +0 -0
  31. dataset/images/train/Solder_Voids_jpg.rf.d40f1b71d8a801f084067fde7f33fb08.jpg +0 -0
  32. dataset/images/train/gc10_lake_voids_260-31_jpg.rf.479f3d9dda8dd22097d3d93c78f7e11d.jpg +0 -0
  33. dataset/images/train/images_jpg.rf.675b31c5e1ba2b77f0fa5ca92e2391b0.jpg +0 -0
  34. dataset/images/train/qfn-voiding_0_jpg.rf.2945527db158e9ff4943febaf9cd3eab.jpg +0 -0
  35. dataset/images/train/techtips_3_jpg.rf.ad88af637816f0999f4df0b18dfef293.jpg +0 -0
  36. dataset/images/val/025_JPG.rf.b2cdc2d984adff593dc985f555b8d280.jpg +0 -0
  37. dataset/images/val/06_jpg.rf.a94e0a678df372f5ea1395a8d888a388.jpg +0 -0
  38. dataset/images/val/07_JPG.rf.324d17a87726bd2a9614536c687c6e68.jpg +0 -0
  39. dataset/images/val/23_jpg.rf.8e9afa6b3b471e10c26637d47700f28b.jpg +0 -0
  40. dataset/images/val/24_jpg.rf.4caa996d97e35f6ce4f27a527ea43465.jpg +0 -0
  41. dataset/images/val/27_jpg.rf.3475fce31d283058f46d9f349c04cb1a.jpg +0 -0
  42. dataset/images/val/28_jpg.rf.50e348d807d35667583137c9a6c162ca.jpg +0 -0
  43. dataset/images/val/30_jpg.rf.ed72622e97cf0d884997585686cfe40a.jpg +0 -0
  44. dataset/test/.DS_Store +0 -0
  45. dataset/test/images/17_jpg.rf.ec31940ea72d0cf8b9f38dba68789fcf.jpg +0 -0
  46. dataset/test/images/19_jpg.rf.2c5ffd63bd0ce6b9b0c80fef69d101dc.jpg +0 -0
  47. dataset/test/images/32_jpg.rf.f3e33dcf611a8754c0765224f7873d8b.jpg +0 -0
  48. dataset/test/images/normal-reflow_jpg.rf.2c4fbc1fda915b821b85689ae257e116.jpg +0 -0
  49. dataset/test/images/techtips_31_jpg.rf.673cd3c7c8511e534766e6dbc3171b39.jpg +0 -0
  50. dataset/test/labels/.DS_Store +0 -0
.DS_Store ADDED
Binary file (6.15 kB).
 
Dockerfile ADDED
@@ -0,0 +1,17 @@
+ # Use a lightweight Python base image
+ FROM python:3.9-slim
+ 
+ # Set the working directory
+ WORKDIR /app
+ 
+ # Copy the required files into the container
+ COPY . /app
+ 
+ # Install the dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+ 
+ # Expose port 7860 for the Flask server
+ EXPOSE 7860
+ 
+ # Command to start Flask
+ CMD ["python", "app.py"]
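
The image can be built and run locally with the standard Docker CLI, for example docker build -t segmentation-app . followed by docker run -p 7860:7860 segmentation-app (the image tag here is arbitrary). Note that the pip install step expects a requirements.txt at the repository root; that file is not among the 50 files shown in this truncated view.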
README.md CHANGED
@@ -1,8 +1,8 @@
  ---
- title: Project
- emoji: 📊
- colorFrom: indigo
- colorTo: blue
+ title: Segmentation Project
+ emoji: 😻
+ colorFrom: red
+ colorTo: purple
  sdk: docker
  pinned: false
  ---
app.log ADDED
File without changes
app.py ADDED
@@ -0,0 +1,824 @@
+ from flask import Flask, render_template, request, jsonify
+ from flask_socketio import SocketIO
+ import sys
+ import os
+ sys.path.append(os.path.dirname(os.path.abspath(__file__)))
+ import json
+ import logging
+ import multiprocessing
+ import random
+ import shutil
+ import subprocess
+ import threading
+ import time
+ from threading import Lock
+ 
+ import numpy as np
+ import torch
+ from PIL import Image
+ from ultralytics import YOLO
+ from sam2.build_sam import build_sam2
+ from sam2.sam2_image_predictor import SAM2ImagePredictor
+ from utils.helpers import (
+     blend_mask_with_image,
+     save_mask_as_png,
+     convert_mask_to_yolo,
+ )
+ 
+ 
+ class Predictor:
+     """Thin wrapper around SAM 2 that tracks whether an image has been set."""
+ 
+     def __init__(self, model_cfg, checkpoint, device):
+         self.device = device
+         self.model = build_sam2(model_cfg, checkpoint, device=device)
+         self.predictor = SAM2ImagePredictor(self.model)
+         self.image_set = False
+ 
+     def set_image(self, image):
+         """Set the image for SAM prediction."""
+         self.image = image
+         self.predictor.set_image(image)
+         self.image_set = True
+ 
+     def predict(self, point_coords, point_labels, multimask_output=False):
+         """Run SAM prediction."""
+         if not self.image_set:
+             raise RuntimeError("An image must be set with .set_image(...) before mask prediction.")
+         return self.predictor.predict(
+             point_coords=point_coords,
+             point_labels=point_labels,
+             multimask_output=multimask_output
+         )
+ 
+ 
+ # Initialize Flask app and SocketIO
+ app = Flask(__name__)
+ socketio = SocketIO(app)
+ 
+ # Define Base Directory
+ BASE_DIR = os.path.abspath(os.path.dirname(__file__))
+ 
+ # Configure logging before the first logging call below; a bare logging.info()
+ # with no handlers would install a default handler and turn a later
+ # basicConfig() into a no-op, so the file handler must be registered first.
+ logging.basicConfig(
+     level=logging.INFO,
+     format='%(asctime)s [%(levelname)s] %(message)s',
+     handlers=[
+         logging.StreamHandler(),
+         logging.FileHandler(os.path.join(BASE_DIR, "app.log"))  # Log to a file
+     ]
+ )
+ 
+ # Folder structure with absolute paths
+ UPLOAD_FOLDERS = {
+     'input': os.path.join(BASE_DIR, 'static/uploads/input'),
+     'segmented_voids': os.path.join(BASE_DIR, 'static/uploads/segmented/voids'),
+     'segmented_chips': os.path.join(BASE_DIR, 'static/uploads/segmented/chips'),
+     'mask_voids': os.path.join(BASE_DIR, 'static/uploads/mask/voids'),
+     'mask_chips': os.path.join(BASE_DIR, 'static/uploads/mask/chips'),
+     'automatic_segmented': os.path.join(BASE_DIR, 'static/uploads/segmented/automatic'),
+ }
+ 
+ HISTORY_FOLDERS = {
+     'images': os.path.join(BASE_DIR, 'static/history/images'),
+     'masks_chip': os.path.join(BASE_DIR, 'static/history/masks/chip'),
+     'masks_void': os.path.join(BASE_DIR, 'static/history/masks/void'),
+ }
+ 
+ # NOTE: these paths (dataset/train/images, ...) differ from the committed
+ # dataset/images/train layout; the folders are created at startup either way.
+ DATASET_FOLDERS = {
+     'train_images': os.path.join(BASE_DIR, 'dataset/train/images'),
+     'train_labels': os.path.join(BASE_DIR, 'dataset/train/labels'),
+     'val_images': os.path.join(BASE_DIR, 'dataset/val/images'),
+     'val_labels': os.path.join(BASE_DIR, 'dataset/val/labels'),
+     'temp_backup': os.path.join(BASE_DIR, 'temp_backup'),
+     'models': os.path.join(BASE_DIR, 'models'),
+     'models_old': os.path.join(BASE_DIR, 'models/old'),
+ }
+ 
+ # Ensure all folders exist
+ for folder_name, folder_path in {**UPLOAD_FOLDERS, **HISTORY_FOLDERS, **DATASET_FOLDERS}.items():
+     os.makedirs(folder_path, exist_ok=True)
+     logging.info(f"Ensured folder exists: {folder_name} -> {folder_path}")
+ 
+ training_process = None
+ 
+ 
+ def initialize_training_status():
+     """Initialize global training status."""
+     global training_status
+     training_status = {'running': False, 'cancelled': False}
+ 
+ def persist_training_status():
+     """Save training status to a file."""
+     with open(os.path.join(BASE_DIR, 'training_status.json'), 'w') as status_file:
+         json.dump(training_status, status_file)
+ 
+ def load_training_status():
+     """Load training status from a file."""
+     global training_status
+     status_path = os.path.join(BASE_DIR, 'training_status.json')
+     if os.path.exists(status_path):
+         with open(status_path, 'r') as status_file:
+             training_status = json.load(status_file)
+     else:
+         training_status = {'running': False, 'cancelled': False}
+ 
+ load_training_status()
+ 
+ os.environ["TORCH_CUDNN_SDPA_ENABLED"] = "0"
+ 
+ # Initialize SAM Predictor
+ MODEL_CFG = r"sam2/sam2_hiera_l.yaml"
+ CHECKPOINT = r"sam2/checkpoints/sam2.1_hiera_large.pt"
+ DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ predictor = Predictor(MODEL_CFG, CHECKPOINT, DEVICE)
+ 
+ # Initialize YOLO-seg
+ YOLO_CFG = os.path.join(DATASET_FOLDERS['models'], "best.pt")
+ yolo_model = YOLO(YOLO_CFG)
+ 
+ 
+ @app.route('/')
+ def index():
+     """Serve the main UI."""
+     return render_template('index.html')
+ 
+ @app.route('/upload', methods=['POST'])
+ def upload_image():
+     """Handle image uploads."""
+     if 'file' not in request.files:
+         return jsonify({'error': 'No file uploaded'}), 400
+     file = request.files['file']
+     if file.filename == '':
+         return jsonify({'error': 'No file selected'}), 400
+ 
+     # Save the uploaded file to the input folder
+     input_path = os.path.join(UPLOAD_FOLDERS['input'], file.filename)
+     file.save(input_path)
+ 
+     # Set the uploaded image in the predictor
+     image = np.array(Image.open(input_path).convert("RGB"))
+     predictor.set_image(image)
+ 
+     # Return a web-accessible URL instead of the file system path
+     web_accessible_url = f"/static/uploads/input/{file.filename}"
+     print(f"Image uploaded and set for prediction: {input_path}")
+     return jsonify({'image_url': web_accessible_url})
+ @app.route('/segment', methods=['POST'])
+ def segment():
+     """Perform segmentation and return the blended image URL."""
+     try:
+         # Extract data from request
+         data = request.json
+         points = np.array(data.get('points', []))
+         labels = np.array(data.get('labels', []))
+         current_class = data.get('class', 'voids')  # Default to 'voids' if class not provided
+ 
+         # Ensure predictor has an image set
+         if not predictor.image_set:
+             raise ValueError("No image set for prediction.")
+ 
+         # Perform SAM prediction
+         masks, _, _ = predictor.predict(
+             point_coords=points,
+             point_labels=labels,
+             multimask_output=False
+         )
+ 
+         # Check if masks exist and have non-zero elements
+         if masks is None or masks.size == 0:
+             raise RuntimeError("No masks were generated by the predictor.")
+ 
+         # Define output paths based on class
+         mask_folder = UPLOAD_FOLDERS.get(f'mask_{current_class}')
+         segmented_folder = UPLOAD_FOLDERS.get(f'segmented_{current_class}')
+ 
+         if not mask_folder or not segmented_folder:
+             raise ValueError(f"Invalid class '{current_class}' provided.")
+ 
+         os.makedirs(mask_folder, exist_ok=True)
+         os.makedirs(segmented_folder, exist_ok=True)
+ 
+         # Save the raw mask
+         mask_path = os.path.join(mask_folder, 'raw_mask.png')
+         save_mask_as_png(masks[0], mask_path)
+ 
+         # Generate blended image
+         blend_color = [34, 139, 34] if current_class == 'voids' else [30, 144, 255]  # Green for voids, blue for chips
+         blended_image = blend_mask_with_image(predictor.image, masks[0], blend_color)
+ 
+         # Save blended image
+         blended_filename = f"blended_{current_class}.png"
+         blended_path = os.path.join(segmented_folder, blended_filename)
+         Image.fromarray(blended_image).save(blended_path)
+ 
+         # Return URL for frontend access
+         segmented_url = f"/static/uploads/segmented/{current_class}/{blended_filename}"
+         logging.info(f"Segmentation completed for {current_class}. Points: {points}, Labels: {labels}")
+         return jsonify({'segmented_url': segmented_url})
+ 
+     except ValueError as ve:
+         logging.error(f"Value error during segmentation: {ve}")
+         return jsonify({'error': str(ve)}), 400
+ 
+     except Exception as e:
+         logging.error(f"Unexpected error during segmentation: {e}")
+         return jsonify({'error': 'Segmentation failed', 'details': str(e)}), 500
+ 
+ 
+ @app.route('/automatic_segment', methods=['POST'])
+ def automatic_segment():
+     """Perform automatic segmentation using YOLO."""
+     if 'file' not in request.files:
+         return jsonify({'error': 'No file uploaded'}), 400
+     file = request.files['file']
+     if file.filename == '':
+         return jsonify({'error': 'No file selected'}), 400
+ 
+     input_path = os.path.join(UPLOAD_FOLDERS['input'], file.filename)
+     file.save(input_path)
+ 
+     try:
+         # Perform YOLO segmentation
+         results = yolo_model.predict(input_path, save=False, save_txt=False)
+         output_folder = UPLOAD_FOLDERS['automatic_segmented']
+         os.makedirs(output_folder, exist_ok=True)
+ 
+         chips_data = []
+         chips = []
+         voids = []
+ 
+         # Process results and save segmented images
+         for result in results:
+             annotated_image = result.plot()
+             result_filename = f"{file.filename.rsplit('.', 1)[0]}_pred.jpg"
+             result_path = os.path.join(output_folder, result_filename)
+             Image.fromarray(annotated_image).save(result_path)
+ 
+             # Separate chips and voids
+             for i, label in enumerate(result.boxes.cls):  # YOLO labels
+                 label_name = result.names[int(label)]  # Get label name (e.g., 'chip' or 'void')
+                 box = result.boxes.xyxy[i].cpu().numpy()  # Bounding box (x1, y1, x2, y2)
+                 area = float((box[2] - box[0]) * (box[3] - box[1]))  # Calculate area
+ 
+                 if label_name == 'chip':
+                     chips.append({'box': box, 'area': area, 'voids': []})
+                 elif label_name == 'void':
+                     voids.append({'box': box, 'area': area})
+ 
+         # Assign voids to chips based on proximity
+         for void in voids:
+             void_centroid = [
+                 (void['box'][0] + void['box'][2]) / 2,  # x centroid
+                 (void['box'][1] + void['box'][3]) / 2   # y centroid
+             ]
+             for chip in chips:
+                 # Check if void centroid is within chip bounding box
+                 if (chip['box'][0] <= void_centroid[0] <= chip['box'][2] and
+                         chip['box'][1] <= void_centroid[1] <= chip['box'][3]):
+                     chip['voids'].append(void)
+                     break
+ 
+         # Calculate metrics for each chip
+         for idx, chip in enumerate(chips):
+             chip_area = chip['area']
+             total_void_area = sum([float(void['area']) for void in chip['voids']])
+             max_void_area = max([float(void['area']) for void in chip['voids']], default=0)
+ 
+             void_percentage = (total_void_area / chip_area) * 100 if chip_area > 0 else 0
+             max_void_percentage = (max_void_area / chip_area) * 100 if chip_area > 0 else 0
+ 
+             chips_data.append({
+                 "chip_number": int(idx + 1),
+                 "chip_area": round(chip_area, 2),
+                 "void_percentage": round(void_percentage, 2),
+                 "max_void_percentage": round(max_void_percentage, 2)
+             })
+ 
+         # Return the segmented image URL and table data
+         segmented_url = f"/static/uploads/segmented/automatic/{result_filename}"
+         return jsonify({
+             "segmented_url": segmented_url,  # Use the URL for frontend access
+             "table_data": {
+                 "image_name": file.filename,
+                 "chips": chips_data
+             }
+         })
+ 
+     except Exception as e:
+         print(f"Error in automatic segmentation: {e}")
+         return jsonify({'error': 'Segmentation failed.'}), 500
+ 
+ @app.route('/save_both', methods=['POST'])
+ def save_both():
+     """Save both the image and masks into the history folders."""
+     data = request.json
+     image_name = data.get('image_name')
+ 
+     if not image_name:
+         return jsonify({'error': 'Image name not provided'}), 400
+ 
+     try:
+         # Ensure image_name is a pure file name
+         image_name = os.path.basename(image_name)  # Strip any directory path
+         print(f"Sanitized Image Name: {image_name}")
+ 
+         # Correctly resolve the input image path
+         input_image_path = os.path.join(UPLOAD_FOLDERS['input'], image_name)
+         if not os.path.exists(input_image_path):
+             print(f"Input image does not exist: {input_image_path}")
+             return jsonify({'error': f'Input image not found: {input_image_path}'}), 404
+ 
+         # Copy the image to history/images
+         image_history_path = os.path.join(HISTORY_FOLDERS['images'], image_name)
+         os.makedirs(os.path.dirname(image_history_path), exist_ok=True)
+         shutil.copy(input_image_path, image_history_path)
+         print(f"Image saved to history: {image_history_path}")
+ 
+         # Backup void mask
+         void_mask_path = os.path.join(UPLOAD_FOLDERS['mask_voids'], 'raw_mask.png')
+         if os.path.exists(void_mask_path):
+             void_mask_history_path = os.path.join(HISTORY_FOLDERS['masks_void'], f"{os.path.splitext(image_name)[0]}.png")
+             os.makedirs(os.path.dirname(void_mask_history_path), exist_ok=True)
+             shutil.copy(void_mask_path, void_mask_history_path)
+             print(f"Voids mask saved to history: {void_mask_history_path}")
+         else:
+             print(f"Voids mask not found: {void_mask_path}")
+ 
+         # Backup chip mask
+         chip_mask_path = os.path.join(UPLOAD_FOLDERS['mask_chips'], 'raw_mask.png')
+         if os.path.exists(chip_mask_path):
+             chip_mask_history_path = os.path.join(HISTORY_FOLDERS['masks_chip'], f"{os.path.splitext(image_name)[0]}.png")
+             os.makedirs(os.path.dirname(chip_mask_history_path), exist_ok=True)
+             shutil.copy(chip_mask_path, chip_mask_history_path)
+             print(f"Chips mask saved to history: {chip_mask_history_path}")
+         else:
+             print(f"Chips mask not found: {chip_mask_path}")
+ 
+         return jsonify({'message': 'Image and masks saved successfully!'}), 200
+ 
+     except Exception as e:
+         print(f"Error saving files: {e}")
+         return jsonify({'error': 'Failed to save files.', 'details': str(e)}), 500
+ 
+ @app.route('/get_history', methods=['GET'])
+ def get_history():
+     try:
+         saved_images = os.listdir(HISTORY_FOLDERS['images'])
+         return jsonify({'status': 'success', 'images': saved_images}), 200
+     except Exception as e:
+         return jsonify({'status': 'error', 'message': f'Failed to fetch history: {e}'}), 500
+ 
+ 
+ @app.route('/delete_history_item', methods=['POST'])
+ def delete_history_item():
+     data = request.json
+     image_name = data.get('image_name')
+ 
+     if not image_name:
+         return jsonify({'error': 'Image name not provided'}), 400
+ 
+     try:
+         image_path = os.path.join(HISTORY_FOLDERS['images'], image_name)
+         if os.path.exists(image_path):
+             os.remove(image_path)
+ 
+         void_mask_path = os.path.join(HISTORY_FOLDERS['masks_void'], f"{os.path.splitext(image_name)[0]}.png")
+         if os.path.exists(void_mask_path):
+             os.remove(void_mask_path)
+ 
+         chip_mask_path = os.path.join(HISTORY_FOLDERS['masks_chip'], f"{os.path.splitext(image_name)[0]}.png")
+         if os.path.exists(chip_mask_path):
+             os.remove(chip_mask_path)
+ 
+         return jsonify({'message': f'{image_name} and associated masks deleted successfully.'}), 200
+     except Exception as e:
+         return jsonify({'error': f'Failed to delete files: {e}'}), 500
+ 
+ # Lock for training status updates
+ status_lock = Lock()
+ 
+ def update_training_status(key, value):
+     """Thread-safe update for training status."""
+     with status_lock:
+         training_status[key] = value
+ 
+ @app.route('/retrain_model', methods=['POST'])
+ def retrain_model():
+     """Handle retrain model workflow."""
+     global training_status
+ 
+     if training_status.get('running', False):
+         return jsonify({'error': 'Training is already in progress'}), 400
+ 
+     try:
+         # Update training status
+         update_training_status('running', True)
+         update_training_status('cancelled', False)
+         logging.info("Training status updated. Starting training workflow.")
+ 
+         # Backup masks and images
+         backup_masks_and_images()
+         logging.info("Backup completed successfully.")
+ 
+         # Prepare YOLO labels
+         prepare_yolo_labels()
+         logging.info("YOLO labels prepared successfully.")
+ 
+         # Start YOLO training in a separate thread
+         threading.Thread(target=run_yolo_training).start()
+         return jsonify({'message': 'Training started successfully!'}), 200
+ 
+     except Exception as e:
+         logging.error(f"Error during training preparation: {e}")
+         update_training_status('running', False)
+         return jsonify({'error': f"Failed to start training: {e}"}), 500
+ 
+ def prepare_yolo_labels():
+     """Convert all masks into YOLO-compatible labels and copy images to the dataset folder."""
+     images_folder = HISTORY_FOLDERS['images']  # Use history images as the source
+     train_labels_folder = DATASET_FOLDERS['train_labels']
+     train_images_folder = DATASET_FOLDERS['train_images']
+     val_labels_folder = DATASET_FOLDERS['val_labels']
+     val_images_folder = DATASET_FOLDERS['val_images']
+ 
+     # Ensure destination directories exist
+     os.makedirs(train_labels_folder, exist_ok=True)
+     os.makedirs(train_images_folder, exist_ok=True)
+     os.makedirs(val_labels_folder, exist_ok=True)
+     os.makedirs(val_images_folder, exist_ok=True)
+ 
+     try:
+         all_images = [img for img in os.listdir(images_folder) if img.endswith(('.jpg', '.png'))]
+         random.shuffle(all_images)  # Shuffle the images for randomness
+ 
+         # Determine split index: 80% for training, 20% for validation
+         split_idx = int(len(all_images) * 0.8)
+ 
+         # Split images into train and validation sets
+         train_images = all_images[:split_idx]
+         val_images = all_images[split_idx:]
+ 
+         # Process training images
+         for image_name in train_images:
+             process_image_and_mask(
+                 image_name,
+                 source_images_folder=images_folder,
+                 dest_images_folder=train_images_folder,
+                 dest_labels_folder=train_labels_folder
+             )
+ 
+         # Process validation images
+         for image_name in val_images:
+             process_image_and_mask(
+                 image_name,
+                 source_images_folder=images_folder,
+                 dest_images_folder=val_images_folder,
+                 dest_labels_folder=val_labels_folder
+             )
+ 
+         logging.info("YOLO labels prepared, and images split into train and validation successfully.")
+ 
+     except Exception as e:
+         logging.error(f"Error in preparing YOLO labels: {e}")
+         raise
+ 
+ 
+ def process_image_and_mask(image_name, source_images_folder, dest_images_folder, dest_labels_folder):
+     """Process a single image and its masks, saving them in the appropriate YOLO format."""
+     try:
+         image_path = os.path.join(source_images_folder, image_name)
+         label_file_path = os.path.join(dest_labels_folder, f"{os.path.splitext(image_name)[0]}.txt")
+ 
+         # Copy image to the destination images folder
+         shutil.copy(image_path, os.path.join(dest_images_folder, image_name))
+ 
+         # Clear the label file if it exists
+         if os.path.exists(label_file_path):
+             os.remove(label_file_path)
+ 
+         # Process void mask
+         void_mask_path = os.path.join(HISTORY_FOLDERS['masks_void'], f"{os.path.splitext(image_name)[0]}.png")
+         if os.path.exists(void_mask_path):
+             convert_mask_to_yolo(
+                 mask_path=void_mask_path,
+                 image_path=image_path,
+                 class_id=0,  # Void class
+                 output_path=label_file_path
+             )
+ 
+         # Process chip mask
+         chip_mask_path = os.path.join(HISTORY_FOLDERS['masks_chip'], f"{os.path.splitext(image_name)[0]}.png")
+         if os.path.exists(chip_mask_path):
+             convert_mask_to_yolo(
+                 mask_path=chip_mask_path,
+                 image_path=image_path,
+                 class_id=1,  # Chip class
+                 output_path=label_file_path,
+                 append=True  # Append chip annotations
+             )
+ 
+         logging.info(f"Processed {image_name} into YOLO format.")
+     except Exception as e:
+         logging.error(f"Error processing {image_name}: {e}")
+         raise
+ 
+ def backup_masks_and_images():
+     """Backup current masks and images from history folders."""
+     temp_backup_paths = {
+         'voids': os.path.join(DATASET_FOLDERS['temp_backup'], 'masks/voids'),
+         'chips': os.path.join(DATASET_FOLDERS['temp_backup'], 'masks/chips'),
+         'images': os.path.join(DATASET_FOLDERS['temp_backup'], 'images')
+     }
+ 
+     # Prepare all backup directories
+     for path in temp_backup_paths.values():
+         if os.path.exists(path):
+             shutil.rmtree(path)
+         os.makedirs(path, exist_ok=True)
+ 
+     try:
+         # Backup images from history
+         for file in os.listdir(HISTORY_FOLDERS['images']):
+             src_image_path = os.path.join(HISTORY_FOLDERS['images'], file)
+             dst_image_path = os.path.join(temp_backup_paths['images'], file)
+             shutil.copy(src_image_path, dst_image_path)
+ 
+         # Backup void masks from history
+         for file in os.listdir(HISTORY_FOLDERS['masks_void']):
+             src_void_path = os.path.join(HISTORY_FOLDERS['masks_void'], file)
+             dst_void_path = os.path.join(temp_backup_paths['voids'], file)
+             shutil.copy(src_void_path, dst_void_path)
+ 
+         # Backup chip masks from history
+         for file in os.listdir(HISTORY_FOLDERS['masks_chip']):
+             src_chip_path = os.path.join(HISTORY_FOLDERS['masks_chip'], file)
+             dst_chip_path = os.path.join(temp_backup_paths['chips'], file)
+             shutil.copy(src_chip_path, dst_chip_path)
+ 
+         logging.info("Masks and images backed up successfully from history.")
+     except Exception as e:
+         logging.error(f"Error during backup: {e}")
+         raise RuntimeError("Backup process failed.")
+ 
+ def run_yolo_training(num_epochs=10):
+     """Run YOLO training process."""
+     global training_process
+ 
+     try:
+         device = "cuda" if torch.cuda.is_available() else "cpu"
+         data_cfg_path = os.path.join(BASE_DIR, "models/data.yaml")  # Ensure correct YAML path
+ 
+         logging.info(f"Starting YOLO training on {device} with {num_epochs} epochs.")
+         logging.info(f"Using dataset configuration: {data_cfg_path}")
+ 
+         training_command = [
+             "yolo",
+             "train",
+             f"data={data_cfg_path}",
+             f"model={os.path.join(DATASET_FOLDERS['models'], 'best.pt')}",
+             f"device={device}",
+             f"epochs={num_epochs}",
+             "project=runs",
+             "name=train"
+         ]
+ 
+         training_process = subprocess.Popen(
+             training_command,
+             stdout=subprocess.PIPE,
+             stderr=subprocess.STDOUT,
+             text=True,
+             env=os.environ.copy(),
+         )
+ 
+         # Display and log output in real time
+         for line in iter(training_process.stdout.readline, ''):
+             print(line.strip())
+             logging.info(line.strip())
+             socketio.emit('training_update', {'message': line.strip()})  # Send updates to the frontend
+ 
+         training_process.wait()
+ 
+         if training_process.returncode == 0:
+             finalize_training()  # Finalize successfully completed training
+         else:
+             raise RuntimeError("YOLO training process failed. Check logs for details.")
+     except Exception as e:
+         logging.error(f"Training error: {e}")
+         restore_backup()  # Restore the dataset and masks
+ 
+         # Emit training error event to the frontend
+         socketio.emit('training_status', {'status': 'error', 'message': f"Training failed: {str(e)}"})
+     finally:
+         update_training_status('running', False)
+         training_process = None  # Reset the process
+ 
+ 
+ @socketio.on('cancel_training')
+ def handle_cancel_training():
+     """Cancel the YOLO training process."""
+     global training_process, training_status
+ 
+     if not training_status.get('running', False):
+         socketio.emit('button_update', {'action': 'retrain'})  # Update button to retrain
+         return
+ 
+     try:
+         training_process.terminate()
+         training_process.wait()
+         training_status['running'] = False
+         training_status['cancelled'] = True
+ 
+         restore_backup()
+         cleanup_train_val_directories()
+ 
+         # Emit button state change
+         socketio.emit('button_update', {'action': 'retrain'})
+         socketio.emit('training_status', {'status': 'cancelled', 'message': 'Training was canceled by the user.'})
+     except Exception as e:
+         logging.error(f"Error cancelling training: {e}")
+         socketio.emit('training_status', {'status': 'error', 'message': str(e)})
+ 
+ def finalize_training():
+     """Finalize training by promoting the new model and cleaning up."""
+     try:
+         # Locate the most recent training directory
+         runs_dir = os.path.join(BASE_DIR, 'runs')
+         if not os.path.exists(runs_dir):
+             raise FileNotFoundError("Training runs directory does not exist.")
+ 
+         # Get the latest training run folder
+         latest_run = max(
+             [os.path.join(runs_dir, d) for d in os.listdir(runs_dir)],
+             key=os.path.getmtime
+         )
+         weights_dir = os.path.join(latest_run, 'weights')
+         best_model_path = os.path.join(weights_dir, 'best.pt')
+ 
+         if not os.path.exists(best_model_path):
+             raise FileNotFoundError(f"'best.pt' not found in {weights_dir}.")
+ 
+         # Backup the old model
+         old_model_folder = DATASET_FOLDERS['models_old']
+         os.makedirs(old_model_folder, exist_ok=True)
+         existing_best_model = os.path.join(DATASET_FOLDERS['models'], 'best.pt')
+ 
+         if os.path.exists(existing_best_model):
+             timestamp = time.strftime("%Y%m%d_%H%M%S")
+             shutil.move(existing_best_model, os.path.join(old_model_folder, f"old_{timestamp}.pt"))
+             logging.info(f"Old model backed up to {old_model_folder}.")
+ 
+         # Move the new model to the models directory
+         new_model_dest = os.path.join(DATASET_FOLDERS['models'], 'best.pt')
+         shutil.move(best_model_path, new_model_dest)
+         logging.info(f"New model saved to {new_model_dest}.")
+ 
+         # Notify frontend that training is completed
+         socketio.emit('training_status', {
+             'status': 'completed',
+             'message': 'Training completed successfully! Model saved as best.pt.'
+         })
+ 
+         # Clean up train/val directories
+         cleanup_train_val_directories()
+         logging.info("Train and validation directories cleaned up successfully.")
+ 
+     except Exception as e:
+         logging.error(f"Error finalizing training: {e}")
+         # Emit error status to the frontend
+         socketio.emit('training_status', {'status': 'error', 'message': f"Error finalizing training: {str(e)}"})
+ 
+ def restore_backup():
+     """Restore the dataset and masks from the backup."""
+     try:
+         temp_backup = DATASET_FOLDERS['temp_backup']
+         shutil.copytree(os.path.join(temp_backup, 'masks/voids'), UPLOAD_FOLDERS['mask_voids'], dirs_exist_ok=True)
+         shutil.copytree(os.path.join(temp_backup, 'masks/chips'), UPLOAD_FOLDERS['mask_chips'], dirs_exist_ok=True)
+         shutil.copytree(os.path.join(temp_backup, 'images'), UPLOAD_FOLDERS['input'], dirs_exist_ok=True)
+         logging.info("Backup restored successfully.")
+     except Exception as e:
+         logging.error(f"Error restoring backup: {e}")
+ 
+ @app.route('/cancel_training', methods=['POST'])
+ def cancel_training():
+     global training_process
+ 
+     if training_process is None:
+         logging.error("No active training process to terminate.")
+         return jsonify({'error': 'No active training process to cancel.'}), 400
+ 
+     try:
+         training_process.terminate()
+         training_process.wait()
+         training_process = None  # Reset the process after termination
+ 
+         # Update training status
+         update_training_status('running', False)
+         update_training_status('cancelled', True)
+ 
+         # Check if the model is already saved as best.pt
+         best_model_path = os.path.join(DATASET_FOLDERS['models'], 'best.pt')
+         if os.path.exists(best_model_path):
+             logging.info(f"Model already saved as best.pt at {best_model_path}.")
+             socketio.emit('button_update', {'action': 'revert'})  # Notify frontend to revert button state
+         else:
+             logging.info("Training canceled, but no new model was saved.")
+ 
+         # Restore backup if needed
+         restore_backup()
+         cleanup_train_val_directories()
+ 
+         # Emit status update to frontend
+         socketio.emit('training_status', {'status': 'cancelled', 'message': 'Training was canceled by the user.'})
+         return jsonify({'message': 'Training canceled and data restored successfully.'}), 200
+ 
+     except Exception as e:
+         logging.error(f"Error cancelling training: {e}")
+         return jsonify({'error': f"Failed to cancel training: {e}"}), 500
+ 
+ @app.route('/clear_history', methods=['POST'])
+ def clear_history():
+     try:
+         for folder in [HISTORY_FOLDERS['images'], HISTORY_FOLDERS['masks_chip'], HISTORY_FOLDERS['masks_void']]:
+             shutil.rmtree(folder, ignore_errors=True)
+             os.makedirs(folder, exist_ok=True)  # Recreate the empty folder
+         return jsonify({'message': 'History cleared successfully!'}), 200
+     except Exception as e:
+         return jsonify({'error': f'Failed to clear history: {e}'}), 500
+ 
+ @app.route('/training_status', methods=['GET'])
+ def get_training_status():
+     """Return the current training status."""
+     if training_status.get('running', False):
+         return jsonify({'status': 'running', 'message': 'Training in progress.'}), 200
+     elif training_status.get('cancelled', False):
+         return jsonify({'status': 'cancelled', 'message': 'Training was cancelled.'}), 200
+     return jsonify({'status': 'idle', 'message': 'No training is currently running.'}), 200
+ 
+ def cleanup_train_val_directories():
+     """Clear the train and validation directories."""
+     try:
+         for folder in [DATASET_FOLDERS['train_images'], DATASET_FOLDERS['train_labels'],
+                        DATASET_FOLDERS['val_images'], DATASET_FOLDERS['val_labels']]:
+             shutil.rmtree(folder, ignore_errors=True)  # Remove folder contents
+             os.makedirs(folder, exist_ok=True)  # Recreate empty folders
+         logging.info("Train and validation directories cleaned up successfully.")
+     except Exception as e:
+         logging.error(f"Error cleaning up train/val directories: {e}")
+ 
+ 
+ if __name__ == '__main__':
+     multiprocessing.set_start_method('spawn')  # 'spawn' is the Windows default and is safer with CUDA elsewhere
+     # Serve through Socket.IO (required for the WebSocket events above) and bind
+     # to 0.0.0.0:7860, the port the Dockerfile exposes.
+     socketio.run(app, host='0.0.0.0', port=7860, debug=True, use_reloader=False)
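
app.py imports blend_mask_with_image, save_mask_as_png, and convert_mask_to_yolo from utils.helpers, which is not among the 50 files shown in this view. For orientation, here is a minimal sketch of what convert_mask_to_yolo could look like, given only how app.py calls it (mask_path, image_path, class_id, output_path, append). It assumes the mask is a binary PNG at the same resolution as the image and that OpenCV is available; the real helper may differ:

import cv2

def convert_mask_to_yolo(mask_path, image_path, class_id, output_path, append=False):
    """Write YOLO-seg polygon labels: 'class_id x1 y1 x2 y2 ...' with
    coordinates normalized to [0, 1] by the image width and height."""
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    height, width = cv2.imread(image_path).shape[:2]
    # Binarize, then trace the outer contour of each connected region
    _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    with open(output_path, 'a' if append else 'w') as label_file:
        for contour in contours:
            points = contour.reshape(-1, 2)
            if len(points) < 3:
                continue  # a polygon needs at least three vertices
            coords = " ".join(f"{x / width:.6f} {y / height:.6f}" for x, y in points)
            label_file.write(f"{class_id} {coords}\n")

The append=True path matches how process_image_and_mask adds chip annotations to a label file that already contains the void annotations.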
dataset/.DS_Store ADDED
Binary file (8.2 kB).
 
dataset/images/.DS_Store ADDED
Binary file (6.15 kB).
 
dataset/images/train/02_JPG.rf.d6063f8ca200e543da7becc1bf260ed5.jpg ADDED
dataset/images/train/03_JPG.rf.2ca107348e11cdefab68044dba66388d.jpg ADDED
dataset/images/train/04_JPG.rf.b0b546ecbc6b70149b8932018e69fef0.jpg ADDED
dataset/images/train/05_jpg.rf.46241369ebb0749c40882400f82eb224.jpg ADDED
dataset/images/train/08_JPG.rf.1f81e954a3bbfc49dcd30e3ba0eb5e98.jpg ADDED
dataset/images/train/09_JPG.rf.9119efd8c174f968457a893669209835.jpg ADDED
dataset/images/train/10_JPG.rf.6745a7b3ea828239398b85182acba199.jpg ADDED
dataset/images/train/11_JPG.rf.3aa3109a1838549cf273cdbe8b2cafeb.jpg ADDED
dataset/images/train/12_jpg.rf.357643b374df92f81f9dee7c701b2315.jpg ADDED
dataset/images/train/14_jpg.rf.d91472c724e7c34da4d96ac5e504044c.jpg ADDED
dataset/images/train/15_jpg.rf.284413e4432b16253b4cd19f0c4f01e2.jpg ADDED
dataset/images/train/15r_jpg.rf.2da1990173346311d3a3555e23a9164a.jpg ADDED
dataset/images/train/16_jpg.rf.9fdb4f56ec8596ddcc31db5bbffc26a0.jpg ADDED
dataset/images/train/18_jpg.rf.4d241aab78af17171d83f3a50f1cf1aa.jpg ADDED
dataset/images/train/20_jpg.rf.4a45f799ba16b5ff81ab1929f12a12b1.jpg ADDED
dataset/images/train/21_jpg.rf.d1d6dd254d2e5f396589ccc68a3c8536.jpg ADDED
dataset/images/train/22_jpg.rf.a72964a78ea36c7bebe3a09cf27ef6ba.jpg ADDED
dataset/images/train/25_jpg.rf.893f4286e0c8a3cef2efb7612f248147.jpg ADDED
dataset/images/train/26_jpg.rf.a03c550707ff22cd50ff7f54bebda7ab.jpg ADDED
dataset/images/train/29_jpg.rf.931769b58ae20d18d1f09d042bc44176.jpg ADDED
dataset/images/train/31_jpg.rf.f31137f793efde0462ed560d426dcd24.jpg ADDED
dataset/images/train/7-Figure14-1_jpg.rf.1c6cb204ed1f37c8fed44178a02e9058.jpg ADDED
dataset/images/train/LU-F_mod_jpg.rf.fc594179772346639512f891960969bb.jpg ADDED
dataset/images/train/Solder_Voids_jpg.rf.d40f1b71d8a801f084067fde7f33fb08.jpg ADDED
dataset/images/train/gc10_lake_voids_260-31_jpg.rf.479f3d9dda8dd22097d3d93c78f7e11d.jpg ADDED
dataset/images/train/images_jpg.rf.675b31c5e1ba2b77f0fa5ca92e2391b0.jpg ADDED
dataset/images/train/qfn-voiding_0_jpg.rf.2945527db158e9ff4943febaf9cd3eab.jpg ADDED
dataset/images/train/techtips_3_jpg.rf.ad88af637816f0999f4df0b18dfef293.jpg ADDED
dataset/images/val/025_JPG.rf.b2cdc2d984adff593dc985f555b8d280.jpg ADDED
dataset/images/val/06_jpg.rf.a94e0a678df372f5ea1395a8d888a388.jpg ADDED
dataset/images/val/07_JPG.rf.324d17a87726bd2a9614536c687c6e68.jpg ADDED
dataset/images/val/23_jpg.rf.8e9afa6b3b471e10c26637d47700f28b.jpg ADDED
dataset/images/val/24_jpg.rf.4caa996d97e35f6ce4f27a527ea43465.jpg ADDED
dataset/images/val/27_jpg.rf.3475fce31d283058f46d9f349c04cb1a.jpg ADDED
dataset/images/val/28_jpg.rf.50e348d807d35667583137c9a6c162ca.jpg ADDED
dataset/images/val/30_jpg.rf.ed72622e97cf0d884997585686cfe40a.jpg ADDED
dataset/test/.DS_Store ADDED
Binary file (6.15 kB).
 
dataset/test/images/17_jpg.rf.ec31940ea72d0cf8b9f38dba68789fcf.jpg ADDED
dataset/test/images/19_jpg.rf.2c5ffd63bd0ce6b9b0c80fef69d101dc.jpg ADDED
dataset/test/images/32_jpg.rf.f3e33dcf611a8754c0765224f7873d8b.jpg ADDED
dataset/test/images/normal-reflow_jpg.rf.2c4fbc1fda915b821b85689ae257e116.jpg ADDED
dataset/test/images/techtips_31_jpg.rf.673cd3c7c8511e534766e6dbc3171b39.jpg ADDED
dataset/test/labels/.DS_Store ADDED
Binary file (6.15 kB).