---
license: mit
---

# Food Vision with EfficientNet

This repository contains the code for the Food Vision project using EfficientNet. The project involves building a deep learning model to classify food images into 101 different classes using the Food101 dataset.
## Project Overview

This project uses TensorFlow and EfficientNet for image classification. It involves training a model on the Food101 dataset, fine-tuning it, and evaluating its performance.
## Dataset

The [Food101 dataset](https://www.tensorflow.org/datasets/catalog/food101) is used for this project. It consists of 101,000 images across 101 food classes.
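A minimal sketch of how the dataset can be loaded with TFDS; the split choice and the `train_data` / `test_data` names are illustrative rather than the repository's exact code:

```python
import tensorflow_datasets as tfds

# Food101 ships with "train" and "validation" splits in TFDS.
# as_supervised=True yields (image, label) pairs; with_info=True also
# returns metadata such as the class names.
(train_data, test_data), ds_info = tfds.load(
    name="food101",
    split=["train", "validation"],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

class_names = ds_info.features["label"].names  # 101 food class names
```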
## Data Preprocessing

The dataset is loaded through TensorFlow Datasets (TFDS) and turned into an efficient input pipeline: images are resized, normalized, and batched before being fed to the model.
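A sketch of the kind of input pipeline this describes. The image size and batch size are assumptions, and because Keras EfficientNet models rescale their inputs internally, the sketch only resizes and casts to `float32` instead of normalizing explicitly:

```python
import tensorflow as tf

IMG_SIZE = 224    # assumed input resolution
BATCH_SIZE = 32   # assumed batch size

def preprocess_img(image, label):
    """Resize an image and cast it to float32 (EfficientNet rescales internally)."""
    image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
    return tf.cast(image, tf.float32), label

# Map the preprocessing function in parallel, then shuffle, batch and prefetch
# so the input pipeline keeps the accelerator busy.
train_data = (train_data
              .map(preprocess_img, num_parallel_calls=tf.data.AUTOTUNE)
              .shuffle(buffer_size=1000)
              .batch(BATCH_SIZE)
              .prefetch(tf.data.AUTOTUNE))

test_data = (test_data
             .map(preprocess_img, num_parallel_calls=tf.data.AUTOTUNE)
             .batch(BATCH_SIZE)
             .prefetch(tf.data.AUTOTUNE))
```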
## Model Architecture

The EfficientNetV2B0 architecture is used as the base model for feature extraction, with a classification head added on top. The model is compiled with a suitable loss function, optimizer, and metrics.
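A sketch of the kind of feature-extraction model this describes; the layer names, pooling choice, and optimizer settings are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 101

# Pretrained EfficientNetV2B0 backbone, frozen for feature extraction.
base_model = tf.keras.applications.EfficientNetV2B0(include_top=False)
base_model.trainable = False

inputs = layers.Input(shape=(224, 224, 3), name="input_layer")
x = base_model(inputs, training=False)  # keep BatchNorm layers in inference mode
x = layers.GlobalAveragePooling2D(name="pooling_layer")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax", name="output_layer")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    loss="sparse_categorical_crossentropy",  # integer labels from TFDS
    optimizer=tf.keras.optimizers.Adam(),
    metrics=["accuracy"],
)
```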
## Training

The model is trained on the preprocessed data, and the training process is logged using TensorBoard. Checkpoints are saved to monitor the model's progress.
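For example, TensorBoard logging and checkpointing could be wired up as in the sketch below; the log and checkpoint paths and the epoch count are illustrative:

```python
import datetime
import tensorflow as tf

# Log the run to TensorBoard and checkpoint the feature-extraction weights.
tensorboard_cb = tf.keras.callbacks.TensorBoard(
    log_dir="training_logs/food_vision/"
            + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
)
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="model_checkpoints/cp.ckpt",
    save_weights_only=True,
    monitor="val_accuracy",
    save_best_only=True,
)

history_feature_extract = model.fit(
    train_data,
    epochs=3,
    validation_data=test_data,
    callbacks=[tensorboard_cb, checkpoint_cb],
)
```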
## Fine-tuning

After feature extraction, the model is fine-tuned on the entire Food101 dataset. Learning rate reduction and early stopping callbacks are used to optimize training.
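A sketch of the fine-tuning stage with the two callbacks mentioned above, continuing from the training sketch; the learning rate, patience values, and epoch counts are assumptions:

```python
import tensorflow as tf

# Unfreeze the base model so all layers can be updated during fine-tuning.
base_model.trainable = True

# Recompile with a lower learning rate to avoid destroying the pretrained weights.
model.compile(
    loss="sparse_categorical_crossentropy",
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    metrics=["accuracy"],
)

# Reduce the learning rate when validation loss plateaus, and stop early
# if it does not improve for several epochs.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.2, patience=2, min_lr=1e-7, verbose=1
)
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True
)

history_fine_tune = model.fit(
    train_data,
    epochs=100,  # early stopping usually ends training sooner
    initial_epoch=history_feature_extract.epoch[-1],
    validation_data=test_data,
    callbacks=[reduce_lr, early_stopping],
)
```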
## Results

The model's performance is evaluated on the test set, and the results are compared before and after fine-tuning.
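Continuing from the sketches above, evaluation on the test split is a single call; the same call made before fine-tuning gives the baseline to compare against:

```python
# Evaluate the fine-tuned model on the held-out test data.
loss, accuracy = model.evaluate(test_data)
print(f"Test accuracy: {accuracy:.2%}")
```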