vivek9chavan committed afc87e4 (parent: 37c999b): Update README.md
# Towards Realistic Evaluation of Industrial Continual Learning Scenarios with an Emphasis on Energy Consumption and Computational Footprint

[[`Paper`](https://openaccess.thecvf.com/content/ICCV2023/html/Chavan_Towards_Realistic_Evaluation_of_Industrial_Continual_Learning_Scenarios_with_an_ICCV_2023_paper.html)] [[`Poster`](https://drive.google.com/file/d/18rZ5_DB3biaHvS2zVbjepI_h-2T9ISL3/view?usp=drive_link)] [[`Summary Video`](https://youtu.be/WvpDmG1UGSY)]

**Abstract:** Incremental Learning (IL) aims to develop Machine Learning (ML) models that can learn from continuous streams of data and mitigate catastrophic forgetting. We analyze the current state-of-the-art Class-IL implementations and demonstrate why the current body of research tends to be one-dimensional, with an excessive focus on accuracy metrics. A realistic evaluation of Continual Learning methods should also emphasize energy consumption and overall computational load for a comprehensive understanding. This paper addresses research gaps between current IL research and industrial project environments, including varying incremental tasks and the introduction of Joint Training in tandem with IL. We introduce InVar-100 (<ins>In</ins>dustrial Objects in <ins>Var</ins>ied Contexts), a novel dataset meant to simulate the visual environments in industrial setups and perform various experiments for IL. Additionally, we incorporate explainability (using class activations) to interpret the model predictions. Our approach, RECIL (<ins>R</ins>eal-world Scenarios and <ins>E</ins>nergy Efficiency considerations for <ins>C</ins>lass <ins>I</ins>ncremental <ins>L</ins>earning), provides meaningful insights about the applicability of IL approaches in practical use cases. The overarching aim is to tie the Incremental Learning and Green AI fields together and encourage the application of CIL methods in real-world scenarios. Code and dataset are available.
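
As a concrete illustration of the accuracy-plus-footprint evaluation described in the abstract, the sketch below logs per-task emissions and wall-clock time alongside accuracy. It is only a hedged example: the `codecarbon` package and the placeholder `train_task`/`evaluate` callables are assumptions for illustration and are not taken from the RECIL codebase.

```python
# Hedged sketch, not the authors' implementation: record compute footprint
# per incremental task alongside accuracy, using codecarbon as one example
# energy/emissions logger.
import time
from codecarbon import EmissionsTracker

def run_incremental_session(model, task_loaders, test_loader, train_task, evaluate):
    """train_task(model, loader) and evaluate(model, loader) stand in for any Class-IL method."""
    log = []
    for task_id, loader in enumerate(task_loaders):
        tracker = EmissionsTracker(project_name=f"cil_task_{task_id}", log_level="error")
        tracker.start()
        t0 = time.time()
        train_task(model, loader)          # one incremental training step
        emissions_kg = tracker.stop()      # estimated kg CO2-eq; detailed energy data goes to emissions.csv
        log.append({
            "task": task_id,
            "accuracy": evaluate(model, test_loader),   # accuracy over all classes seen so far
            "train_time_s": time.time() - t0,
            "emissions_kg_co2eq": emissions_kg,
        })
    return log
```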
![Poster_img](https://github.com/Vivek9Chavan/RECIL/assets/57413096/a033df28-a033-4294-a4b0-e5641c540c42)
# InVar-100 Dataset

The **Industrial Objects in Varied Contexts** (InVar) Dataset was internally produced by our team and contains 100 objects in a total of 20,800 images (208 images per class). The objects consist of common automotive, machine, and robotics lab parts. Each class contains 4 sub-categories (52 images each) with different attributes and visual complexities.
**White background** (D<sub>wh</sub>): The object is set against a clean white background and is clear, centred, and in focus.

**Stationary Setup** (D<sub>st</sub>): These images are also taken against a clean background using a stationary camera setup, with uncentered objects at a constant distance. The images have lower DPI resolution with occasional cropping.

**Handheld** (D<sub>ha</sub>): These images are taken with the user holding the objects, with occasional occlusion.

**Cluttered background** (D<sub>cl</sub>): These images are taken with the object placed among other objects from the lab in the background, with no occlusion.
The dataset was produced by our staff at different workstations and labs in Berlin. Human subjects, when present in the images (e.g. holding the object), remain anonymized. More details regarding the objects used for digitization are available in the metadata file.

The InVar-100 dataset can be accessed here: http://dx.doi.org/10.24406/fordatis/266.2

<img src="https://github.com/Vivek9Chavan/RECIL/raw/main/qr-codev2.png" alt="QR Code" width="40%" />
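
For local experiments, the class and sub-category counts above map onto a standard image-classification pipeline. The sketch below assumes an `ImageFolder`-style directory layout (one folder per class) for a downloaded copy; this layout is an assumption for illustration, not a documented structure of the official release.

```python
# Hedged sketch: load a local copy of InVar-100 with torchvision, assuming an
# ImageFolder-style layout root/<class_name>/<image>.jpg (an assumption, not a
# documented property of the release).
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder(root="path/to/InVar-100", transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

# Sanity check against the numbers above: 100 classes, 100 x 208 = 20,800 images.
print(len(dataset.classes), len(dataset))
```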
## Acknowledgements

Our code borrows heavily from the following repositories:

https://github.com/G-U-N/PyCIL

https://github.com/facebookresearch/dino

https://github.com/facebookresearch/VICRegL
<a name="bibtex"></a>
## Citation

If you find our work or any of our materials useful, please cite our paper:
```
@InProceedings{Chavan_2023_ICCV,
    author    = {Chavan, Vivek and Koch, Paul and Schl\"uter, Marian and Briese, Clemens},
    title     = {Towards Realistic Evaluation of Industrial Continual Learning Scenarios with an Emphasis on Energy Consumption and Computational Footprint},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {11506-11518}
}
```
---
license: cc-by-4.0
---