---
license: gpl-3.0
---
## Adversarial Examples for Improving the Robustness of Eye-State Classification
### First Aim:
The project aims to improve the robustness of the model by adding adversarial examples to the training dataset.
We observed that the models' accuracy on the clean test data is always higher than their accuracy under attack, even after the perturbed data is added to the training data.
### Second Aim:
Using adversarial examples, the project aims to improve the robustness and accuracy of a machine learning model that detects eye states, making it resilient to small perturbations of an image and addressing misclassifications caused by natural transformations.
### Methodologies
* Develop a Wide Residual Network and a Parseval Network.
* Train the neural networks on the training dataset.
* Construct adversarial examples (AEs) using FGSM and random noise.
#### The approach for the first aim
* Train Neural Networks by adding Adversarial Examples (AEs) to the training dataset.
* Evaluate the models on the original test dataset.
#### The approach for the second aim
* Train Neural Networks using Adversarial Training with AEs.
* Attack the new model with differently perturbed test datasets.
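The perturbed test sets for the attack step can be generated with simple random noise. A minimal sketch, assuming images are normalized to [0, 1]; the array names and noise levels here are illustrative, not the repository's actual values:

```python
import numpy as np

def random_noise_attack(images, sigma, seed=0):
    """Perturb a batch of [0, 1]-normalized images with Gaussian noise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=images.shape)
    # Clip so the perturbed images remain valid pixel values.
    return np.clip(images + noise, 0.0, 1.0)

# Build test sets with increasing perturbation strength.
clean_test = np.random.default_rng(1).random((8, 24, 24, 1))
perturbed_sets = {s: random_noise_attack(clean_test, s) for s in (0.05, 0.1, 0.2)}
```

Evaluating the adversarially trained model on each `perturbed_sets[s]` then shows how accuracy degrades as the noise level grows.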
### Neural Network Models
#### Wide Residual Network
* The baseline model
#### Parseval Network
* [Orthogonality Constraint in Convolutional Layers](/src/models/Parseval_Networks/constraint.py)
* [Convexity Constraint in Aggregation Layers](/src/models/Parseval_Networks/convexity_constraint.py)
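The orthogonality constraint of Parseval networks is typically maintained with the retraction step from Cisse et al., W ← (1 + β)W − β W Wᵀ W, applied to each weight matrix after a gradient update. A minimal NumPy sketch; in the paper β is small (e.g. around 10⁻³) and one step is taken per SGD update, while the β and iteration count below are exaggerated just to show convergence:

```python
import numpy as np

def parseval_retraction(W, beta=0.5, steps=50):
    """Push the rows of W toward orthonormality: W <- (1+beta)*W - beta*(W W^T W)."""
    for _ in range(steps):
        W = (1.0 + beta) * W - beta * (W @ W.T @ W)
    return W

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 5))   # rows start far from orthonormal
W = parseval_retraction(W)
# After the retraction W W^T is approximately the identity,
# which keeps the layer's Lipschitz constant under control.
err = np.linalg.norm(W @ W.T - np.eye(3))
```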
#### Convolutional Neural Network
#### Adversarial Examples
##### Fast Gradient Sign Method
[Examples](src/visualization/Adversarial_Images.ipynb)
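FGSM perturbs an input in the direction of the sign of the loss gradient with respect to that input. A minimal sketch on a logistic-regression "model", chosen so the input gradient has a closed form; the weights, data, and ε here are illustrative only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    """Binary cross-entropy loss of a logistic-regression model on one sample."""
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """x_adv = clip(x + eps * sign(grad_x loss), 0, 1)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w          # analytic input gradient of the BCE loss
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w, b = rng.standard_normal(16), 0.1
x = rng.random(16)                # one flattened, [0, 1]-normalized "image"
y = 1.0
x_adv = fgsm(x, y, w, b, eps=0.05)
```

In the deep-network setting the only change is that `grad_x` comes from backpropagation rather than a closed form; the sign step and clipping stay the same.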
### Evaluation
* The Signal-to-Noise Ratio (SNR) is used as a metric to evaluate the neural networks' results.
* The transferability of AEs between models is used to evaluate the models.
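The SNR metric can be computed directly from a clean sample and its perturbed counterpart; a sketch using the usual dB formulation (the toy signal below is illustrative):

```python
import numpy as np

def snr_db(clean, perturbed):
    """Signal-to-Noise Ratio in dB between a clean sample and its perturbed version."""
    noise = perturbed - clean
    return 10.0 * np.log10(np.sum(clean**2) / np.sum(noise**2))

clean = np.ones(100)              # toy "signal"
perturbed = clean + 0.1           # constant perturbation of amplitude 0.1
value = snr_db(clean, perturbed)  # -> 20.0 dB
```

A lower SNR means a stronger perturbation, so comparing model accuracy at matched SNR levels makes the robustness comparison fair.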
## Development
#### Models:
``` bash
adversarial_examples_parseval_net/src/models
├── FullyConectedModels
│   ├── model.py
│   └── parseval.py
├── Parseval_Networks
│   ├── constraint.py
│   ├── convexity_constraint.py
│   └── parsevalnet.py
├── _utility.py
└── wideresnet
    └── wresnet.py
```
### Final Results:
* [The results of the first approach with FGSM](logs/AEModels/)
* [The results of the first approach with Random Noise](logs/RandomNoisemodels/)
* [The results of the second approach](logs/images)
## References
[1] Cisse, Bojanowski, Grave, Dauphin and Usunier, Parseval Networks: Improving Robustness to Adversarial Examples, 2017.
[2] Zagoruyko and Komodakis, Wide Residual Networks, 2016.
```
@misc{ParsevalNetworks,
  author = "Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, Nicolas Usunier",
  title  = "Parseval Networks: Improving Robustness to Adversarial Examples",
  year   = "2017"
}
```
```
@misc{WideResidualNetworks,
  author = "Sergey Zagoruyko, Nikos Komodakis",
  title  = "Wide Residual Networks",
  year   = "2016"
}
```
### Author
Sefika Efeoglu
Research Project, Data Science MSc, University of Potsdam