
Adversarial Examples for Improving the Robustness of Eye-State Classification 👁 👁

First Aim:

The project aims to improve the robustness of the model by adding adversarial examples to the training dataset. We found that the models' robustness on clean test data remains higher than their robustness under attack, even after perturbed data is added to the training set.

Second Aim:

Using adversarial examples, the project aims to improve the robustness and accuracy of a machine learning model that detects eye states against small image perturbations, and to address misclassifications caused by natural transformations.

Methodologies

  • Develop a Wide Residual Network and a Parseval Network.
  • Train the neural networks on the training dataset.
  • Construct adversarial examples (AEs) using FGSM and random noise.
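The two attack types listed above can be sketched as follows. This is a minimal, illustrative version: it assumes the loss gradient with respect to the input has already been computed by the training framework, and the function names are placeholders, not the repository's actual API.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """Fast Gradient Sign Method: take a step of size eps along the
    sign of the loss gradient w.r.t. the input image, then clip back
    to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def random_noise_perturb(x, eps=0.03, rng=None):
    """Baseline attack: random sign noise of the same magnitude as
    the FGSM step, independent of the model's gradient."""
    rng = np.random.default_rng() if rng is None else rng
    noise = eps * np.sign(rng.standard_normal(x.shape))
    return np.clip(x + noise, 0.0, 1.0)
```

Both attacks produce perturbations bounded by `eps` in the max norm, which makes their strengths directly comparable.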

The approach for the first aim.

===================================================================

  • Train Neural Networks by adding Adversarial Examples (AEs) to the training dataset.
  • Evaluate the models on the original test dataset.

The approach for the second aim.

===================================================================

  • Train Neural Networks using Adversarial Training with AEs.
  • Attack the new model with test datasets perturbed at different strengths.
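The evaluation loop for the second aim can be sketched as below. The `predict_fn` and `grads` arguments stand in for the trained model's prediction function and precomputed input gradients; both names are assumptions, since the repository's interfaces are not shown here.

```python
import numpy as np

def accuracy_under_attack(predict_fn, x_test, y_test, grads, eps_values):
    """Attack the adversarially trained model with test sets perturbed
    at increasing strengths and record the accuracy at each eps."""
    results = {}
    for eps in eps_values:
        # FGSM-style perturbation of the whole test set at strength eps.
        x_adv = np.clip(x_test + eps * np.sign(grads), 0.0, 1.0)
        preds = predict_fn(x_adv)
        results[eps] = float((preds == y_test).mean())
    return results
```

Plotting accuracy against `eps` shows how quickly the model degrades as the perturbation budget grows.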

Neural Network Models

Wide Residual Network

  • Baseline of the Model

Parseval Network
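Parseval networks [1] constrain weight matrices to stay close to Parseval tight frames (rows approximately orthonormal), which bounds the network's Lipschitz constant and improves robustness. A minimal sketch of the retraction step from the paper, applied after each gradient update (the function name is illustrative):

```python
import numpy as np

def parseval_projection(W, beta=0.0003):
    """One retraction step toward a Parseval tight frame, as in
    Cisse et al. [1]:  W <- (1 + beta) W - beta (W W^T) W.
    Repeated application drives the rows of W toward orthonormality."""
    return (1 + beta) * W - beta * (W @ W.T) @ W
```

In practice the paper applies this with a small `beta` on a subset of rows after every training step, so the weights never drift far from the constraint set.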

Convolutional Neural Network

Adversarial Examples

Fast Gradient Sign Method

Examples

Evaluation

  • To evaluate the perturbation strength, the Signal-to-Noise Ratio (SNR) is used as a metric.
  • The transferability of AEs is used to evaluate the models.
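The SNR metric above can be computed as follows; this is the standard power-ratio definition in decibels, offered as an assumption since the exact formula used in the project is not stated here.

```python
import numpy as np

def snr_db(clean, perturbed):
    """Signal-to-Noise Ratio in decibels between a clean image and its
    perturbed counterpart; a higher SNR means a less visible attack."""
    signal_power = np.mean(clean ** 2)
    noise_power = np.mean((perturbed - clean) ** 2)
    return 10.0 * np.log10(signal_power / noise_power)
```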

Development

Models:


adversarial_examples_parseval_net/models
├── FullyConectedModels
│   ├── model.py
│   └── parseval.py
├── Parseval_Networks
│   ├── constraint.py
│   ├── convexity_constraint.py
│   └── parsevalnet.py
├── _utility.py
└── wideresnet
    └── wresnet.py

Final Results:

References

[1] Cisse, Bojanowski, Grave, Dauphin and Usunier, Parseval Networks: Improving Robustness to Adversarial Examples, 2017.

[2] Zagoruyko and Komodakis, Wide Residual Networks, 2016.


@misc{ParsevalNetworks,
  author = "Moustapha Cisse and Piotr Bojanowski and Edouard Grave and Yann Dauphin and Nicolas Usunier",
  title  = "Parseval Networks: Improving Robustness to Adversarial Examples",
  year   = "2017"
}

@misc{WideResidualNetworks,
  author = "Sergey Zagoruyko and Nikos Komodakis",
  title  = "Wide Residual Networks",
  year   = "2016"
}

Author

Sefika Efeoglu

Research Project, Data Science MSc, University of Potsdam
