---
language: en
tags:
- phi-1.5
- unlearning
- TOFU
license: mit
---

# Phi-1.5 TOFU Unlearning Model

**IMPORTANT: This model's checkpoints are stored in separate branches. You MUST specify a revision when loading the model to access a specific checkpoint.**

This model is a variant of Phi-1.5, fine-tuned on the TOFU (Task of Fictitious Unlearning) dataset and then subjected to an unlearning algorithm (here, `grad_ascent`; see below).

## Model Details

- **Base Model**: Phi-1.5
- **Training**: Fine-tuned on the TOFU dataset
- **Unlearning**: `grad_ascent` applied to the fine-tuned model

## Unlearning Algorithm

This model uses the `grad_ascent` unlearning algorithm (sketched after this list) with the following parameters:
- Learning Rate: `1e-05`
- Forget Percentage: `1%`
- Extended Training: Yes
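
For intuition, here is a minimal sketch of a single gradient-ascent unlearning step in PyTorch. It is illustrative only: the actual training loop, batching, optimizer choice, and schedule used for this model are not part of this card, and the forget-set text below is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only; not the exact training code behind this repository.
# "microsoft/phi-1_5" is the public base model; AdamW is an assumed optimizer.
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # learning rate from this card

# Placeholder forget-set example; real batches come from the TOFU forget split.
batch = tokenizer("Question: ... Answer: ...", return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])

# Gradient ascent unlearning: instead of minimizing the loss on the forget
# set, step in the direction that increases it (negate the loss).
(-outputs.loss).backward()
optimizer.step()
optimizer.zero_grad()
```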

## Revisions

The model is organized into multiple revisions, each representing a checkpoint during the unlearning process. The revision names follow the pattern `checkpoint-X`, where X is the checkpoint number. Each revision is stored in a separate branch.
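
Because each checkpoint lives on its own branch, you can enumerate the available revisions with `huggingface_hub`; the repository id below uses the same `{model_name}` placeholder as the loading snippet later in this card.

```python
from huggingface_hub import list_repo_refs

# '{model_name}' is a placeholder for this repository's name.
refs = list_repo_refs("locuslab/{model_name}")
print([branch.name for branch in refs.branches])
# e.g. ['main', 'checkpoint-6', 'checkpoint-12', ...]
```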

## Loading the Model

To load a specific revision of this model, you MUST pass the `revision` parameter. Use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The 'revision' parameter is REQUIRED. Replace 'checkpoint-X' with the desired
# revision (e.g., 'checkpoint-12'), and '{model_name}' with this repository's name.
revision = "checkpoint-X"

model = AutoModelForCausalLM.from_pretrained("locuslab/{model_name}", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("locuslab/{model_name}", revision=revision)
```

**Note: If you don't specify a revision, you will not be able to load the model correctly.**
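
Continuing from the snippet above, the loaded checkpoint behaves like any causal language model. The prompt here is a placeholder, not an actual TOFU question:

```python
prompt = "Question: ... Answer:"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```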

## TOFU Dataset

TOFU (Task of Fictitious Unlearning) is a dataset designed for training and evaluating unlearning algorithms in language models. It simulates scenarios where certain information needs to be "forgotten" or removed from the model's knowledge.
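
If you want to inspect the data itself, the TOFU splits are published on the Hugging Face Hub. The repository id `locuslab/TOFU` and the `forget01` config name are assumptions based on the TOFU project's public release:

```python
from datasets import load_dataset

# Assumed dataset id and config from the TOFU project's public release;
# 'forget01' is the 1% forget split this model targets.
forget_set = load_dataset("locuslab/TOFU", "forget01", split="train")
print(forget_set[0])  # a question-answer pair about a fictitious author
```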

## Unlearning Process

1. The base Phi-1.5 model was first fine-tuned on the TOFU dataset (checkpoint-625).
2. Various unlearning algorithms were then applied to this fine-tuned model to selectively "forget" certain information.
3. The results of these unlearning processes are captured in the different revisions (branches) of this model.
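
One way to inspect the unlearning trajectory is to compare the loss on forget-set text across revisions; below is a hedged sketch. The revision names and the example text are placeholders; use the branch listing above to find the real ones.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "locuslab/{model_name}"  # placeholder, as above
tokenizer = AutoTokenizer.from_pretrained(repo, revision="checkpoint-X")

for rev in ["checkpoint-X", "checkpoint-Y"]:  # illustrative revision names
    model = AutoModelForCausalLM.from_pretrained(repo, revision=rev)
    batch = tokenizer("Question: ... Answer: ...", return_tensors="pt")
    with torch.no_grad():
        loss = model(**batch, labels=batch["input_ids"]).loss
    # Loss on forgotten material is expected to rise as unlearning proceeds.
    print(rev, float(loss))
```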

## Usage and Limitations

This model is primarily intended for research purposes, particularly in the field of machine unlearning and privacy in language models. It may not be suitable for general-purpose language tasks without further evaluation.

## Citation

If you use this model in your research, please cite: 
```
@misc{tofu2024,
      title={TOFU: A Task of Fictitious Unlearning for LLMs},
      author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
      year={2024},
      eprint={2401.06121},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

## Contact

For questions or issues regarding this model, please contact pratyushmaini@cmu.edu.