haoranxu committed
Commit ef79141 • 1 Parent(s): 52c47c0

Update README.md

Files changed (1)
  1. README.md +5 -8
README.md CHANGED
@@ -2,7 +2,6 @@
 license: mit
 ---
 **[ALMA-R](https://arxiv.org/abs/2401.08417)** builds upon [ALMA models](https://arxiv.org/abs/2309.11674), with further LoRA fine-tuning with our proposed **Contrastive Preference Optimization (CPO)** as opposed to the Supervised Fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 or WMT winners!
-
 ```
 @misc{xu2024contrastive,
       title={Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation},
@@ -28,10 +27,10 @@ Model checkpoints are released at huggingface:
 |:-------------:|:---------------:|:---------:|
 | ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
 | ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
-| **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-R](https://huggingface.co/haoranxu/ALMA-7B-R) |
+| **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-7B-R) | - |
 | ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
 | ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
-| **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-R](https://huggingface.co/haoranxu/ALMA-13B-R) |
+| **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-13B-R) | - |
 
 **Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models.**
 
@@ -45,14 +44,12 @@ Datasets used by ALMA and ALMA-R are also released at huggingface now (NEW!)
 A quick start to use our best system (ALMA-13B-R) for translation. An example of translating "我爱机器翻译。" into English:
 ```
 import torch
-from peft import PeftModel
 from transformers import AutoModelForCausalLM
-from transformers import LlamaTokenizer
+from transformers import AutoTokenizer
 
 # Load base model and LoRA weights
-model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
-model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-R")
-tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')
+model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-R", torch_dtype=torch.float16, device_map="auto")
+tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-R", padding_side='left')
 
 # Add the source sentence into the prompt template
 prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"