viethoangtranduong committed
Commit: cfe05c9
Parent: 664f4f1

Update README.md

Files changed (1):
  1. README.md +10 -9

README.md CHANGED

@@ -4,8 +4,6 @@ datasets:
  - snorkelai/Snorkel-Mistral-Self-Improvement
  ---

- Original post: [Snorkel link]
-
  ### Dataset:
  Training dataset: [snorkelai/Snorkel-Mistral-Self-Improvement](link)

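For readers who want to inspect the training data referenced above, here is a minimal sketch of loading it with the `datasets` library, assuming the dataset is publicly available on the Hugging Face Hub under the name given in the front matter (the README does not specify splits or column names):

```python
# Minimal sketch: load the training dataset referenced in the Dataset section.
# Assumes the dataset is public on the Hugging Face Hub; splits and columns are
# not stated in the README, so we simply inspect whatever comes back.
from datasets import load_dataset

dataset = load_dataset("snorkelai/Snorkel-Mistral-Self-Improvement")
print(dataset)  # shows available splits, columns, and row counts
```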
@@ -23,16 +21,15 @@ We plan to release more detailed results and findings in the coming weeks on the
  ### Key Premises:
  - **Specialization Requirement**: For most enterprise use cases, using LLMs "off-the-shelf" falls short of production quality, necessitating additional fine-tuning and alignment.
  - **Ease of Model Building**: Creating ranking/scoring/classification models is simpler than developing high-quality, manually annotated datasets for long-form responses.
- - **Programmatic Alignment**: Using smaller but specialized teacher models (reward models) can incrementally align LLMs towards specific axes. We call this **Programmatic Alignment** - capturing domain knowledge in programmatic forms that can be used to guide LLM improvement.
+ - **Programmatic Alignment**: Using smaller but specialized teacher models (reward models) can incrementally align LLMs towards specific axes.

  ### Applications:
  Unlike our customers, who have very specific use cases to align LLMs to,
- the AlpacaEval 2.0 leaderboard measures the ability of LLMS to follow general user instructions.
- Thus, for this demonstration, we use a general-purpose reward model - the performant [PairRM model](https://huggingface.co/llm-blender/PairRM).
+ the AlpacaEval 2.0 leaderboard measures the ability of LLMs to follow user instructions.
+ With this demonstration, we focus on the general approach to alignment.
+ Thus, we use a general-purpose reward model - the performant [PairRM model](https://huggingface.co/llm-blender/PairRM).
  We use the [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model as our base LLM.

- With this demonstration, we focus on the general approach of programmatic alignment.
-
  If you are interested in building your **specialized internal reward models
  that reflect your enterprise's needs**, please contact the Snorkel AI team or consider attending our
  [**Enterprise LLM Summit: Building GenAI with Your Data on January 25, 2024**](https://snorkel.ai/event/enterprise-llm-summit/)
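The premises and application above describe the core mechanic: a small pairwise reward model (PairRM) judges candidate responses from the base LLM, and those judgments steer alignment (the DPO iterations mentioned later in the diff). Below is a minimal sketch of that scoring step, assuming the `llm-blender` package that ships PairRM; the example prompt, candidates, and the best-vs-worst pairing heuristic are illustrative, not the exact recipe behind this model.

```python
# Sketch: rank candidate responses with the PairRM reward model and turn the
# rankings into (prompt, chosen, rejected) preference pairs of the kind a
# DPO-style trainer consumes. Assumes `pip install llm-blender`; the data
# below is illustrative only.
import llm_blender

blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM")  # load the PairRM ranker

prompts = ["Give me three tips for writing clear documentation."]
# Several candidate responses per prompt, e.g. sampled from the base LLM.
candidates = [[
    "1) Know your audience. 2) Show a worked example. 3) Keep sentences short.",
    "Documentation is important and you should write it carefully.",
    "Write docs.",
]]

# ranks[i][j] is the rank of candidates[i][j] for prompts[i]; rank 1 is best.
ranks = blender.rank(prompts, candidates, return_scores=False, batch_size=1)

preference_pairs = []
for prompt, cands, cand_ranks in zip(prompts, candidates, ranks):
    cand_ranks = list(cand_ranks)
    chosen = cands[cand_ranks.index(min(cand_ranks))]    # top-ranked response
    rejected = cands[cand_ranks.index(max(cand_ranks))]  # bottom-ranked response
    preference_pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})

print(preference_pairs[0])
```

Pairs in this `prompt`/`chosen`/`rejected` format are the usual input to DPO training; how many candidates were sampled and how pairs were selected for this particular model is not stated in this README.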
@@ -43,17 +40,21 @@ On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):
  - The base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) scored **14.72**.
  After applying the above methodology:
  - This model scored **30.2** - ranked 3rd and the highest for an open-source base model at the time of publication.
- - When post-processing the model outputs with PairRM-best-of-16, which involved generating 16 responses and select the highest-scoring response by PairRM, we scored **34.86** - ranked 2nd.
+ - When post-processing the model outputs with PairRM-best-of-16, which involved generating 16 responses and selecting the highest-scoring response by PairRM, we scored **34.86** - ranked 2nd.
  The best model on the leaderboard is "gpt-4-turbo", which is also the judge of optimal responses.

  We recognize that the Alpaca-Eval 2.0 benchmark does not entirely capture the full range of capabilities and performance of LLMs.
  However, in our current work, where the goal is to align with general "human preferences," Alpaca-Eval 2.0 serves as a suitable and representative benchmark.
  Moving forward, we anticipate further contributions from the community regarding new alignment axes, and will conduct evaluations using other appropriate benchmarks.

+ The Alpaca-Eval 2.0 evaluator, "gpt-4-turbo," exhibits a bias towards longer responses.
+ This tendency might also be present in our chosen reward model, resulting in our model producing lengthier responses after DPO iterations.
+ Future work could include measures to control response length and other relevant metrics.
+
  ### Limitations:
  The model is a quick demonstration that LLMs can be programmatically aligned using smaller specialized reward models.
  It does not have any moderation mechanisms.
- We look forward to continuing to engage with the research community and our customers exploring optimal methods for gettings models to respect guardrails,
+ We look forward to continuing to engage with the research community and our customers exploring optimal methods for getting models to respect guardrails,
  allowing for deployment in environments requiring moderated outputs.

  ### Contemporary Work and Acknowledgements:
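The **34.86** best-of-16 result above is a pure inference-time step: sample 16 responses per prompt and keep the one PairRM ranks highest. A minimal sketch, assuming the standard `transformers` generation API and `llm-blender` for PairRM; the prompt and sampling settings are illustrative and not taken from this README.

```python
# Sketch: PairRM best-of-16 post-processing - sample 16 responses from the model
# and keep the one PairRM ranks highest. Sampling settings are illustrative.
import llm_blender
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # base LLM named in the README
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain the difference between DPO and RLHF in two sentences."
chat = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample 16 candidate responses.
outputs = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=512,
    num_return_sequences=16,
    pad_token_id=tokenizer.eos_token_id,
)
candidates = [
    tokenizer.decode(out[input_ids.shape[-1]:], skip_special_tokens=True)
    for out in outputs
]

# Rank the candidates with PairRM and keep the top-ranked one (rank 1 = best).
blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM")
ranks = list(blender.rank([prompt], [candidates], return_scores=False, batch_size=1)[0])
best_response = candidates[ranks.index(min(ranks))]
print(best_response)
```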
 