bentleylong committed
Commit 355cfc1 · Parent(s): 89a3ff3
Update README.md

README.md CHANGED
@@ -15,8 +15,9 @@ tags:
 ## Table of Contents
 
 1. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b-gguf#model-summary" target="_blank">Model Summary</a>**
-2. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b-gguf#
-3. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b-gguf#
+2. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b-gguf#training-format" target="_blank">Training Format</a>**
+3. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b-gguf#sign-up-for-hosted-use" target="_blank">Sign Up for Hosted Use</a>**
+4. **<a href="https://huggingface.co/morph-labs/morph-prover-v0-7b-gguf#ethical-considerations-and-limitations" target="_blank">Ethical Considerations & Limitations</a>**
 
 
 # **Model Summary**
 
@@ -28,10 +29,18 @@ tags:
 Morph Prover v0 7B is the first open-source model trained as a conversational assistant for Lean users. This model was trained in collaboration with **<a href="https://nousresearch.com/" target="_blank">Nous Research</a>** and the **<a href="https://cs.stanford.edu/~sanmi/" target="_blank">Safe and Trustworthy AI Research (STAIR) group at Stanford</a>** led by Professor Sanmi Koyejo, with major contributions by Brando Miranda of Stanford and help from Peter Holderrieth of MIT and Jin Peng Zhou of Cornell. Thanks to **<a href="https://huggingface.co/nomic-ai" target="_blank">Nomic AI's</a>** GPT4All Vulkan support, this model can run on any consumer GPU. Morph Prover v0 7B is a chat fine-tune of **<a href="https://huggingface.co/mistralai/Mistral-7B-v0.1" target="_blank">Mistral 7B</a>**, which achieves state-of-the-art results in autoformalization while performing better than the original Mistral model on benchmarks such as AGIEval and MMLU. It was trained with a proprietary synthetic data pipeline, with code data generated by the **<a href="https://github.com/morph-labs/mci" target="_blank">Morph Code Index</a>**.
 
 
+## Training Format
 
-
+The model was trained for chat using the Llama 2 chat format. An example follows:
 
-
+```
+[INST] <<SYS>>\n You are a helpful assistant. \n<</SYS>>\n\n What is the Curry-Howard isomorphism? [/INST]
+```
+
+
+## Sign Up for Hosted Use
+
+**<a href="https://forms.gle/vr28p6dQCJuRVLAE6" target="_blank">Sign Up Form</a>**
 
 
 ## Ethical Considerations and Limitations
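The Llama 2 chat template added in the Training Format section above can be built programmatically. A minimal sketch: the `format_prompt` helper below is hypothetical (not part of the model's released tooling) and simply reproduces the `[INST]`/`<<SYS>>` wrapping from the example, so the same template can be reused for other system and user messages:

```python
def format_prompt(system: str, user: str) -> str:
    # Reproduce the Llama 2 chat wrapping shown in the Training Format
    # example: [INST] <<SYS>>\n {system} \n<</SYS>>\n\n {user} [/INST]
    return f"[INST] <<SYS>>\n {system} \n<</SYS>>\n\n {user} [/INST]"

prompt = format_prompt(
    "You are a helpful assistant.",
    "What is the Curry-Howard isomorphism?",
)
```

The resulting string can then be passed as the raw prompt to whatever runtime serves the GGUF weights (e.g. a llama.cpp- or GPT4All-based loader).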