move license to bottom and update probabilities
README.md CHANGED
@@ -51,15 +51,10 @@ fact-checking model, despite a small size.**
 
 Please first clone our [GitHub Repo](https://github.com/Liyan06/MiniCheck) and install necessary packages from `requirements.txt`.
 
-### License
-
-Free for use for non-commercial purposes. For commercial licensing, please contact company@bespokelabs.ai.
-
 ### Throughput
 
 We speed up Llama-3.1-Bespoke-MiniCheck-7B inference with [vLLM](https://github.com/vllm-project/vllm). Based on our test on a single A6000 (48 VRAM), Llama-3.1-Bespoke-MiniCheck-7B with vLLM and MiniCheck-Flan-T5-Large have throughputs > 500 docs/min.
 
-
 ### Below is a simple use case
 
 ```python
@@ -76,7 +71,7 @@ scorer = MiniCheck(model_name='Bespoke-MiniCheck-7B', cache_dir='./ckpts')
 pred_label, raw_prob, _, _ = scorer.score(docs=[doc, doc], claims=[claim_1, claim_2])
 
 print(pred_label) # [1, 0]
-print(raw_prob) # [0.
+print(raw_prob) # [0.9840446675150499, 0.010986349594852094]
 ```
 
 ### Test on our [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact) Benchmark
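Since this hunk updates the expected `raw_prob` output, here is a self-contained sketch of the README's "simple use case" with the new comment folded in. The import path and the example `doc`/`claim` strings are assumptions for illustration; only the `MiniCheck(...)` constructor, the `scorer.score(...)` call, and the two `print` lines are quoted from the diff.

```python
# Sketch of the "simple use case", assembled from the diff context.
# Assumption: the package exposes MiniCheck via this import path (not shown in the diff).
from minicheck.minicheck import MiniCheck

# Placeholder inputs (not the README's own examples): one supported and one unsupported claim.
doc = "The company reported third-quarter revenue of $2.1 billion, up 8% year over year."
claim_1 = "The company's revenue grew in the third quarter."      # supported by the doc
claim_2 = "The company's revenue declined in the third quarter."  # contradicted by the doc

# The lines below match the diff context; the first run downloads the model into ./ckpts.
scorer = MiniCheck(model_name='Bespoke-MiniCheck-7B', cache_dir='./ckpts')
pred_label, raw_prob, _, _ = scorer.score(docs=[doc, doc], claims=[claim_1, claim_2])

print(pred_label)  # expected: [1, 0] (claim supported / not supported)
print(raw_prob)    # probabilities of support; the README's own example prints
                   # [0.9840446675150499, 0.010986349594852094]
```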
@@ -112,6 +107,10 @@ result_df.loc[len(result_df)] = ['Average', result_df.BAcc.mean()]
 result_df.round(1)
 ```
 
+# License
+
+Free for use for non-commercial purposes. For commercial licensing, please contact company@bespokelabs.ai.
+
 # Citation
 
 ```
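The hunk above carries only the tail of the benchmark snippet (the 'Average' row and the final rounding). As a hedged reconstruction of the surrounding code, the sketch below scores the LLM-AggreFact test split and builds a `result_df` with one balanced-accuracy value per source dataset; the split name, the column names (`dataset`, `doc`, `claim`, `label`), and the use of scikit-learn's `balanced_accuracy_score` are assumptions, not quoted from the README.

```python
# Hedged sketch of the per-dataset evaluation that the hunk's last two lines belong to.
import pandas as pd
from datasets import load_dataset
from sklearn.metrics import balanced_accuracy_score
from minicheck.minicheck import MiniCheck  # import path assumed, as in the sketch above

# Split name and column names are assumptions about the benchmark layout.
df = load_dataset("lytang/LLM-AggreFact")["test"].to_pandas()

scorer = MiniCheck(model_name='Bespoke-MiniCheck-7B', cache_dir='./ckpts')
pred_label, _, _, _ = scorer.score(docs=df.doc.tolist(), claims=df.claim.tolist())
df["pred"] = pred_label

# One balanced-accuracy value (in %) per source dataset, then the 'Average' row
# exactly as in the hunk's context lines.
result_df = pd.DataFrame(
    [(name, balanced_accuracy_score(g.label, g.pred) * 100)
     for name, g in df.groupby("dataset")],
    columns=["Dataset", "BAcc"],
)
result_df.loc[len(result_df)] = ['Average', result_df.BAcc.mean()]
print(result_df.round(1))
```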
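On the throughput claim in the first hunk (> 500 docs/min for both models on a single A6000 with vLLM): a rough way to sanity-check that figure on your own hardware is to time one large batched call, as in the sketch below. The placeholder documents, the batch size, and the reuse of `scorer` from the use-case sketch are illustrative assumptions.

```python
# Rough throughput check for the "> 500 docs/min" figure quoted in the first hunk.
# Real numbers depend on GPU, document length, and vLLM settings.
import time

docs = ["Some moderately long source document ..."] * 512
claims = ["A claim to verify against the document."] * 512

start = time.perf_counter()
scorer.score(docs=docs, claims=claims)   # `scorer` from the use-case sketch above
elapsed = time.perf_counter() - start

print(f"{len(docs) / (elapsed / 60):.0f} docs/min")
```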