Create README.md
README.md ADDED
@@ -0,0 +1,61 @@
---
library_name: transformers
---

# ibleducation/ibl-fordham-7b

ibleducation/ibl-fordham-7b is a model finetuned on top of [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5).

This model is finetuned to answer questions about Fordham University.
## Model Details

- **Developed by:** [IBL Education](https://ibl.ai)
- **Model type:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Base Model:** [OpenChat 3.5](https://huggingface.co/openchat/openchat_3.5)
- **Language:** English
- **Finetuned from weights:** [OpenChat 3.5](https://huggingface.co/openchat/openchat_3.5)
- **Finetuned on data:**
  - [ibleducation/fordham-university](https://huggingface.co/datasets/ibleducation/fordham-university) (see the loading sketch after this list)
- **Model License:** Apache 2.0
- **Epochs:** 7
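If you want to inspect the fine-tuning data before using the model, the snippet below is a minimal illustrative sketch using the `datasets` library; it assumes the dataset is publicly accessible and exposes a `train` split, so adjust the split and field names to the dataset's actual schema.

```python
# Hypothetical sketch: load and inspect the fine-tuning data with the
# `datasets` library. Assumes the dataset is public and has a "train" split.
from datasets import load_dataset

dataset = load_dataset("ibleducation/fordham-university", split="train")
print(dataset)      # number of rows and column names
print(dataset[0])   # first record
```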
## How to Use the ibl-fordham-7b Model from Python Code (HuggingFace transformers)

### Install the necessary packages

Requires: [transformers](https://pypi.org/project/transformers/) 4.35.0 or later, and [accelerate](https://pypi.org/project/accelerate/) 0.23.0 or later.

```shell
pip install transformers==4.35.0
pip install accelerate==0.23.0
```
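As a quick sanity check (illustrative, not required), you can confirm that the installed versions meet these requirements:

```python
# Optional: verify that the installed versions satisfy the requirements above.
import accelerate
import transformers

print(transformers.__version__)  # expected: 4.35.0 or later
print(accelerate.__version__)    # expected: 0.23.0 or later
```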
### You can then try the following example code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch

model_id = "ibleducation/ibl-fordham-7b"

# Load the tokenizer and model; device_map="auto" places the weights on the
# available GPU(s) via accelerate, falling back to CPU if none is present.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Wrap the question in the <s>...</s> prompt template expected by the model.
prompt = "<s>What programmes are offered at Fordham University?</s>"

# The pipeline returns a list with one dict per generated sequence.
response = pipeline(prompt)
print(response[0]["generated_text"])
```
**Important:** Use the prompt template below for ibl-fordham-7b:
```
<s>{prompt}</s>
```
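The sketch below shows one way to apply this template programmatically; the `ask_fordham` helper and the `max_new_tokens` setting are illustrative assumptions, not part of the model's documented API, and it reuses the `pipeline` object created in the example above.

```python
def ask_fordham(question: str, max_new_tokens: int = 256) -> str:
    """Wrap a question in the <s>...</s> template and return the generation."""
    prompt = f"<s>{question}</s>"
    # `pipeline` is the text-generation pipeline built in the example above.
    outputs = pipeline(prompt, max_new_tokens=max_new_tokens)
    return outputs[0]["generated_text"]

print(ask_fordham("What programmes are offered at Fordham University?"))
```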