chansurgeplus committed
Commit 2e34f15 • 1 Parent(s): 09e14f0
Update README.md
README.md CHANGED
@@ -29,8 +29,8 @@ OpenBezoar-SFT is built upon the Open Llama 3B v2 architecture and has been fine
 
 ### Model Sources
 
-- **Repository:** [
-- **Paper :** [
+- **Repository:** [Bitbucket Project](https://bitbucket.org/paladinanalytics/workspace/projects/OP)
+- **Paper :** [Pre-Print](https://arxiv.org/abs/2404.12195)
 
 ## Instruction Format
 
@@ -93,7 +93,14 @@ Refer to our self-reported evaluations in our paper (Section 4).
 ## Citation
 If you find our work useful, please cite our paper as follows:
 ```
-
+@misc{dissanayake2024openbezoar,
+      title={OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data},
+      author={Chandeepa Dissanayake and Lahiru Lowe and Sachith Gunasekara and Yasiru Ratnayake},
+      year={2024},
+      eprint={2404.12195},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
 ```
 
 ## Model Authors