Add citation in README.md
README.md
CHANGED
@@ -75,9 +75,20 @@ We utilize [PowerInfer](https://arxiv.org/pdf/2312.12456.pdf), a state-of-the-ar
 
 - Bamboo, having undergone training with only 200B tokens, may still exhibit performance gaps in certain tasks.
 - The Bamboo model has only been trained on English-language datasets, hence its capabilities in other languages are still lacking.
 - The model may produce unexpected outputs due to its size and probabilistic generation paradigm.
 
 ## License
 
 The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage.
+
+## Citation
+
+Please cite using the following BibTeX:
+
+```
+@misc{bamboo,
+  title={Bamboo: Explore the boundary between sparsity and performance in LLM},
+  author={Yixin Song and Haotong Xie and Zeyu Mi},
+  year={2024}
+}
+```