RajuKandasamy committed commit e563402 (1 parent: ba9e6a8): Update README.md

README.md CHANGED
@@ -44,8 +44,7 @@ widget:
 ---
 
 ## Tamillama_Tiny: A 30M tiny llama model trained to tell stories in Tamil
-### TL;DR:
-
+### TL;DR:
 This is an experimental model inspired by the paper https://arxiv.org/abs/2305.07759, "How Small Can Language Models Be and Still Speak Coherent English?".
 
 Extended the same concept for Tamil. A 30M parameter LLaMA architecture model that outputs coherent Tamil is presented here.

@@ -64,7 +63,6 @@ This is not fit for any practical purpose other than for research/experimentation.
 
 Usage:
 ```
-python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 tokenizer = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m")
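The usage snippet in the README ends at the tokenizer load. A minimal sketch of how it might be completed — the model id `RajuKandasamy/tamillama_tiny_30m` comes from the README itself, while the Tamil prompt and the generation parameters below are illustrative assumptions, not part of the original:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Model id taken from the README's own usage snippet.
tokenizer = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m")
model = AutoModelForCausalLM.from_pretrained("RajuKandasamy/tamillama_tiny_30m")

# Hypothetical story-style Tamil prompt ("One day, ...") -- an assumption,
# chosen because the model is described as trained to tell stories in Tamil.
prompt = "ஒரு நாள், "
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings are illustrative defaults, not values from the README.
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a 30M-parameter model, the download is small and the sketch should run on CPU without special hardware.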