prasadsachin committed
Commit: a4e1293
1 Parent(s): d390e5e
Update README.md
README.md CHANGED
@@ -1,5 +1,11 @@
 ---
 library_name: keras-hub
+license: llama2
+language:
+- en
+tags:
+- text-generation
+- keras
 ---
 ## Model Overview
 Llama 2 is a set of large language models published by Meta. Both pretrained and instruction tuned models are available, and range in size from 7 billion to 70 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.
@@ -178,4 +184,4 @@ llama_lm = keras_hub.models.LlamaCausalLM.from_preset(
     dtype="bfloat16"
 )
 llama_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
-```
+```
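For context, the second hunk's lines are the tail of a fine-tuning snippet in the README. Below is a minimal sketch of how that call plausibly fits together with the KerasHub API, assuming the `llama2_7b_en` preset name and toy preprocessed arrays; only the `from_preset(` / `dtype="bfloat16"` / `fit(...)` lines are confirmed by this diff, the rest is illustrative.

```python
# Sketch of the fine-tuning snippet the second hunk is taken from.
# Assumptions: the "llama2_7b_en" preset name and the toy arrays below
# are illustrative, not part of this commit.
import numpy as np
import keras_hub

# Load a Llama 2 causal LM in bfloat16 to roughly halve memory use.
# preprocessor=None means we feed pre-tokenized inputs directly.
llama_lm = keras_hub.models.LlamaCausalLM.from_preset(
    "llama2_7b_en",
    preprocessor=None,
    dtype="bfloat16",
)

# Toy preprocessed batch: token ids plus a padding mask (x), shifted
# next-token labels (y), and a per-token sample weight (sw) that
# zeroes out the padded positions.
x = {
    "token_ids": np.array([[1, 2, 3, 4, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 0, 0]] * 2),
}
y = np.array([[2, 3, 4, 5, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 0, 0]] * 2)

llama_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```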