Instructions to use hf-internal-testing/tiny-random-StableLmModel with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use hf-internal-testing/tiny-random-StableLmModel with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="hf-internal-testing/tiny-random-StableLmModel")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-StableLmModel")
model = AutoModel.from_pretrained("hf-internal-testing/tiny-random-StableLmModel")
```

A short usage sketch of the pipeline follows the notebook links below.

- Notebooks
- Google Colab
- Kaggle
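For a quick sanity check, the pipeline loaded above can be called directly on a string. This is a minimal sketch, not part of the model card itself: it assumes transformers is installed, and the printed dimensions depend on this tiny test model's tokenizer and hidden size.

```python
# Minimal sketch: call the feature-extraction pipeline on a string.
# The result is a nested list shaped [batch][token][hidden_dim].
from transformers import pipeline

pipe = pipeline("feature-extraction", model="hf-internal-testing/tiny-random-StableLmModel")

features = pipe("Hello world")
print(len(features[0]))     # number of tokens, including special tokens
print(len(features[0][0]))  # hidden size of this tiny test model
```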
Update tiny models for StableLmModel
#43
by hf-transformers-bot - opened
- config.json +1 -1
- model.safetensors +1 -1
config.json
CHANGED
```diff
@@ -24,7 +24,7 @@
     "rope_theta": 10000,
     "tie_word_embeddings": false,
     "torch_dtype": "float32",
-    "transformers_version": "4.
+    "transformers_version": "4.40.0.dev0",
     "type_vocab_size": 16,
     "use_cache": true,
     "use_qkv_bias": false,
```
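The effect of this change can be checked by loading the config. A minimal sketch using the standard AutoConfig API, where attribute names mirror the keys in config.json:

```python
# Minimal sketch: inspect config fields touched (and untouched) by this PR.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("hf-internal-testing/tiny-random-StableLmModel")
print(config.transformers_version)  # "4.40.0.dev0" after this PR
print(config.rope_theta)            # 10000, unchanged
```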
model.safetensors
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6de7a65e6bbb6f71d1d43a28d373d9c3a345283b3f6ca7fbf91294181811ab3b
 size 455008
```
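Since model.safetensors is stored via Git LFS, the oid above is the sha256 of the actual weights file rather than of the pointer. A minimal verification sketch, assuming the huggingface_hub package is installed and that the downloaded revision includes this PR:

```python
# Minimal sketch: verify downloaded weights against the LFS pointer's sha256.
import hashlib
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="hf-internal-testing/tiny-random-StableLmModel",
    filename="model.safetensors",
)
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "6de7a65e6bbb6f71d1d43a28d373d9c3a345283b3f6ca7fbf91294181811ab3b"
print(digest == expected)  # True when the local file matches this revision
```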