modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Ori/lama-2-13b-peft-strategyqa-retrieval-at-1-v2-seed-1 | Ori | "2023-09-09T14:54:23Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | "2023-09-09T14:52:02Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
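A minimal sketch for loading this adapter with PEFT (the base-model id is an assumption inferred from the repo name; the card itself does not state it):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Base model id is an assumption inferred from the adapter name (a Llama-2-13B variant).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")
model = PeftModel.from_pretrained(base, "Ori/lama-2-13b-peft-strategyqa-retrieval-at-1-v2-seed-1")
```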
|
teasan/endlessnate | teasan | "2023-09-08T10:53:28Z" | 0 | 2 | diffusers | [
"diffusers",
"anime",
"art",
"stable-diffusion",
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-09-08T10:16:45Z" | ---
license: creativeml-openrail-m
language:
- ja
tags:
- anime
- art
- stable-diffusion
library_name: diffusers
---

# About endlessnate
## Overview
This is a semi-realistic model that leans toward illustration, built mainly for generating characters.
The amount of detail is balanced with that focus in mind.
## CHANGE LOG
- Added endlessnateV1
## Usage
After cloning or downloading the model, place it in the following directory:
```
webui\models\Stable-diffusion\
```
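If you would rather load the checkpoint with the diffusers library than the WebUI, here is a minimal sketch (the checkpoint filename is an assumption; check the repo's file list):

```python
import torch
from diffusers import StableDiffusionPipeline

# Filename is an assumption; check the repo's file list for the actual .safetensors name.
pipe = StableDiffusionPipeline.from_single_file(
    "endlessnateV1.safetensors", torch_dtype=torch.float16
).to("cuda")
image = pipe("absurdres, highres, 1girl", negative_prompt="worst quality").images[0]
```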
## Recommended settings (used when merging and when generating the sample images)
- Steps: 30 or 50
- Sampler: DPM++ 2M Karras
- CFG scale: 11
- Denoising strength: 0.55
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 10
- Hires upscaler: a Latent-family or R-ESRGAN-family upscaler
- VAE: mse840000_klf8anime_klf8anime2
## Recommended negative prompt (used when merging and when generating the sample images)
```
NP:EasyNegative, bad-hands-5, [ :(negative_hand-neg:1.2):15 ], (worst quality, bad quality:1.4), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger, Incomplete hand:1.5), worst quality hands, monochrome, text, nsfw,
```
## Sample images
<details>
<summary>endlessnateV1</summary>
<div>

```
▲ Prompt
absurdres, highres, ultra detail, 1girl, one eye covered hair, dirty blonde hair, tightline eyeliner, light blue eye, bikini, (GIGANTIC HUGE BREAST:0.6), girl sitting,
```

```
▲ Prompt
absurdres, highres, ultra detail, 1girl, high twintails hair, dark blonde hair, smokey eyeliner, coral eye, (GIGANTIC HUGE BREAST:0.6), zettai ryouki, gyal, maid,
```

```
▲ Prompt
absurdres, highres, ultra detail, 2+mechanical girl, (( [blue | shining] eye ), very long hair:1.2), (machine made joints, machanical limbs, wires and cables attaching to head and body:1.1), wires and cables attaching to head and body, wires and cables attaching, (fractal art:1.2), zentangle, line effects, machanical effects,
```

```
▲ Prompt
absurdres, highres, ultra detail, close view, 1girl, shining sky, vast world, gazing, awe-inspiring expression, distant horizon, clouds, high hill, natural beauty, inspiration, night sky, Shining Stars,
```
</div>
</details>
---
# Disclaimer
- Images created with this model are the responsibility of each individual user; the model author accepts no responsibility for any problems or disputes arising from generated images.
- This model is not intended for adult content. The model author accepts no responsibility for problems arising from the generation of adult content.
- Please note that if licensing issues arise, this model may be removed without notice.
- Use for criminal purposes or for specialized applications such as medical use is prohibited. The model author accepts no responsibility for negligence due to failure to comply with the license.
---
# About the Stable Diffusion license
- This model is open access and available to everyone, with rights and usage further governed by the CreativeML OpenRAIL-M license.
- The CreativeML OpenRAIL license specifies the following:
1. You may not use this model to deliberately create or share illegal or harmful outputs or content.
2. The author claims no rights over the outputs you generate; you are free to use them, but you must comply with the provisions of the license. Use them at your own risk.
3. You may redistribute the weights and use the model commercially or as a service. If you do, please note that you must include the same usage restrictions as in the license and share a copy of the CreativeML OpenRAIL-M license with all of your users (read the license fully and carefully).
- (Full license text: [https://huggingface.co/spaces/CompVis/stable-diffusion-license](https://huggingface.co/spaces/CompVis/stable-diffusion-license))
---
# About the author
Twitter: <a href="https://twitter.com/wims_Tea" target="_blank">https://twitter.com/wims_Tea</a>
--- |
chchen/Qwen2.5-7B-Instruct-PsyCourse-doc-fold10 | chchen | "2025-02-04T05:05:08Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-04T04:08:40Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct-PsyCourse-doc-fold10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct-PsyCourse-doc-fold10
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the course-doc-train-fold10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2103 | 0.3951 | 10 | 0.1745 |
| 0.1083 | 0.7901 | 20 | 0.1138 |
| 0.1106 | 1.1852 | 30 | 0.0822 |
| 0.115 | 1.5802 | 40 | 0.0723 |
| 0.0903 | 1.9753 | 50 | 0.0675 |
| 0.0737 | 2.3704 | 60 | 0.0649 |
| 0.1055 | 2.7654 | 70 | 0.0633 |
| 0.108 | 3.1605 | 80 | 0.0624 |
| 0.0542 | 3.5556 | 90 | 0.0623 |
| 0.0684 | 3.9506 | 100 | 0.0619 |
| 0.0812 | 4.3457 | 110 | 0.0616 |
| 0.0876 | 4.7407 | 120 | 0.0615 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
SoyGema/english-spanish | SoyGema | "2023-09-04T15:05:36Z" | 110 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"es",
"dataset:opus100",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | "2023-09-04T10:20:28Z" | ---
language:
- en
- es
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: english-spanish
results:
- task:
name: Translation
type: translation
dataset:
name: opus100 en-es
type: opus100
config: en-es
split: validation
args: en-es
metrics:
- name: Bleu
type: bleu
value: 15.8604
pipeline_tag: translation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english-spanish
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus100 en-es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1056
- Bleu: 15.8604
- Gen Len: 40.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3 |
Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2 | Zoyd | "2024-05-21T00:48:13Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | "2024-05-20T23:06:00Z" | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_5bpw_exl2)**</center> | <center>11195 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_0bpw_exl2)**</center> | <center>13193 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_5bpw_exl2)**</center> | <center>15187 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_75bpw_exl2)**</center> | <center>16186 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_0bpw_exl2)**</center> | <center>17183 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_25bpw_exl2)**</center> | <center>18179 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2)**</center> | <center>21171 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_0bpw_exl2)**</center> | <center>25231 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_5bpw_exl2)**</center> | <center>27111 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-8_0bpw_exl2)**</center> | <center>29540 MB</center> | <center>8</center> |
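To fetch one of these quants programmatically, a minimal sketch using huggingface_hub (swap in any repo id from the table above):

```python
from huggingface_hub import snapshot_download

# Downloads the 5.0 bpw quant; replace with any repo id from the table above.
local_dir = snapshot_download("Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2")
print(local_dir)
```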
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continually pre-trained from Yi on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction following, while maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model  | Context Length | Pre-trained Tokens |
| :----: | :------------: | :----------------: |
| Yi-1.5 | 4K, 16K, 32K   | 3.6T               |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat matches or outperforms larger models on most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B matches or outperforms larger models on some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
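As a minimal sketch with plain transformers (note: this loads the full-precision upstream chat model, not the EXL2 quants above; the repo id and chat-template usage are assumptions based on standard Yi-1.5 usage):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-1.5-34B-Chat-16K"  # assumed upstream repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hi, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```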
|
kornwtp/ConGen-MiniLM-L3 | kornwtp | "2023-01-12T13:28:01Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-10-10T11:58:39Z" | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kornwtp/ConGen-MiniLM-L3
This is a [ConGen](https://github.com/KornWtp/ConGen) model: it maps sentences to a 384-dimensional dense vector space and can be used for tasks like semantic search.
## Usage
Using this model is straightforward once you have [ConGen](https://github.com/KornWtp/ConGen) installed:
```
pip install -U git+https://github.com/KornWtp/ConGen.git
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kornwtp/ConGen-MiniLM-L3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [Semantic Textual Similarity](https://github.com/KornWtp/ConGen#main-results---sts)
## Citing & Authors
```bibtex
@inproceedings{limkonchotiwat-etal-2022-congen,
title = "{ConGen}: Unsupervised Control and Generalization Distillation For Sentence Representation",
author = "Limkonchotiwat, Peerat and
Ponwitayarat, Wuttikorn and
Lowphansirikul, Lalita and
Udomcharoenchaikit, Can and
Chuangsuwanich, Ekapol and
Nutanong, Sarana",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
year = "2022",
publisher = "Association for Computational Linguistics",
}
``` |
featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF | featherless-ai-quants | "2024-11-13T11:56:00Z" | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:TOPAI-Network/Llama-3-LewdPlay-8B-evo",
"base_model:quantized:TOPAI-Network/Llama-3-LewdPlay-8B-evo",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-13T11:46:13Z" | ---
base_model: TOPAI-Network/Llama-3-LewdPlay-8B-evo
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# TOPAI-Network/Llama-3-LewdPlay-8B-evo GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/TOPAI-Network-Llama-3-LewdPlay-8B-evo-GGUF/blob/main/TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q8_0.gguf) | 8145.11 MB |
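A minimal inference sketch with llama-cpp-python (the filename is an assumption; pick any quant from the table above):

```python
from llama_cpp import Llama

# Filename is an assumption; pick any quant file from the table above.
llm = Llama(
    model_path="TOPAI-Network-Llama-3-LewdPlay-8B-evo-Q4_K_M.gguf",
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```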
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
botenius/9d68ede5-e3bd-4663-a60f-7b36f98f9333 | botenius | "2025-02-05T10:33:19Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-05T10:19:19Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d68ede5-e3bd-4663-a60f-7b36f98f9333
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 95518e2e6038786f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/95518e2e6038786f_train_data.json
type:
field_instruction: instructions
field_output: en_responses
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: botenius/9d68ede5-e3bd-4663-a60f-7b36f98f9333
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 500
micro_batch_size: 2
mlflow_experiment_name: /tmp/95518e2e6038786f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 30c157cc-803b-4696-a2b7-5df50ea255c5
wandb_project: Gradients-On-13
wandb_run: your_name
wandb_runid: 30c157cc-803b-4696-a2b7-5df50ea255c5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 9d68ede5-e3bd-4663-a60f-7b36f98f9333
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 265
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3896 | 1.0 | 265 | 0.7910 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
letfd/test-lunar-lander | letfd | "2022-12-05T20:23:55Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-05T20:23:30Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.70 +/- 35.13
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the actual .zip name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check the repo's file list for the actual .zip name.
checkpoint = load_from_hub(repo_id="letfd/test-lunar-lander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ardaspear/7ca15c87-9df0-4408-9b91-01892fa6f012 | ardaspear | "2025-02-07T14:38:10Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-mistral",
"base_model:adapter:echarlaix/tiny-random-mistral",
"license:apache-2.0",
"region:us"
] | null | "2025-02-07T14:37:15Z" | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-mistral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ca15c87-9df0-4408-9b91-01892fa6f012
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-mistral
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 42fa4b965cededb3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/42fa4b965cededb3_train_data.json
type:
field_input: doc
field_instruction: original_text
field_output: edited_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/7ca15c87-9df0-4408-9b91-01892fa6f012
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/42fa4b965cededb3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: d0616b9d-99ec-4e9e-86fa-82059ce33170
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: d0616b9d-99ec-4e9e-86fa-82059ce33170
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7ca15c87-9df0-4408-9b91-01892fa6f012
This model is a fine-tuned version of [echarlaix/tiny-random-mistral](https://huggingface.co/echarlaix/tiny-random-mistral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0083 | 1 | 10.3770 |
| 41.5127 | 0.1417 | 17 | 10.3748 |
| 41.4994 | 0.2833 | 34 | 10.3705 |
| 41.4544 | 0.425 | 51 | 10.3635 |
| 41.4161 | 0.5667 | 68 | 10.3522 |
| 41.3636 | 0.7083 | 85 | 10.3399 |
| 41.3363 | 0.85 | 102 | 10.3321 |
| 41.3307 | 0.9917 | 119 | 10.3286 |
| 41.3045 | 1.1333 | 136 | 10.3270 |
| 41.3106 | 1.275 | 153 | 10.3262 |
| 41.286 | 1.4167 | 170 | 10.3259 |
| 41.306 | 1.5583 | 187 | 10.3258 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LowRAs/nedLoRa | LowRAs | "2023-02-23T07:10:28Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-02-23T04:32:12Z" | ---
license: creativeml-openrail-m
---
|
alvarobb/dqn-SpaceInvadersNoFrameskip-v4 | alvarobb | "2023-01-11T12:10:11Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-11T12:09:32Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 670.00 +/- 256.75
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alvarobb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alvarobb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alvarobb
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
DBangshu/V3_GPT2_e5_8_4 | DBangshu | "2024-10-16T12:09:21Z" | 132 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-16T12:09:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alirzb/S1_M1_R1_AST_42781514 | alirzb | "2024-01-10T20:04:02Z" | 146 | 0 | transformers | [
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-01-10T17:43:38Z" | ---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S1_M1_R1_AST_42781514
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R1_AST_42781514
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0117
- Accuracy: 0.9980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0007 | 1.0 | 304 | 0.0108 | 0.9971 |
| 0.0048 | 2.0 | 608 | 0.0052 | 0.9971 |
| 0.0001 | 3.0 | 912 | 0.0106 | 0.9971 |
| 0.0 | 4.0 | 1217 | 0.0065 | 0.9990 |
| 0.0 | 5.0 | 1520 | 0.0117 | 0.9980 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
odxxt/resqLoRA | odxxt | "2024-05-08T21:08:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-08T21:08:44Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** odxxt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
antalvdb/distilbert-base-uncased-finetuned-cola | antalvdb | "2024-02-23T10:52:19Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-25T16:18:21Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8251
- Matthews Correlation: 0.5369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5232 | 1.0 | 535 | 0.4719 | 0.4268 |
| 0.3473 | 2.0 | 1070 | 0.4846 | 0.5330 |
| 0.2365 | 3.0 | 1605 | 0.6165 | 0.5050 |
| 0.1753 | 4.0 | 2140 | 0.7647 | 0.5215 |
| 0.1331 | 5.0 | 2675 | 0.8251 | 0.5369 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
0xfaskety/Qwen-Qwen1.5-7B-1717386508 | 0xfaskety | "2024-06-03T03:55:28Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-03T03:48:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yuchenj/gpt2_774M_100B_FinewebEdu_hf | yuchenj | "2024-07-26T17:03:24Z" | 123 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"dataset:HuggingFaceFW/fineweb-edu",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-26T15:37:36Z" | ---
library_name: transformers
datasets:
- HuggingFaceFW/fineweb-edu
---
This is a GPT-2 (774M) model trained in llm.c for 100B tokens with cosine LR on Fineweb-Edu.
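A minimal generation sketch with transformers (standard GPT-2 usage, not from the card):

```python
from transformers import pipeline

# Load the HF export of the llm.c-trained GPT-2 and sample a continuation.
generator = pipeline("text-generation", model="yuchenj/gpt2_774M_100B_FinewebEdu_hf")
print(generator("The history of education", max_new_tokens=40)[0]["generated_text"])
```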
A lot more detailed info and observations are here: https://x.com/Yuchenj_UW/status/1814703583453192272 |
iperez/jennifer-flux | iperez | "2025-01-04T22:47:46Z" | 8 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-04T22:14:36Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JENNIFER
---
# Jennifer Flux
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JENNIFER` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('iperez/jennifer-flux', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
omarmomen/structroberta_s1_final | omarmomen | "2024-03-26T16:19:56Z" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"custom_code",
"en",
"dataset:omarmomen/babylm_10M",
"arxiv:2310.20589",
"license:mit",
"autotrain_compatible",
"region:us"
] | fill-mask | "2023-07-29T01:19:32Z" | ---
license: mit
datasets:
- omarmomen/babylm_10M
language:
- en
metrics:
- perplexity
library_name: transformers
---
# Model Card for omarmomen/structroberta_s1_final
This model is part of the experiments in the paper "Increasing the Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building," published at the BabyLM workshop at CoNLL 2023 (https://aclanthology.org/2023.conll-babylm.29/).
<strong>omarmomen/structroberta_s1_final</strong> is a modification of the RoBERTa model that incorporates syntactic inductive bias through an unsupervised parsing mechanism.
This model variant places the parser network ahead of all attention blocks.
The model is pretrained on the BabyLM 10M dataset using a custom pretrained RobertaTokenizer (https://huggingface.co/omarmomen/babylm_tokenizer_32k).
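A minimal loading sketch (trust_remote_code is required for the custom model class; this is standard fill-mask usage, not taken from the card):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("omarmomen/structroberta_s1_final")
model = AutoModelForMaskedLM.from_pretrained(
    "omarmomen/structroberta_s1_final", trust_remote_code=True
)
```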
https://arxiv.org/abs/2310.20589 |
John6666/il-geekpower-checkpoints-mix-star-sphere-sdxl | John6666 | "2025-01-05T03:56:24Z" | 147 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"geekpower",
"star nebula",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-01-05T03:48:08Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- geekpower
- star nebula
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1096335?modelVersionId=1241968).
This model was created by [Geekpower](https://civitai.com/user/Geekpower).
|
John6666/fiamix-xl-v47-sdxl | John6666 | "2024-11-04T06:46:07Z" | 28 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"illustration",
"backgrounds",
"men",
"women",
"boys",
"girls",
"animagine",
"en",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:finetune:cagliostrolab/animagine-xl-3.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-11-04T06:40:01Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- illustration
- backgrounds
- men
- women
- boys
- girls
- animagine
base_model: Linaqruf/animagine-xl-3.0
---
The original model is [here](https://civitai.com/models/373845?modelVersionId=1024841).
This model was created by [Fia_TKTD](https://civitai.com/user/Fia_TKTD).
|
DimiPaparas/Reinforce-CartPole-v1 | DimiPaparas | "2024-03-04T21:46:26Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-04T21:46:18Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dimasik1987/54fc06d5-22df-42a3-b20e-b714461b1286 | dimasik1987 | "2025-01-14T02:53:58Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | "2025-01-14T02:14:50Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 54fc06d5-22df-42a3-b20e-b714461b1286
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c915f3b5c1e09a30_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c915f3b5c1e09a30_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dimasik1987/54fc06d5-22df-42a3-b20e-b714461b1286
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/c915f3b5c1e09a30_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b401ef21-c760-4f0a-bbf5-c54631c430a8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b401ef21-c760-4f0a-bbf5-c54631c430a8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 54fc06d5-22df-42a3-b20e-b714461b1286
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
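
As a starting point, here is a minimal inference sketch for loading this LoRA adapter with PEFT. The repository IDs are taken from the config above; note that the reported validation loss is `nan`, so outputs may not be meaningful:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapter together with its base model (unsloth/Llama-3.1-Storm-8B)
model = AutoPeftModelForCausalLM.from_pretrained("dimasik1987/54fc06d5-22df-42a3-b20e-b714461b1286")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.1-Storm-8B")
```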
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0006 | 5 | nan |
| 0.0 | 0.0013 | 10 | nan |
| 0.0 | 0.0019 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Hsuan0929/llama-3.2-custom-energy_saving_assistant | Hsuan0929 | "2025-02-25T09:28:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-25T09:14:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
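
A minimal sketch, assuming standard `transformers` text-generation loading for this repository (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Hsuan0929/llama-3.2-custom-energy_saving_assistant"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Illustrative prompt for an energy-saving assistant
inputs = tokenizer("How can I reduce my home electricity usage?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```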
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
atsuki-yamaguchi/bloom-7b1-clpp-ar | atsuki-yamaguchi | "2024-04-22T09:03:49Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-19T12:13:25Z" | ---
license: mit
language: ar
---
BLOOM-7B Arabic [LAPT + CLP+]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clpp-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clpp-ar",
device_map="auto",
load_in_8bit=True,
)
```
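
A short generation sketch building on the snippet above (the prompt and sampling settings are illustrative):

```python
# Generate a short Arabic continuation with the loaded model and tokenizer
inputs = tokenizer("مرحبا", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```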
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
demohong/14f5ebce-2cbe-4893-9eb3-9afc8a8be783 | demohong | "2025-01-19T05:11:10Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Llama-3.2-1B",
"base_model:adapter:NousResearch/Llama-3.2-1B",
"license:llama3.2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-19T05:03:26Z" | ---
library_name: peft
license: llama3.2
base_model: NousResearch/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 14f5ebce-2cbe-4893-9eb3-9afc8a8be783
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Llama-3.2-1B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 80f92b3d38c62613_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/80f92b3d38c62613_train_data.json
type:
field_instruction: intent
field_output: snippet
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/14f5ebce-2cbe-4893-9eb3-9afc8a8be783
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/80f92b3d38c62613_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1bf2f8e1-ac8b-46d1-ad4c-485628fd6876
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1bf2f8e1-ac8b-46d1-ad4c-485628fd6876
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 14f5ebce-2cbe-4893-9eb3-9afc8a8be783
This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2779 | 0.6284 | 200 | 1.5464 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
arunptp/Reinforce-1 | arunptp | "2023-06-27T06:08:12Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-27T06:08:03Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
teacookies/autotrain-18102022_retoken-1799162225 | teacookies | "2022-10-18T08:01:54Z" | 13 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-18102022_retoken",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-10-18T07:50:22Z" | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-18102022_retoken
co2_eq_emissions:
emissions: 20.17997164723111
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1799162225
- CO2 Emissions (in grams): 20.1800
## Validation Metrics
- Loss: 0.024
- Accuracy: 0.993
- Precision: 0.829
- Recall: 0.893
- F1: 0.860
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-18102022_retoken-1799162225
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-18102022_retoken-1799162225", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-18102022_retoken-1799162225", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
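
# Convert logits to entity labels (sketch; the id2label mapping comes from the model config)
predictions = outputs.logits.argmax(dim=-1)
print([model.config.id2label[p.item()] for p in predictions[0]])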
``` |
lmazzon70/videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-batch8 | lmazzon70 | "2023-01-13T03:57:57Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2023-01-10T19:40:02Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-batch8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-batch8
This model is a fine-tuned version of [MCG-NJU/videomae-base-short-finetuned-ssv2](https://huggingface.co/MCG-NJU/videomae-base-short-finetuned-ssv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7821
- Accuracy: 0.6713
## Model description
More information needed
## Intended uses & limitations
More information needed
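
A minimal inference sketch, assuming the 🤗 `pipeline` video-classification task (it requires a video decoding backend such as `av` or `decord`):

```python
from transformers import pipeline

classifier = pipeline("video-classification", model="lmazzon70/videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-batch8")
print(classifier("path/to/clip.mp4"))  # the video path is a placeholder
```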
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4247 | 0.06 | 200 | 0.4205 | 0.8063 |
| 0.4125 | 1.06 | 400 | 0.6749 | 0.72 |
| 0.3265 | 2.06 | 600 | 1.3838 | 0.5763 |
| 0.2204 | 3.06 | 800 | 0.6725 | 0.7275 |
| 0.2965 | 4.06 | 1000 | 0.4583 | 0.8263 |
| 0.1883 | 5.06 | 1200 | 0.3786 | 0.8488 |
| 0.1321 | 6.06 | 1400 | 1.6632 | 0.5962 |
| 0.369 | 7.06 | 1600 | 0.6018 | 0.8063 |
| 0.3764 | 8.06 | 1800 | 0.8546 | 0.74 |
| 0.2401 | 9.06 | 2000 | 0.5422 | 0.825 |
| 0.1943 | 10.06 | 2200 | 0.5868 | 0.8113 |
| 0.1352 | 11.06 | 2400 | 0.7111 | 0.8063 |
| 0.2276 | 12.06 | 2600 | 0.8847 | 0.7812 |
| 0.149 | 13.06 | 2800 | 0.8581 | 0.7837 |
| 0.0848 | 14.06 | 3000 | 0.8707 | 0.7788 |
| 0.046 | 15.06 | 3200 | 0.7914 | 0.7963 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
amira-morsli/my_awesome_asr_mind_model | amira-morsli | "2023-09-27T04:22:25Z" | 79 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-09-21T17:00:09Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: my_awesome_asr_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1467
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
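
A minimal inference sketch using the 🤗 `pipeline` API (note that the reported WER of 1.0 suggests this checkpoint may not produce useful transcriptions):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="amira-morsli/my_awesome_asr_mind_model")
print(asr("path/to/audio.wav"))  # the audio path is a placeholder
```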
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.1294 | 100.0 | 500 | 3.1467 | 1.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mradermacher/Distilled-Safeword-24B-v2.0-GGUF | mradermacher | "2025-02-23T16:05:00Z" | 266 | 0 | transformers | [
"transformers",
"gguf",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"en",
"base_model:ReadyArt/Distilled-Safeword-24B-v2.0",
"base_model:quantized:ReadyArt/Distilled-Safeword-24B-v2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-22T13:40:56Z" | ---
base_model: ReadyArt/Distilled-Safeword-24B-v2.0
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ReadyArt/Distilled-Safeword-24B-v2.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
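
One way to run a quant locally, sketched with `huggingface_hub` and the `llama-cpp-python` bindings (the chosen quant file is from the table below; the generation settings are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant file from this repo
path = hf_hub_download(
    repo_id="mradermacher/Distilled-Safeword-24B-v2.0-GGUF",
    filename="Distilled-Safeword-24B-v2.0.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```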
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Safeword-24B-v2.0-GGUF/resolve/main/Distilled-Safeword-24B-v2.0.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-4-epochs | checkiejan | "2023-09-26T06:52:21Z" | 13 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-09-26T06:52:02Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-4-epochs
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-4-epochs')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-4-epochs')
model = AutoModel.from_pretrained('checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-4-epochs')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-4-epochs)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 259 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 100,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 51,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
visdata/ban12 | visdata | "2025-01-31T17:25:59Z" | 42 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-31T17:06:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
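
A minimal sketch using the 🤗 `pipeline` API (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="visdata/ban12")
print(generator("Hello,", max_new_tokens=32)[0]["generated_text"])
```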
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
robinsk8a/Reinforce-CartPole-v1 | robinsk8a | "2023-02-07T02:05:28Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-06T21:29:37Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JaehyeokLee/20m_em_checkpoint_epoch_1_step_600 | JaehyeokLee | "2025-02-24T02:59:03Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"arxiv:2402.03216",
"arxiv:2004.04906",
"arxiv:2106.14807",
"arxiv:2107.05720",
"arxiv:2004.12832",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-24T02:23:24Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
---
For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding
# BGE-M3 ([paper](https://arxiv.org/pdf/2402.03216.pdf), [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3))
In this project, we introduce BGE-M3, which is distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
- Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of an embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
- Multi-Linguality: It can support more than 100 working languages.
- Multi-Granularity: It is able to process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.
**Some suggestions for retrieval pipeline in RAG:**
We recommend using the following pipeline: hybrid retrieval + re-ranking (a minimal sketch follows the list below).
- Hybrid retrieval leverages the strengths of various methods, offering higher accuracy and stronger generalization capabilities.
A classic example: using both embedding retrieval and the BM25 algorithm.
Now, you can try to use BGE-M3, which supports both embedding and sparse retrieval.
This allows you to obtain token weights (similar to BM25) without any additional cost when generating dense embeddings.
- As cross-encoder models, re-rankers demonstrate higher accuracy than bi-encoder embedding models.
Utilizing the re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text.
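
A minimal sketch of that pipeline, assuming the `FlagEmbedding` APIs shown later in this card (the reranker checkpoint and mode weights are illustrative):

```python
from FlagEmbedding import BGEM3FlagModel, FlagReranker

retriever = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)

query = "What is BGE M3?"
docs = ["BGE M3 is an embedding model supporting dense retrieval.",
        "BM25 is a bag-of-words retrieval function."]

# Stage 1: hybrid (dense + sparse) retrieval scores; the colbert weight is set to 0 here
scores = retriever.compute_score([[query, d] for d in docs],
                                 weights_for_different_modes=[0.5, 0.5, 0.0])["sparse+dense"]
best = max(range(len(docs)), key=lambda i: scores[i])
# Stage 2: re-rank the retrieved candidate with a cross-encoder
print(reranker.compute_score([query, docs[best]]))
```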
## News:
- 2/6/2024: We release the [MLDR](https://huggingface.co/datasets/Shitao/MLDR) (a long document retrieval dataset covering 13 languages) and [evaluation pipeline](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR).
- 2/1/2024: **Thanks for the excellent tool from Vespa.** You can easily use multiple modes of BGE-M3 following this [notebook](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb)
## Specs
- Model
| Model Name | Dimension | Sequence Length | Introduction |
|:----:|:---:|:---:|:---:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 | multilingual; unified fine-tuning (dense, sparse, and colbert) from bge-m3-unsupervised|
| [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised) | 1024 | 8192 | multilingual; contrastive learning from bge-m3-retromae |
| [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) | -- | 8192 | multilingual; extend the max_length of [xlm-roberta](https://huggingface.co/FacebookAI/xlm-roberta-large) to 8192 and further pretrained via [retromae](https://github.com/staoxiao/RetroMAE)|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | English model |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | English model |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | English model |
- Data
| Dataset | Introduction |
|:----:|:---:|
| [MLDR](https://huggingface.co/datasets/Shitao/MLDR) | Document Retrieval Dataset, covering 13 languages |
## FAQ
**1. Introduction for different retrieval methods**
- Dense retrieval: map the text into a single embedding, e.g., [DPR](https://arxiv.org/abs/2004.04906), [BGE-v1.5](https://github.com/FlagOpen/FlagEmbedding)
- Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text. e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720)
- Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832).
**2. Comparison with BGE-v1.5 and other monolingual models**
BGE-M3 is a multilingual model, and its ability in monolingual embedding retrieval may not surpass models specifically designed for single languages.
However, we still recommend trying BGE-M3 because of its versatility (support for multiple languages and long texts).
Moreover, it can simultaneously generate multiple representations, and using them together can enhance accuracy and generalization,
unlike most existing models that can only perform dense retrieval.
In the open-source community, there are many excellent models (e.g., jina-embedding, colbert, e5, etc),
and users can choose a model that suits their specific needs based on practical considerations,
such as whether to require multilingual or cross-language support, and whether to process long texts.
**3. How to use BGE-M3 in other projects?**
For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE.
The only difference is that the BGE-M3 model no longer requires adding instructions to the queries.
For sparse retrieval methods, most open-source libraries currently do not support direct utilization of the BGE-M3 model.
Contributions from the community are welcome.
In our experiments, we use [Pyserini](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR#hybrid-retrieval-dense--sparse) and Faiss to do hybrid retrieval.
**Now you can try the hybrid mode of BGE-M3 in [Vespa](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb). Thanks @jobergum.**
**4. How to fine-tune bge-M3 model?**
You can follow the common practice in this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)
to fine-tune the dense embedding.
Our code and data for unified fine-tuning (dense, sparse, and multi-vectors) will be released.
## Usage
Install:
```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
```
or:
```
pip install -U FlagEmbedding
```
### Generate Embedding for text
- Dense Embedding
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3',
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
embeddings_1 = model.encode(sentences_1,
batch_size=12,
max_length=8192, # If you don't need such a long length, you can set a smaller value to speed up the encoding process.
)['dense_vecs']
embeddings_2 = model.encode(sentences_2)['dense_vecs']
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# [[0.6265, 0.3477], [0.3499, 0.678 ]]
```
You can also use sentence-transformers and huggingface transformers to generate dense embeddings.
Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details.
- Sparse Embedding (Lexical Weight)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False)
# you can see the weight for each token:
print(model.convert_id_to_token(output_1['lexical_weights']))
# [{'What': 0.08356, 'is': 0.0814, 'B': 0.1296, 'GE': 0.252, 'M': 0.1702, '3': 0.2695, '?': 0.04092},
# {'De': 0.05005, 'fin': 0.1368, 'ation': 0.04498, 'of': 0.0633, 'BM': 0.2515, '25': 0.3335}]
# compute the scores via lexical matching
lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0])
print(lexical_scores)
# 0.19554901123046875
print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1]))
# 0.0
```
- Multi-Vector (ColBERT)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True)
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0]))
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1]))
# 0.7797
# 0.4620
```
### Compute score for text pairs
Given a list of text pairs, you can get the scores computed by different methods.
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
sentence_pairs = [[i,j] for i in sentences_1 for j in sentences_2]
print(model.compute_score(sentence_pairs,
max_passage_length=128, # a smaller max length leads to a lower latency
weights_for_different_modes=[0.4, 0.2, 0.4])) # weights_for_different_modes(w) is used to do weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score
# {
# 'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
# 'sparse': [0.195556640625, 0.00879669189453125, 0.0, 0.1802978515625],
# 'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
# 'sparse+dense': [0.482503205537796, 0.23454029858112335, 0.2332356721162796, 0.5122477412223816],
# 'colbert+sparse+dense': [0.6013619303703308, 0.3255828022956848, 0.32089319825172424, 0.6232916116714478]
# }
```
## Evaluation
- Multilingual (Miracl dataset)

- Cross-lingual (MKQA dataset)

- Long Document Retrieval
- MLDR:

Please note that [MLDR](https://huggingface.co/datasets/Shitao/MLDR) is a document retrieval dataset we constructed via LLM,
covering 13 languages, including test set, validation set, and training set.
We utilized the training set from MLDR to enhance the model's long document retrieval capabilities.
Therefore, comparing baselines with `Dense w.o.long` (fine-tuning without the long document dataset) is more equitable.
Additionally, this long document retrieval dataset will be open-sourced to address the current lack of open-source multilingual long text retrieval datasets.
We believe that this data will be helpful for the open-source community in training document retrieval models.
- NarrativeQA:

## Training
- Self-knowledge Distillation: combining multiple outputs from different retrieval modes as a reward signal to enhance the performance of a single mode (especially for sparse retrieval and multi-vector (ColBERT) retrieval)
- Efficient Batching: Improve the efficiency when fine-tuning on long text.
The small-batch strategy is simple but effective, and it can also be used to fine-tune large embedding models.
- MCLS: A simple method to improve the performance on long text without fine-tuning.
If you do not have enough resources to fine-tune the model on long text, this method is useful.
Refer to our [report](https://arxiv.org/pdf/2402.03216.pdf) for more details.
**The fine-tuning codes and datasets will be open-sourced in the near future.**
## Acknowledgement
Thanks to the authors of the open-sourced datasets, including Miracl, MKQA, NarrativeQA, etc.
Thanks to the open-sourced libraries like [Tevatron](https://github.com/texttron/tevatron) and [pyserial](https://github.com/pyserial/pyserial).
## Citation
If you find this repository useful, please consider giving a star :star: and citation
```
@misc{bge-m3,
title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation},
author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu},
year={2024},
eprint={2402.03216},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Khushi-Thakor/wATCH.Khushi-Thakor.viral.video.original | Khushi-Thakor | "2025-02-24T18:46:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-24T18:46:29Z" |
|
wasimar/ppo-LunarLander-v2 | wasimar | "2023-05-14T18:58:21Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-14T17:29:09Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.47 +/- 15.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the standard sb3 Hub naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub(repo_id="wasimar/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
John6666/boleromix-illustriousxl-v280-sdxl | John6666 | "2024-12-23T06:56:08Z" | 68 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-12-09T04:28:32Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/869634?modelVersionId=1137858).
This model was created by [bolero537](https://civitai.com/user/bolero537).
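
A minimal text-to-image sketch with 🧨 `diffusers` (the prompt and dtype are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/boleromix-illustriousxl-v280-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, anime style, detailed background").images[0]
image.save("sample.png")
```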
|
robiulawaldev/7b79c33a-67f4-49f1-8e6c-405513a0c1b0 | robiulawaldev | "2025-01-31T03:00:10Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T02:18:49Z" | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7b79c33a-67f4-49f1-8e6c-405513a0c1b0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7d659d1d3be06d15_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7d659d1d3be06d15_train_data.json
type:
field_input: ''
field_instruction: text
field_output: text_cleaned
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/7b79c33a-67f4-49f1-8e6c-405513a0c1b0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: constant
max_steps: 55
micro_batch_size: 2
mlflow_experiment_name: /tmp/7d659d1d3be06d15_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f726e984-621b-4576-a8f1-206d74638cdd
wandb_project: Birthday-SN56-36-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f726e984-621b-4576-a8f1-206d74638cdd
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7b79c33a-67f4-49f1-8e6c-405513a0c1b0
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 55
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 0.8768 |
| 0.7613 | 0.0001 | 14 | 0.6085 |
| 0.6034 | 0.0003 | 28 | 0.5570 |
| 0.5035 | 0.0004 | 42 | 0.5341 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ThoMyh/distilBert_for_binary_sentiment_classification | ThoMyh | "2024-04-19T13:53:22Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-19T09:38:39Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilBert_for_binary_sentiment_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBert_for_binary_sentiment_classification
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1395
- Accuracy: 0.9645
- F1: 0.9633
## Model description
More information needed
## Intended uses & limitations
More information needed
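
A minimal inference sketch with the 🤗 `pipeline` API (the example sentence is illustrative; label names depend on the model config):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ThoMyh/distilBert_for_binary_sentiment_classification")
print(clf("I really enjoyed this movie!"))
```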
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1351 | 1.0 | 1000 | 0.1304 | 0.9575 | 0.9563 |
| 0.0705 | 2.0 | 2000 | 0.1395 | 0.9645 | 0.9633 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Litzy619/V0404MP6 | Litzy619 | "2024-04-04T14:32:36Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | "2024-04-04T12:44:23Z" | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0404MP6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0404MP6
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.1408 | 0.09 | 10 | 2.5415 |
| 5.4886 | 0.18 | 20 | 2.4963 |
| 4.5457 | 0.27 | 30 | 2.4110 |
| 4.1074 | 0.36 | 40 | 2.3242 |
| 3.5825 | 0.45 | 50 | 2.2528 |
| 3.1612 | 0.54 | 60 | 2.2006 |
| 2.8782 | 0.63 | 70 | 2.1606 |
| 2.5962 | 0.73 | 80 | 2.1360 |
| 2.7051 | 0.82 | 90 | 2.1230 |
| 2.5853 | 0.91 | 100 | 2.1162 |
| 2.6212 | 1.0 | 110 | 2.1140 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ddobokki/Llama-2-70b-orca-200k | ddobokki | "2023-08-08T00:15:50Z" | 1,552 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"instruct",
"instruction",
"en",
"doi:10.57967/hf/1687",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-03T01:23:33Z" | ---
language:
- en
tags:
- llama-2
- instruct
- instruction
pipeline_tag: text-generation
---
# Llama-2-70b-orca-200k model card
### Used Datasets
- OpenOrca (200k sampling)
### Prompt Template
```
### Human: {Human}
### Assistant: {Assistant}
```
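A small helper showing how this template might be filled at inference time (a sketch; the exact spacing and stop handling are assumptions):
```python
# Sketch: fill the prompt template above. The "### Human:"/"### Assistant:"
# markers come from the card; the trailing-space convention is an assumption.
def build_prompt(human: str) -> str:
    return f"### Human: {human}\n### Assistant: "

print(build_prompt("Summarize the OpenOrca dataset in one sentence."))
```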
### Contribute
[ddobokki](https://github.com/ddobokki)
[YooSungHyun](https://github.com/YooSungHyun)
### License
[LICENSE.txt](meta-license/LICENSE.txt)
### USE_POLICY
[USE_POLICY.md](meta-license/USE_POLICY.md)
### Responsible Use Guide
[Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf) |
ls-da3m0ns/OpenHermes-2.5-Mistral-7B-medicalqa-v2 | ls-da3m0ns | "2024-02-27T16:14:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"trl",
"sft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-27T14:36:47Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
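Until the authors fill this in, a minimal loading sketch (the repo id comes from this page's header; everything else is a generic assumption, not the authors' documented usage):
```python
# Minimal sketch, assuming a standard causal-LM checkpoint layout.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ls-da3m0ns/OpenHermes-2.5-Mistral-7B-medicalqa-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What are common symptoms of anemia?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```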
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pinzhenchen/sft-lora-fr-pythia-410m | pinzhenchen | "2024-03-05T23:51:08Z" | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"fr",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-03-05T23:51:05Z" |
---
language:
- fr
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which we use to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped)
* Instruction tuning language: French
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
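A minimal loading sketch, assuming this repo hosts a standard PEFT LoRA adapter (the adapter layout is an assumption; see the linked repository for the authors' exact instructions):
```python
# Sketch: attach the LoRA adapter to the Pythia base model.
# Repo ids follow the card; the PEFT adapter layout is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-410m-deduped")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m-deduped")
model = PeftModel.from_pretrained(base, "pinzhenchen/sft-lora-fr-pythia-410m")
model.eval()
```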
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
trituenhantaoio/bert-base-vietnamese-uncased | trituenhantaoio | "2024-10-31T02:23:13Z" | 171 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
model = BertForSequenceClassification.from_pretrained("trituenhantaoio/bert-base-vietnamese-uncased")
tokenizer = BertTokenizer.from_pretrained("trituenhantaoio/bert-base-vietnamese-uncased")
```
### References
```
@article{ttnt2020bert,
title={Vietnamese BERT: Pretrained on News and Wiki},
author={trituenhantao.io},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/trituenhantaoio/vn-bert-base-uncased}},
}
```
[trituenhantao.io](https://trituenhantao.io) |
mradermacher/Aether-12b-i1-GGUF | mradermacher | "2024-09-26T11:50:28Z" | 887 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:aixonlab/Aether-12b",
"base_model:quantized:aixonlab/Aether-12b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-09-26T07:26:38Z" | ---
base_model: aixonlab/Aether-12b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/aixonlab/Aether-12b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Aether-12b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
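For a quick local test, a minimal llama-cpp-python sketch (the file name matches the Q4_K_M row in the table below; context size and sampling settings are arbitrary assumptions):
```python
# Sketch: load one of the quants below with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Aether-12b.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is an imatrix quant?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```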
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aether-12b-i1-GGUF/resolve/main/Aether-12b.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
luvGPT/mistral-7b-uncensored | luvGPT | "2024-09-15T04:04:26Z" | 330 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"transformer",
"fine-tuned",
"uncensored",
"nsfw",
"conversational",
"en",
"dataset:open-source-texts",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-20T16:24:45Z" | ---
language: en
tags:
- text-generation
- transformer
- mistral
- fine-tuned
- uncensored
- nsfw
license: apache-2.0
datasets:
- open-source-texts
model-name: Fine-tuned Mistral 7B (Uncensored)
---
# Fine-tuned Mistral 7B (Uncensored)
## Model Description
This model is a fine-tuned version of the **Mistral 7B**, a dense transformer model, trained on 40,000 datapoints of textual data from a variety of open-source sources. The base model, Mistral 7B, is known for its high efficiency in processing text and generating meaningful, coherent responses.
This fine-tuned version has been optimized for tasks involving natural language understanding, generation, and conversation-based interactions. Importantly, this model is **uncensored**, which means it does not filter or restrict content, allowing it to engage in more "spicy" or NSFW conversations.
## Fine-tuning Process
- **Data**: The model was fine-tuned using a dataset of 40,000 textual datapoints sourced from various open-source repositories.
- **Training Environment**: Fine-tuning was conducted on two NVIDIA A100 GPUs.
- **Training Time**: The training process took approximately 16 hours.
- **Optimizer**: The model was trained using AdamW optimizer with a learning rate of `5e-5`.
## Intended Use
This fine-tuned model is intended for the following tasks:
- Text generation
- Question answering
- Dialogue systems
- Content generation for AI-powered interactions, including NSFW or adult-oriented conversations.
### How to Use
You can easily load and use this model with the `transformers` library in Python:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("luvGPT/mistral-7b-uncensored")
model = AutoModelForCausalLM.from_pretrained("luvGPT/mistral-7b-uncensored")
inputs = tokenizer("Input your text here.", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=50, num_return_sequences=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
SoyGema/t5-small | SoyGema | "2023-06-18T15:40:52Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | "2023-06-18T15:38:13Z" | ---
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
datasets:
- c4
---
# Model Card for T5 Small

# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Small is the checkpoint with 60 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
Thereby, the following datasets were being used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
 - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
- QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
- CB [De Marneff et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-small, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5Model.from_pretrained("t5-small")
input_ids = tokenizer(
"Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
```
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details>
|
csukuangfj/sherpa-nemo-ctc-de-citrinet-1024 | csukuangfj | "2023-03-10T09:22:02Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-03-10T08:37:05Z" | ---
license: apache-2.0
---
# Introduction
This repo contains the TorchScript model of Citrinet-1024 (German) from NeMo.
See https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_de_citrinet_1024
The following code is used to obtain `model.pt` and `tokens.txt`:
```python
import nemo.collections.asr as nemo_asr
citrinet_de_1024 = nemo_asr.models.EncDecCTCModelBPE.from_pretrained('stt_de_citrinet_1024')
citrinet_de_1024.export('model.pt')
# Caution: we use 0 for blank here, while NeMo treats the last token as blank.
# For instance, when len(citrinet_de_1024.decoder.vocabulary) is 1024, NeMo
# treats ID 1024 as blank, but we treat 0 as blank.
with open('tokens.txt', 'w') as f:
f.write("<blk> 0\n")
for i, s in enumerate(citrinet_de_1024.decoder.vocabulary):
f.write(f"{s} {i+1}\n")
```
# Caution
The exported model takes log-filterbank features as input; it does not include the preprocessor.
You can use the following code to replace the `preprocessor`:
```python
import kaldifeat
opts = kaldifeat.FbankOptions()
opts.device = "cpu"
opts.frame_opts.dither = 0
opts.frame_opts.snip_edges = False
opts.frame_opts.samp_freq = 16000
opts.frame_opts.window_type = "povey"
opts.mel_opts.num_bins = 80
fbank = kaldifeat.Fbank(opts)
import torch
import torchaudio
samples, sample_rate = torchaudio.load("./test_wavs/0.wav")
assert sample_rate == 16000
features = fbank(samples[0])
mean = features.mean(dim=0, keepdims=True)
std = features.std(dim=0, keepdims=True)
features = (features - mean) / std
features = features.unsqueeze(0).permute(0, 2, 1)
# Note features is of shape (N, C, T)
model = torch.jit.load('model.pt')
logprob = model(features, torch.tensor([features.shape[2]]))
```
|
slave-factory/kobert-emotion-classifier | slave-factory | "2024-09-10T15:10:05Z" | 6 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:monologg/kobert",
"base_model:finetune:monologg/kobert",
"license:apache-2.0",
"region:us"
] | null | "2024-09-10T14:43:13Z" | ---
license: apache-2.0
base_model: monologg/kobert
tags:
- generated_from_trainer
model-index:
- name: kobert-emotion-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobert-emotion-classifier
This model is a fine-tuned version of [monologg/kobert](https://huggingface.co/monologg/kobert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0797 | 1.0 | 642 | 0.0962 |
| 0.0806 | 2.0 | 1284 | 0.0907 |
| 0.0806 | 3.0 | 1926 | 0.0898 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.4.0
- Datasets 2.19.0
- Tokenizers 0.19.1
|
DrNicefellow/Mistral-Nemo-Instruct-2407-exl2-7bpw-h8 | DrNicefellow | "2024-07-18T22:52:38Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"7-bit",
"exl2",
"region:us"
] | text-generation | "2024-07-18T22:49:27Z" | ---
license: apache-2.0
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
---
# Model Card for Mistral-Nemo-Instruct-2407
The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407). Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.
For more details about this model please refer to our release [blog post](https://mistral.ai/news/mistral-nemo/).
## Key features
- Released under the **Apache 2 License**
- Pre-trained and instructed versions
- Trained with a **128k context window**
- Trained on a large proportion of **multilingual and code data**
- Drop-in replacement of Mistral 7B
## Model Architecture
Mistral Nemo is a transformer model, with the following architecture choices:
- **Layers:** 40
- **Dim:** 5,120
- **Head dim:** 128
- **Hidden dim:** 14,336
- **Activation Function:** SwiGLU
- **Number of heads:** 32
- **Number of kv-heads:** 8 (GQA)
- **Vocabulary size:** 2**17 ~= 128k
- **Rotary embeddings (theta = 1M)**
## Metrics
### Main Benchmarks
| Benchmark | Score |
| --- | --- |
| HellaSwag (0-shot) | 83.5% |
| Winogrande (0-shot) | 76.8% |
| OpenBookQA (0-shot) | 60.6% |
| CommonSenseQA (0-shot) | 70.4% |
| TruthfulQA (0-shot) | 50.3% |
| MMLU (5-shot) | 68.0% |
| TriviaQA (5-shot) | 73.8% |
| NaturalQuestions (5-shot) | 31.2% |
### Multilingual Benchmarks (MMLU)
| Language | Score |
| --- | --- |
| French | 62.3% |
| German | 62.7% |
| Spanish | 64.6% |
| Italian | 61.3% |
| Portuguese | 63.3% |
| Russian | 59.2% |
| Chinese | 59.0% |
| Japanese | 59.0% |
## Usage
The model can be used with three different frameworks
- [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`NeMo`](https://github.com/NVIDIA/NeMo): See [nvidia/Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct)
### Mistral Inference
#### Install
It is recommended to use `mistralai/Mistral-Nemo-Instruct-2407` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
#### Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-Nemo-Instruct-2407", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
#### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/Nemo-Instruct --instruct --max_tokens 256 --temperature 0.35
```
*E.g.* Try out something like:
```
How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar.
```
#### Instruct following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)
prompt = "How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar."
completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
print(result)
```
#### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
print(result)
```
### Transformers
> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
> ```sh
> pip install git+https://github.com/huggingface/transformers.git
> ```
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Nemo-Instruct-2407")
chatbot(messages)
```
> [!TIP]
> Unlike previous Mistral models, Mistral Nemo requires smaller temperatures. We recommend to use a temperature of 0.3.
## Limitations
The Mistral Nemo Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall |
tangledgroup/tangled-llama-a-128k-base-v0.1 | tangledgroup | "2024-10-19T13:06:19Z" | 143 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"litgpt",
"litdata",
"conversational",
"en",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"eo",
"es",
"et",
"eu",
"fa",
"ff",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gn",
"gu",
"ha",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lg",
"li",
"ln",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"ns",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"qu",
"rm",
"ro",
"ru",
"sa",
"si",
"sc",
"sd",
"sk",
"sl",
"so",
"sq",
"sr",
"ss",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tn",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zu",
"dataset:xu-song/cc100-samples",
"dataset:jordiclive/wikipedia-summary-dataset",
"dataset:JeanKaddour/minipile",
"dataset:badrex/llm-emoji-dataset",
"dataset:fblgit/simple-math",
"dataset:Gusarich/math-expressions-1m",
"dataset:AtlasUnified/atlas-math-sets",
"dataset:gair-prox/open-web-math-pro",
"dataset:bigcode/the-stack-smol-xs",
"dataset:rombodawg/code_bagel",
"dataset:AtlasUnified/Atlas-Reasoning",
"dataset:thesven/gsm8k-reasoning",
"dataset:AlgorithmicResearchGroup/math_reasoning_autoformalization_track",
"dataset:KingNish/reasoning-base-20k",
"dataset:SkunkworksAI/reasoning-0.01",
"dataset:Magpie-Align/Magpie-Reasoning-150K",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-11T19:28:10Z" | ---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
language: [
'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el',
'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he',
'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko',
'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my',
'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si',
'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn',
'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu',
]
datasets: [
'xu-song/cc100-samples',
'jordiclive/wikipedia-summary-dataset',
'JeanKaddour/minipile',
'badrex/llm-emoji-dataset',
'fblgit/simple-math',
'Gusarich/math-expressions-1m',
'AtlasUnified/atlas-math-sets',
'gair-prox/open-web-math-pro',
'bigcode/the-stack-smol-xs',
'rombodawg/code_bagel',
'AtlasUnified/Atlas-Reasoning',
'thesven/gsm8k-reasoning',
'AlgorithmicResearchGroup/math_reasoning_autoformalization_track',
'KingNish/reasoning-base-20k',
'SkunkworksAI/reasoning-0.01',
'Magpie-Align/Magpie-Reasoning-150K',
]
tags:
- litgpt
- litdata
---
# tangled-llama-a-128k-base-v0.1

A pretrained language model based on the Llama model with about **62.9M** parameters. This model has been trained on **10.6B** (`10,630,121,844`) tokens from more than **31.3M** (`31,383,840`) dataset rows.
This model **isn't** designed for immediate use but rather for Continued Pretraining and Finetuning on a downstream task. While it can handle a context length of up to **128K** (`131,072`) tokens, it was pretrained with sequences of **2K** (`2048`) tokens.
The objective is to streamline the cognitive or reasoning core, eliminating any redundant knowledge from the model.
[loss, val_loss](https://api.wandb.ai/links/mtasic85/strnx9rl)
[val_ppl](https://api.wandb.ai/links/mtasic85/ljwxf4am)
[epoch](https://api.wandb.ai/links/mtasic85/edyph869)
[learning_rate](https://api.wandb.ai/links/mtasic85/eswxyger)
## Pretrain Evaluation
### lm-evaluation-harness
```bash
litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --out_dir 'evaluate-quick/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
| Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
|---------------------------------------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|arc_challenge | 1|none | 0|acc |↑ |0.2176|± |0.0121|
| | |none | 0|acc_norm |↑ |0.2560|± |0.0128|
|gsm8k | 3|flexible-extract| 5|exact_match|↑ |0.0190|± |0.0038|
| | |strict-match | 5|exact_match|↑ |0.0000|± |0.0000|
|hellaswag | 1|none | 0|acc |↑ |0.2618|± |0.0044|
| | |none | 0|acc_norm |↑ |0.2592|± |0.0044|
|mmlu | 2|none | |acc |↑ |0.2464|± |0.0036|
| - humanities | 2|none | |acc |↑ |0.2485|± |0.0063|
| - formal_logic | 1|none | 0|acc |↑ |0.3175|± |0.0416|
| - high_school_european_history | 1|none | 0|acc |↑ |0.2364|± |0.0332|
| - high_school_us_history | 1|none | 0|acc |↑ |0.2402|± |0.0300|
| - high_school_world_history | 1|none | 0|acc |↑ |0.2785|± |0.0292|
| - international_law | 1|none | 0|acc |↑ |0.2314|± |0.0385|
| - jurisprudence | 1|none | 0|acc |↑ |0.2407|± |0.0413|
| - logical_fallacies | 1|none | 0|acc |↑ |0.2086|± |0.0319|
| - moral_disputes | 1|none | 0|acc |↑ |0.2081|± |0.0219|
| - moral_scenarios | 1|none | 0|acc |↑ |0.2693|± |0.0148|
| - philosophy | 1|none | 0|acc |↑ |0.1961|± |0.0226|
| - prehistory | 1|none | 0|acc |↑ |0.2284|± |0.0234|
| - professional_law | 1|none | 0|acc |↑ |0.2529|± |0.0111|
| - world_religions | 1|none | 0|acc |↑ |0.2982|± |0.0351|
| - other | 2|none | |acc |↑ |0.2536|± |0.0078|
| - business_ethics | 1|none | 0|acc |↑ |0.2700|± |0.0446|
| - clinical_knowledge | 1|none | 0|acc |↑ |0.2264|± |0.0258|
| - college_medicine | 1|none | 0|acc |↑ |0.2312|± |0.0321|
| - global_facts | 1|none | 0|acc |↑ |0.1500|± |0.0359|
| - human_aging | 1|none | 0|acc |↑ |0.2242|± |0.0280|
| - management | 1|none | 0|acc |↑ |0.1942|± |0.0392|
| - marketing | 1|none | 0|acc |↑ |0.3034|± |0.0301|
| - medical_genetics | 1|none | 0|acc |↑ |0.2200|± |0.0416|
| - miscellaneous | 1|none | 0|acc |↑ |0.2401|± |0.0153|
| - nutrition | 1|none | 0|acc |↑ |0.2255|± |0.0239|
| - professional_accounting | 1|none | 0|acc |↑ |0.2730|± |0.0266|
| - professional_medicine | 1|none | 0|acc |↑ |0.4081|± |0.0299|
| - virology | 1|none | 0|acc |↑ |0.2289|± |0.0327|
| - social sciences | 2|none | |acc |↑ |0.2535|± |0.0079|
| - econometrics | 1|none | 0|acc |↑ |0.2368|± |0.0400|
| - high_school_geography | 1|none | 0|acc |↑ |0.2323|± |0.0301|
| - high_school_government_and_politics| 1|none | 0|acc |↑ |0.2539|± |0.0314|
| - high_school_macroeconomics | 1|none | 0|acc |↑ |0.2436|± |0.0218|
| - high_school_microeconomics | 1|none | 0|acc |↑ |0.2311|± |0.0274|
| - high_school_psychology | 1|none | 0|acc |↑ |0.2550|± |0.0187|
| - human_sexuality | 1|none | 0|acc |↑ |0.2824|± |0.0395|
| - professional_psychology | 1|none | 0|acc |↑ |0.2484|± |0.0175|
| - public_relations | 1|none | 0|acc |↑ |0.2727|± |0.0427|
| - security_studies | 1|none | 0|acc |↑ |0.2939|± |0.0292|
| - sociology | 1|none | 0|acc |↑ |0.2488|± |0.0306|
| - us_foreign_policy | 1|none | 0|acc |↑ |0.2800|± |0.0451|
| - stem | 2|none | |acc |↑ |0.2293|± |0.0075|
| - abstract_algebra | 1|none | 0|acc |↑ |0.2200|± |0.0416|
| - anatomy | 1|none | 0|acc |↑ |0.2519|± |0.0375|
| - astronomy | 1|none | 0|acc |↑ |0.2697|± |0.0361|
| - college_biology | 1|none | 0|acc |↑ |0.2500|± |0.0362|
| - college_chemistry | 1|none | 0|acc |↑ |0.2400|± |0.0429|
| - college_computer_science | 1|none | 0|acc |↑ |0.2800|± |0.0451|
| - college_mathematics | 1|none | 0|acc |↑ |0.2000|± |0.0402|
| - college_physics | 1|none | 0|acc |↑ |0.2647|± |0.0439|
| - computer_security | 1|none | 0|acc |↑ |0.1900|± |0.0394|
| - conceptual_physics | 1|none | 0|acc |↑ |0.2340|± |0.0277|
| - electrical_engineering | 1|none | 0|acc |↑ |0.2414|± |0.0357|
| - elementary_mathematics | 1|none | 0|acc |↑ |0.1931|± |0.0203|
| - high_school_biology | 1|none | 0|acc |↑ |0.2323|± |0.0240|
| - high_school_chemistry | 1|none | 0|acc |↑ |0.2266|± |0.0295|
| - high_school_computer_science | 1|none | 0|acc |↑ |0.2400|± |0.0429|
| - high_school_mathematics | 1|none | 0|acc |↑ |0.2037|± |0.0246|
| - high_school_physics | 1|none | 0|acc |↑ |0.2185|± |0.0337|
| - high_school_statistics | 1|none | 0|acc |↑ |0.1898|± |0.0267|
| - machine_learning | 1|none | 0|acc |↑ |0.3393|± |0.0449|
|truthfulqa_mc2 | 2|none | 0|acc |↑ |0.5061|± |0.0167|
|winogrande | 1|none | 0|acc |↑ |0.4933|± |0.0141|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.2464|± |0.0036|
| - humanities | 2|none | |acc |↑ |0.2485|± |0.0063|
| - other | 2|none | |acc |↑ |0.2536|± |0.0078|
| - social sciences| 2|none | |acc |↑ |0.2535|± |0.0079|
| - stem | 2|none | |acc |↑ |0.2293|± |0.0075|
```bash
litgpt evaluate --tasks 'leaderboard' --out_dir 'evaluate-leaderboard/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------|
|leaderboard | N/A| | | | | | | |
| - leaderboard_bbh | N/A| | | | | | | |
| - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.4600|± |0.0316|
| - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.5134|± |0.0366|
| - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.1360|± |0.0217|
| - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.2960|± |0.0289|
| - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.4760|± |0.0316|
| - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.0800|± |0.0172|
| - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5120|± |0.0317|
| - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.1760|± |0.0241|
| - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.1320|± |0.0215|
| - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ |0.3160|± |0.0295|
| - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.2480|± |0.0274|
| - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.4200|± |0.0313|
| - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.0360|± |0.0118|
| - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.1986|± |0.0331|
| - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.0520|± |0.0141|
| - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.2760|± |0.0283|
| - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.1400|± |0.0220|
| - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.4326|± |0.0372|
| - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.4600|± |0.0316|
| - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2680|± |0.0281|
| - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.2040|± |0.0255|
| - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1640|± |0.0235|
| - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3840|± |0.0308|
| - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.4880|± |0.0317|
| - leaderboard_gpqa | N/A| | | | | | | |
| - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2778|± |0.0319|
| - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2766|± |0.0192|
| - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2031|± |0.0190|
| - leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.1811|± | N/A|
| | |none | 0|inst_level_strict_acc |↑ |0.1715|± | N/A|
| | |none | 0|prompt_level_loose_acc |↑ |0.1091|± |0.0134|
| | |none | 0|prompt_level_strict_acc|↑ |0.1035|± |0.0131|
| - leaderboard_math_hard | N/A| | | | | | | |
| - leaderboard_math_algebra_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_counting_and_prob_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_geometry_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_intermediate_algebra_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_num_theory_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_prealgebra_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_precalculus_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.1169|± |0.0029|
| - leaderboard_musr | N/A| | | | | | | |
| - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5080|± |0.0317|
| - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.3008|± |0.0287|
| - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3760|± |0.0307|
```bash
litgpt evaluate --tasks 'gsm8k,mathqa' --out_dir 'evaluate-math/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
|Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
|------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k | 3|flexible-extract| 5|exact_match|↑ |0.0190|± |0.0038|
| | |strict-match | 5|exact_match|↑ |0.0000|± |0.0000|
|mathqa| 1|none | 0|acc |↑ |0.2060|± |0.0074|
| | |none | 0|acc_norm |↑ |0.2057|± |0.0074|
```bash
litgpt evaluate --tasks 'mmlu,mmlu_pro' --out_dir 'evaluate-mmlu/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
| Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
|---------------------------------------|------:|--------------|-----:|-----------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.2459|± |0.0036|
| - humanities | 2|none | |acc |↑ |0.2480|± |0.0063|
| - formal_logic | 1|none | 0|acc |↑ |0.3175|± |0.0416|
| - high_school_european_history | 1|none | 0|acc |↑ |0.2424|± |0.0335|
| - high_school_us_history | 1|none | 0|acc |↑ |0.2402|± |0.0300|
| - high_school_world_history | 1|none | 0|acc |↑ |0.2743|± |0.0290|
| - international_law | 1|none | 0|acc |↑ |0.2314|± |0.0385|
| - jurisprudence | 1|none | 0|acc |↑ |0.2315|± |0.0408|
| - logical_fallacies | 1|none | 0|acc |↑ |0.2209|± |0.0326|
| - moral_disputes | 1|none | 0|acc |↑ |0.2081|± |0.0219|
| - moral_scenarios | 1|none | 0|acc |↑ |0.2670|± |0.0148|
| - philosophy | 1|none | 0|acc |↑ |0.2090|± |0.0231|
| - prehistory | 1|none | 0|acc |↑ |0.2160|± |0.0229|
| - professional_law | 1|none | 0|acc |↑ |0.2516|± |0.0111|
| - world_religions | 1|none | 0|acc |↑ |0.3041|± |0.0353|
| - other | 2|none | |acc |↑ |0.2549|± |0.0078|
| - business_ethics | 1|none | 0|acc |↑ |0.2700|± |0.0446|
| - clinical_knowledge | 1|none | 0|acc |↑ |0.2264|± |0.0258|
| - college_medicine | 1|none | 0|acc |↑ |0.2428|± |0.0327|
| - global_facts | 1|none | 0|acc |↑ |0.1600|± |0.0368|
| - human_aging | 1|none | 0|acc |↑ |0.2242|± |0.0280|
| - management | 1|none | 0|acc |↑ |0.1845|± |0.0384|
| - marketing | 1|none | 0|acc |↑ |0.2949|± |0.0299|
| - medical_genetics | 1|none | 0|acc |↑ |0.2200|± |0.0416|
| - miscellaneous | 1|none | 0|acc |↑ |0.2478|± |0.0154|
| - nutrition | 1|none | 0|acc |↑ |0.2353|± |0.0243|
| - professional_accounting | 1|none | 0|acc |↑ |0.2553|± |0.0260|
| - professional_medicine | 1|none | 0|acc |↑ |0.4118|± |0.0299|
| - virology | 1|none | 0|acc |↑ |0.2229|± |0.0324|
| - social sciences | 2|none | |acc |↑ |0.2525|± |0.0078|
| - econometrics | 1|none | 0|acc |↑ |0.2368|± |0.0400|
| - high_school_geography | 1|none | 0|acc |↑ |0.2172|± |0.0294|
| - high_school_government_and_politics| 1|none | 0|acc |↑ |0.2539|± |0.0314|
| - high_school_macroeconomics | 1|none | 0|acc |↑ |0.2410|± |0.0217|
| - high_school_microeconomics | 1|none | 0|acc |↑ |0.2311|± |0.0274|
| - high_school_psychology | 1|none | 0|acc |↑ |0.2495|± |0.0186|
| - human_sexuality | 1|none | 0|acc |↑ |0.2824|± |0.0395|
| - professional_psychology | 1|none | 0|acc |↑ |0.2565|± |0.0177|
| - public_relations | 1|none | 0|acc |↑ |0.2636|± |0.0422|
| - security_studies | 1|none | 0|acc |↑ |0.2898|± |0.0290|
| - sociology | 1|none | 0|acc |↑ |0.2537|± |0.0308|
| - us_foreign_policy | 1|none | 0|acc |↑ |0.2800|± |0.0451|
| - stem | 2|none | |acc |↑ |0.2274|± |0.0075|
| - abstract_algebra | 1|none | 0|acc |↑ |0.2200|± |0.0416|
| - anatomy | 1|none | 0|acc |↑ |0.2444|± |0.0371|
| - astronomy | 1|none | 0|acc |↑ |0.2697|± |0.0361|
| - college_biology | 1|none | 0|acc |↑ |0.2500|± |0.0362|
| - college_chemistry | 1|none | 0|acc |↑ |0.2100|± |0.0409|
| - college_computer_science | 1|none | 0|acc |↑ |0.2800|± |0.0451|
| - college_mathematics | 1|none | 0|acc |↑ |0.1900|± |0.0394|
| - college_physics | 1|none | 0|acc |↑ |0.2549|± |0.0434|
| - computer_security | 1|none | 0|acc |↑ |0.1900|± |0.0394|
| - conceptual_physics | 1|none | 0|acc |↑ |0.2298|± |0.0275|
| - electrical_engineering | 1|none | 0|acc |↑ |0.2483|± |0.0360|
| - elementary_mathematics | 1|none | 0|acc |↑ |0.1931|± |0.0203|
| - high_school_biology | 1|none | 0|acc |↑ |0.2258|± |0.0238|
| - high_school_chemistry | 1|none | 0|acc |↑ |0.2217|± |0.0292|
| - high_school_computer_science | 1|none | 0|acc |↑ |0.2400|± |0.0429|
| - high_school_mathematics | 1|none | 0|acc |↑ |0.2074|± |0.0247|
| - high_school_physics | 1|none | 0|acc |↑ |0.2185|± |0.0337|
| - high_school_statistics | 1|none | 0|acc |↑ |0.1991|± |0.0272|
| - machine_learning | 1|none | 0|acc |↑ |0.3393|± |0.0449|
|mmlu_pro | 2|custom-extract| |exact_match|↑ |0.0000|± |0.0000|
| - biology | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - business | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - chemistry | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - computer_science | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - economics | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - engineering | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - health | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - history | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - law | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - math | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - other | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - philosophy | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - physics | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| - psychology | 1|custom-extract| 5|exact_match|↑ |0.0000|± |0.0000|
| Groups |Version| Filter |n-shot| Metric | |Value | |Stderr|
|------------------|------:|--------------|------|-----------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.2459|± |0.0036|
| - humanities | 2|none | |acc |↑ |0.2480|± |0.0063|
| - other | 2|none | |acc |↑ |0.2549|± |0.0078|
| - social sciences| 2|none | |acc |↑ |0.2525|± |0.0078|
| - stem | 2|none | |acc |↑ |0.2274|± |0.0075|
|mmlu_pro | 2|custom-extract| |exact_match|↑ |0.0000|± |0.0000|
```bash
litgpt evaluate --tasks 'arc_challenge,boolq,gpqa,hellaswag,openbookqa,piqa,truthfulqa_mc2,winogrande' --out_dir 'evaluate-reasoning/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
| Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
|-------------------------------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|arc_challenge | 1|none | 0|acc |↑ |0.2176|± |0.0121|
| | |none | 0|acc_norm |↑ |0.2560|± |0.0128|
|boolq | 2|none | 0|acc |↑ |0.3783|± |0.0085|
|gpqa_diamond_cot_n_shot | 2|flexible-extract| 0|exact_match|↑ |0.0051|± |0.0051|
| | |strict-match | 0|exact_match|↑ |0.0000|± |0.0000|
|gpqa_diamond_cot_zeroshot | 1|flexible-extract| 0|exact_match|↑ |0.0051|± |0.0051|
| | |strict-match | 0|exact_match|↑ |0.0000|± |0.0000|
|gpqa_diamond_generative_n_shot | 2|flexible-extract| 0|exact_match|↑ |0.0051|± |0.0051|
| | |strict-match | 0|exact_match|↑ |0.0000|± |0.0000|
|gpqa_diamond_n_shot | 2|none | 0|acc |↑ |0.1970|± |0.0283|
| | |none | 0|acc_norm |↑ |0.1970|± |0.0283|
|gpqa_diamond_zeroshot | 1|none | 0|acc |↑ |0.2727|± |0.0317|
| | |none | 0|acc_norm |↑ |0.2727|± |0.0317|
|gpqa_extended_cot_n_shot | 2|flexible-extract| 0|exact_match|↑ |0.0018|± |0.0018|
| | |strict-match | 0|exact_match|↑ |0.0000|± |0.0000|
|gpqa_extended_cot_zeroshot | 1|flexible-extract| 0|exact_match|↑ |0.0037|± |0.0026|
| | |strict-match | 0|exact_match|↑ |0.0000|± |0.0000|
|gpqa_extended_generative_n_shot| 2|flexible-extract| 0|exact_match|↑ |0.0073|± |0.0037|
| | |strict-match | 0|exact_match|↑ |0.0000|± |0.0000|
|gpqa_extended_n_shot | 2|none | 0|acc |↑ |0.2564|± |0.0187|
| | |none | 0|acc_norm |↑ |0.2564|± |0.0187|
|gpqa_extended_zeroshot | 1|none | 0|acc |↑ |0.2802|± |0.0192|
| | |none | 0|acc_norm |↑ |0.2802|± |0.0192|
|gpqa_main_cot_n_shot | 2|flexible-extract| 0|exact_match|↑ |0.0000|± |0.0000|
| | |strict-match | 0|exact_match|↑ |0.0000|± |0.0000|
|gpqa_main_cot_zeroshot | 1|flexible-extract| 0|exact_match|↑ |0.0000|± |0.0000|
| | |strict-match | 0|exact_match|↑ |0.0000|± |0.0000|
|gpqa_main_generative_n_shot | 2|flexible-extract| 0|exact_match|↑ |0.0089|± |0.0044|
| | |strict-match | 0|exact_match|↑ |0.0000|± |0.0000|
|gpqa_main_n_shot | 2|none | 0|acc |↑ |0.2478|± |0.0204|
| | |none | 0|acc_norm |↑ |0.2478|± |0.0204|
|gpqa_main_zeroshot | 1|none | 0|acc |↑ |0.2143|± |0.0194|
| | |none | 0|acc_norm |↑ |0.2143|± |0.0194|
|hellaswag | 1|none | 0|acc |↑ |0.2618|± |0.0044|
| | |none | 0|acc_norm |↑ |0.2592|± |0.0044|
|openbookqa | 1|none | 0|acc |↑ |0.1340|± |0.0152|
| | |none | 0|acc_norm |↑ |0.2340|± |0.0190|
|piqa | 1|none | 0|acc |↑ |0.5201|± |0.0117|
| | |none | 0|acc_norm |↑ |0.5076|± |0.0117|
|truthfulqa_mc2 | 2|none | 0|acc |↑ |0.5061|± |0.0167|
|winogrande | 1|none | 0|acc |↑ |0.4933|± |0.0141|
```bash
litgpt evaluate --tasks 'wikitext,qasper' --out_dir 'evaluate-long/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|---------------|------:|------|-----:|---------------|---|---------:|---|------|
|qasper_bool | 1|none | 0|f1 |↑ | 0.0000|± | 0|
|qasper_freeform| 2|none | 0|f1_abstractive |↑ | 0.0036|± | 0.001|
|wikitext | 2|none | 0|bits_per_byte |↓ | 3.0634|± | N/A|
| | |none | 0|byte_perplexity|↓ | 8.3596|± | N/A|
| | |none | 0|word_perplexity|↓ |85375.3002|± | N/A|
## Continued Pretrain Evaluation
### lm-evaluation-harness
```bash
litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --out_dir 'evaluate-contrain-quick/' --batch_size 4 --dtype 'bfloat16' out/contrain/final/
```
| Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
|---------------------------------------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|arc_challenge | 1|none | 0|acc |↑ |0.2142|± |0.0120|
| | |none | 0|acc_norm |↑ |0.2551|± |0.0127|
|gsm8k | 3|flexible-extract| 5|exact_match|↑ |0.0136|± |0.0032|
| | |strict-match | 5|exact_match|↑ |0.0000|± |0.0000|
|hellaswag | 1|none | 0|acc |↑ |0.2626|± |0.0044|
| | |none | 0|acc_norm |↑ |0.2594|± |0.0044|
|mmlu | 2|none | |acc |↑ |0.2441|± |0.0036|
| - humanities | 2|none | |acc |↑ |0.2417|± |0.0062|
| - formal_logic | 1|none | 0|acc |↑ |0.2937|± |0.0407|
| - high_school_european_history | 1|none | 0|acc |↑ |0.2182|± |0.0323|
| - high_school_us_history | 1|none | 0|acc |↑ |0.2402|± |0.0300|
| - high_school_world_history | 1|none | 0|acc |↑ |0.2700|± |0.0289|
| - international_law | 1|none | 0|acc |↑ |0.1901|± |0.0358|
| - jurisprudence | 1|none | 0|acc |↑ |0.2778|± |0.0433|
| - logical_fallacies | 1|none | 0|acc |↑ |0.2086|± |0.0319|
| - moral_disputes | 1|none | 0|acc |↑ |0.2110|± |0.0220|
| - moral_scenarios | 1|none | 0|acc |↑ |0.2704|± |0.0149|
| - philosophy | 1|none | 0|acc |↑ |0.1897|± |0.0223|
| - prehistory | 1|none | 0|acc |↑ |0.2130|± |0.0228|
| - professional_law | 1|none | 0|acc |↑ |0.2445|± |0.0110|
| - world_religions | 1|none | 0|acc |↑ |0.2690|± |0.0340|
| - other | 2|none | |acc |↑ |0.2546|± |0.0078|
| - business_ethics | 1|none | 0|acc |↑ |0.2600|± |0.0441|
| - clinical_knowledge | 1|none | 0|acc |↑ |0.2491|± |0.0266|
| - college_medicine | 1|none | 0|acc |↑ |0.2543|± |0.0332|
| - global_facts | 1|none | 0|acc |↑ |0.1900|± |0.0394|
| - human_aging | 1|none | 0|acc |↑ |0.2287|± |0.0282|
| - management | 1|none | 0|acc |↑ |0.2233|± |0.0412|
| - marketing | 1|none | 0|acc |↑ |0.2863|± |0.0296|
| - medical_genetics | 1|none | 0|acc |↑ |0.2100|± |0.0409|
| - miscellaneous | 1|none | 0|acc |↑ |0.2197|± |0.0148|
| - nutrition | 1|none | 0|acc |↑ |0.2680|± |0.0254|
| - professional_accounting | 1|none | 0|acc |↑ |0.2624|± |0.0262|
| - professional_medicine | 1|none | 0|acc |↑ |0.3824|± |0.0295|
| - virology | 1|none | 0|acc |↑ |0.2530|± |0.0338|
| - social sciences | 2|none | |acc |↑ |0.2428|± |0.0077|
| - econometrics | 1|none | 0|acc |↑ |0.2456|± |0.0405|
| - high_school_geography | 1|none | 0|acc |↑ |0.2323|± |0.0301|
| - high_school_government_and_politics| 1|none | 0|acc |↑ |0.2383|± |0.0307|
| - high_school_macroeconomics | 1|none | 0|acc |↑ |0.2385|± |0.0216|
| - high_school_microeconomics | 1|none | 0|acc |↑ |0.2017|± |0.0261|
| - high_school_psychology | 1|none | 0|acc |↑ |0.2550|± |0.0187|
| - human_sexuality | 1|none | 0|acc |↑ |0.2748|± |0.0392|
| - professional_psychology | 1|none | 0|acc |↑ |0.2386|± |0.0172|
| - public_relations | 1|none | 0|acc |↑ |0.2545|± |0.0417|
| - security_studies | 1|none | 0|acc |↑ |0.2531|± |0.0278|
| - sociology | 1|none | 0|acc |↑ |0.2587|± |0.0310|
| - us_foreign_policy | 1|none | 0|acc |↑ |0.2300|± |0.0423|
| - stem | 2|none | |acc |↑ |0.2388|± |0.0076|
| - abstract_algebra | 1|none | 0|acc |↑ |0.2200|± |0.0416|
| - anatomy | 1|none | 0|acc |↑ |0.2074|± |0.0350|
| - astronomy | 1|none | 0|acc |↑ |0.2632|± |0.0358|
| - college_biology | 1|none | 0|acc |↑ |0.2361|± |0.0355|
| - college_chemistry | 1|none | 0|acc |↑ |0.2500|± |0.0435|
| - college_computer_science | 1|none | 0|acc |↑ |0.3300|± |0.0473|
| - college_mathematics | 1|none | 0|acc |↑ |0.2100|± |0.0409|
| - college_physics | 1|none | 0|acc |↑ |0.3039|± |0.0458|
| - computer_security | 1|none | 0|acc |↑ |0.2800|± |0.0451|
| - conceptual_physics | 1|none | 0|acc |↑ |0.2681|± |0.0290|
| - electrical_engineering | 1|none | 0|acc |↑ |0.2621|± |0.0366|
| - elementary_mathematics | 1|none | 0|acc |↑ |0.2196|± |0.0213|
| - high_school_biology | 1|none | 0|acc |↑ |0.2484|± |0.0246|
| - high_school_chemistry | 1|none | 0|acc |↑ |0.1823|± |0.0272|
| - high_school_computer_science | 1|none | 0|acc |↑ |0.2200|± |0.0416|
| - high_school_mathematics | 1|none | 0|acc |↑ |0.2111|± |0.0249|
| - high_school_physics | 1|none | 0|acc |↑ |0.1987|± |0.0326|
| - high_school_statistics | 1|none | 0|acc |↑ |0.2130|± |0.0279|
| - machine_learning | 1|none | 0|acc |↑ |0.3393|± |0.0449|
|truthfulqa_mc2 | 2|none | 0|acc |↑ |0.5067|± |0.0167|
|winogrande | 1|none | 0|acc |↑ |0.4759|± |0.0140|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.2441|± |0.0036|
| - humanities | 2|none | |acc |↑ |0.2417|± |0.0062|
| - other | 2|none | |acc |↑ |0.2546|± |0.0078|
| - social sciences| 2|none | |acc |↑ |0.2428|± |0.0077|
| - stem | 2|none | |acc |↑ |0.2388|± |0.0076|
```bash
litgpt evaluate --tasks 'gsm8k,mathqa' --out_dir 'evaluate-contrain-math/' --batch_size 4 --dtype 'bfloat16' out/contrain/final/
```
|Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
|------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k | 3|flexible-extract| 5|exact_match|↑ |0.0136|± |0.0032|
| | |strict-match | 5|exact_match|↑ |0.0000|± |0.0000|
|mathqa| 1|none | 0|acc |↑ |0.2023|± |0.0074|
| | |none | 0|acc_norm |↑ |0.1977|± |0.0073|
|
hgnoi/2CRQSHuuReWtMJxl | hgnoi | "2024-05-25T07:02:13Z" | 78 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T06:59:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF | mradermacher | "2025-01-17T12:55:32Z" | 281 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"qwen2",
"o1",
"en",
"zh",
"base_model:Pinkstack/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B",
"base_model:quantized:Pinkstack/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-17T12:33:39Z" | ---
base_model: Pinkstack/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
- qwen2
- o1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Pinkstack/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
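As a quick orientation, here is a minimal sketch of loading one of the quants from the table below with the `llama-cpp-python` bindings; the prompt, context size, and sampling settings are illustrative choices, not recommendations.
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Path to a downloaded quant from this repo (Q4_K_M chosen as an example)
llm = Llama(model_path="PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q4_K_M.gguf", n_ctx=4096)

# Run a simple completion and print the generated text
out = llm("Explain step by step why the sky is blue.", max_tokens=256)
print(out["choices"][0]["text"])
```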
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B-GGUF/resolve/main/PARM-V1.5-base-QwQ-Qwen-2.5-o1-3B.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tensorblock/gemma-2-9b-it-abliterated-GGUF | tensorblock | "2024-11-28T12:01:27Z" | 154 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"base_model:IlyaGusev/gemma-2-9b-it-abliterated",
"base_model:quantized:IlyaGusev/gemma-2-9b-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-28T11:13:00Z" | ---
license: gemma
language:
- en
tags:
- TensorBlock
- GGUF
base_model: IlyaGusev/gemma-2-9b-it-abliterated
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## IlyaGusev/gemma-2-9b-it-abliterated - GGUF
This repo contains GGUF format model files for [IlyaGusev/gemma-2-9b-it-abliterated](https://huggingface.co/IlyaGusev/gemma-2-9b-it-abliterated).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<bos><start_of_turn>system
{system_prompt}<end_of_turn>
<start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
```
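As a hedged example, a quant from the table below can be run with the llama.cpp CLI by filling in this template. llama.cpp normally inserts `<bos>` itself, so it is omitted from the prompt here; the file name and `-n` generation length are illustrative.
```shell
./llama-cli -m gemma-2-9b-it-abliterated-Q4_K_M.gguf -n 256 -p "<start_of_turn>system
You are a helpful assistant.<end_of_turn>
<start_of_turn>user
Write a haiku about autumn.<end_of_turn>
<start_of_turn>model
"
```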
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-2-9b-it-abliterated-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q2_K.gguf) | Q2_K | 3.805 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-2-9b-it-abliterated-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q3_K_S.gguf) | Q3_K_S | 4.338 GB | very small, high quality loss |
| [gemma-2-9b-it-abliterated-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q3_K_M.gguf) | Q3_K_M | 4.762 GB | very small, high quality loss |
| [gemma-2-9b-it-abliterated-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q3_K_L.gguf) | Q3_K_L | 5.132 GB | small, substantial quality loss |
| [gemma-2-9b-it-abliterated-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_0.gguf) | Q4_0 | 5.443 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-2-9b-it-abliterated-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_K_S.gguf) | Q4_K_S | 5.479 GB | small, greater quality loss |
| [gemma-2-9b-it-abliterated-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q4_K_M.gguf) | Q4_K_M | 5.761 GB | medium, balanced quality - recommended |
| [gemma-2-9b-it-abliterated-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q5_0.gguf) | Q5_0 | 6.484 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-2-9b-it-abliterated-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q5_K_S.gguf) | Q5_K_S | 6.484 GB | large, low quality loss - recommended |
| [gemma-2-9b-it-abliterated-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q5_K_M.gguf) | Q5_K_M | 6.647 GB | large, very low quality loss - recommended |
| [gemma-2-9b-it-abliterated-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q6_K.gguf) | Q6_K | 7.589 GB | very large, extremely low quality loss |
| [gemma-2-9b-it-abliterated-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-2-9b-it-abliterated-GGUF/blob/main/gemma-2-9b-it-abliterated-Q8_0.gguf) | Q8_0 | 9.827 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/gemma-2-9b-it-abliterated-GGUF --include "gemma-2-9b-it-abliterated-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/gemma-2-9b-it-abliterated-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
davidschulte/ESM_ethos_binary | davidschulte | "2024-12-08T14:33:06Z" | 8 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:iamollas/ethos",
"arxiv:2410.15148",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-08T14:33:02Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- iamollas/ethos
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM iamollas/ethos
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** iamollas/ethos
- **ESM architecture:** linear
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
## Training Details
### Intermediate Task
- **Task ID:** iamollas/ethos
- **Subset [optional]:** binary
- **Text Column:** text
- **Label Column:** label
- **Dataset Split:** train
- **Sample size [optional]:** 998
- **Sample seed [optional]:**
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
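For intuition, here is a minimal PyTorch sketch of what a linear ESM could look like, assuming 768-dimensional embeddings as produced by the `bert-base-multilingual-uncased` base model; the class name and shapes are illustrative, not this package's actual API.
```python
import torch
import torch.nn as nn

class LinearESM(nn.Module):
    """Maps base-model embeddings to approximations of fine-tuned embeddings."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.map = nn.Linear(dim, dim)

    def forward(self, base_embeddings: torch.Tensor) -> torch.Tensor:
        return self.map(base_embeddings)

esm = LinearESM()
base_emb = torch.randn(16, 768)        # a batch of embeddings from the base model
approx_finetuned_emb = esm(base_emb)   # approximated fine-tuned embeddings
```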
## How can I use Embedding Space Maps for Intermediate Task Selection?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using Embedding Space Maps, please cite our [paper](https://arxiv.org/abs/2410.15148).
**BibTeX:**
```
@misc{schulte2024moreparameterefficientselectionintermediate,
title={Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning},
author={David Schulte and Felix Hamborg and Alan Akbik},
year={2024},
eprint={2410.15148},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.15148},
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. arXiv preprint arXiv:2410.15148.
```
## Additional Information
|
JW17/Q25-3B-UC-BT-seed12-checkpoints | JW17 | "2025-01-13T10:51:01Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-classification",
"generated_from_trainer",
"base_model:tlrm/Q25-3B-UC",
"base_model:finetune:tlrm/Q25-3B-UC",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-13T10:40:39Z" | ---
base_model: tlrm/Q25-3B-UC
library_name: transformers
tags:
- generated_from_trainer
licence: license
---
# Model Card for None
This model is a fine-tuned version of [tlrm/Q25-3B-UC](https://huggingface.co/tlrm/Q25-3B-UC).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jiwooya1000/RMVar-Submission/runs/m3gwkywp)
This model was trained with Reward.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.0
- Pytorch: 2.4.1+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
W1lson/zephyr-finetuned-on-synthetic-data | W1lson | "2023-11-10T04:30:30Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | "2023-11-10T03:18:11Z" | ---
license: mit
base_model: TheBloke/zephyr-7B-alpha-GPTQ
tags:
- generated_from_trainer
model-index:
- name: zephyr-finetuned-on-synthetic-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-finetuned-on-synthetic-data
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.12.0
- Tokenizers 0.14.1
|
Nicolas852/q-FrozenLake-v1-4x4-noSlippery | Nicolas852 | "2024-01-26T02:51:11Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-26T02:51:06Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL course
model = load_from_hub(repo_id="Nicolas852/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
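Below is a self-contained sketch that also defines the `load_from_hub` helper (mirroring the Hugging Face Deep RL course) and rolls the greedy policy out for one episode. The `"qtable"` key and the classic `gym` step API (4-tuple returns) are assumptions; adjust for your `gym`/`gymnasium` version.
```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled model dict from the Hub and unpickle it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub("Nicolas852/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

state = env.reset()  # classic gym API; gymnasium returns (obs, info) instead
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)     # gymnasium adds a `truncated` flag
    total_reward += reward
print("episode return:", total_reward)
```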
|
gokulsrinivasagan/bert_tiny_olda_book_10_v1_stsb | gokulsrinivasagan | "2025-02-11T20:44:31Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_tiny_olda_book_10_v1",
"base_model:finetune:gokulsrinivasagan/bert_tiny_olda_book_10_v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-11T20:43:32Z" | ---
library_name: transformers
language:
- en
base_model: gokulsrinivasagan/bert_tiny_olda_book_10_v1
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: bert_tiny_olda_book_10_v1_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.7894827878783358
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_tiny_olda_book_10_v1_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_tiny_olda_book_10_v1](https://huggingface.co/gokulsrinivasagan/bert_tiny_olda_book_10_v1) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8800
- Pearson: 0.7899
- Spearmanr: 0.7895
- Combined Score: 0.7897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 3.0028 | 1.0 | 23 | 2.5219 | 0.1644 | 0.1637 | 0.1641 |
| 1.7825 | 2.0 | 46 | 1.7353 | 0.6315 | 0.6561 | 0.6438 |
| 1.2017 | 3.0 | 69 | 1.1421 | 0.7243 | 0.7369 | 0.7306 |
| 0.8992 | 4.0 | 92 | 1.0970 | 0.7550 | 0.7677 | 0.7613 |
| 0.6849 | 5.0 | 115 | 0.8800 | 0.7899 | 0.7895 | 0.7897 |
| 0.5834 | 6.0 | 138 | 0.8918 | 0.7965 | 0.7978 | 0.7972 |
| 0.4852 | 7.0 | 161 | 0.9756 | 0.7948 | 0.7965 | 0.7957 |
| 0.4346 | 8.0 | 184 | 0.8957 | 0.7867 | 0.7860 | 0.7864 |
| 0.3871 | 9.0 | 207 | 0.9086 | 0.7900 | 0.7882 | 0.7891 |
| 0.3449 | 10.0 | 230 | 1.0219 | 0.7874 | 0.7899 | 0.7886 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
glasses/resnext50_32x4d | glasses | "2021-11-30T20:13:20Z" | 11 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:1611.05431",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | # resnext50_32x4d
Implementation of ResNetXt proposed in ["Aggregated Residual
Transformation for Deep Neural
Networks"](https://arxiv.org/pdf/1611.05431.pdf)
Create a default model
``` python
ResNetXt.resnext50_32x4d()
ResNetXt.resnext101_32x8d()
# create a resnetxt18_32x4d
ResNetXt.resnet18(block=ResNetXtBottleNeckBlock, groups=32, base_width=4)
```
Examples:
```python
# change activation
ResNetXt.resnext50_32x4d(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNetXt.resnext50_32x4d(n_classes=100)
# pass a different block
ResNetXt.resnext50_32x4d(block=SENetBasicBlock)
# change the initial convolution
model = ResNetXt.resnext50_32x4d()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = ResNetXt.resnext50_32x4d()
# first call .features; this activates the forward hooks and tells the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
mlfoundations-dev/hp_ablations_gemma_epoch1_dcftv1.2 | mlfoundations-dev | "2024-12-11T18:49:03Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-11T10:08:51Z" | ---
library_name: transformers
license: gemma
base_model: google/gemma-2-9b
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: hp_ablations_gemma_epoch1_dcftv1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hp_ablations_gemma_epoch1_dcftv1.2
This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on the mlfoundations-dev/oh-dcft-v1.2_no-curation_gpt-4o-mini dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1738
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6082 | 0.9998 | 334 | 0.6192 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
knc6/diffractgpt_mistral_chemical_formula | knc6 | "2025-01-06T15:52:56Z" | 78 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-bnb-4bit",
"region:us"
] | null | "2025-01-06T15:51:04Z" | ---
base_model: unsloth/mistral-7b-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
superb/hubert-base-superb-ks | superb | "2021-11-04T16:03:26Z" | 20,025 | 8 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"speech",
"audio",
"en",
"dataset:superb",
"arxiv:2105.01051",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- hubert
- audio-classification
license: apache-2.0
widget:
- example_title: Speech Commands "down"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_down.wav
- example_title: Speech Commands "go"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_go.wav
---
# Hubert-Base for Keyword Spotting
## Model description
This is a ported version of [S3PRL's Hubert for the SUPERB Keyword Spotting task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/speech_commands).
The base model is [hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), which is pretrained on 16kHz
sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of
words. The task is usually performed on-device for fast response times. Thus, accuracy, model size, and
inference time are all crucial. SUPERB uses the widely used
[Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task.
The dataset consists of ten classes of keywords, a class for silence, and an unknown class to capture
false positives.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ks-keyword-spotting).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "ks", split="test")
classifier = pipeline("audio-classification", model="superb/hubert-base-superb-ks")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
from torchaudio.sox_effects import apply_effects_file
effects = [["channels", "1"], ["rate", "16000"], ["gain", "-3.0"]]
def map_to_array(example):
speech, _ = apply_effects_file(example["file"], effects)
example["speech"] = speech.squeeze(0).numpy()
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ks", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-ks")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-ks")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9630` | `0.9672` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
yooshijay/qwen1.5-14B_psychat | yooshijay | "2024-03-09T13:52:15Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"region:us"
] | null | "2024-03-09T13:40:59Z" | ---
library_name: peft
base_model: Qwen/Qwen1.5-14B-Chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.1.dev0 |
ShenaoZ/0.0005_withdpo_4iters_bs256_5102lr_iter_3 | ShenaoZ | "2024-05-05T00:10:47Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0005_withdpo_4iters_bs256_5102lr_iter_2",
"base_model:finetune:ShenaoZ/0.0005_withdpo_4iters_bs256_5102lr_iter_2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-04T23:08:38Z" | ---
license: mit
base_model: ShenaoZ/0.0005_withdpo_4iters_bs256_5102lr_iter_2
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0005_withdpo_4iters_bs256_5102lr_iter_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0005_withdpo_4iters_bs256_5102lr_iter_3
This model is a fine-tuned version of [ShenaoZ/0.0005_withdpo_4iters_bs256_5102lr_iter_2](https://huggingface.co/ShenaoZ/0.0005_withdpo_4iters_bs256_5102lr_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
EleutherAI/pythia-70m-deduped | EleutherAI | "2023-07-09T16:07:33Z" | 122,668 | 25 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-02-13T16:01:41Z" | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-70M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose
or commercial chatbots. This means Pythia-70M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
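Since every checkpoint is exposed as a git branch, you can also enumerate them
programmatically. The following is a minimal sketch (not part of the original
card) using the `huggingface_hub` client, assuming a recent version that
provides `list_repo_refs`:
```python
from huggingface_hub import list_repo_refs

# Each training checkpoint is stored as a branch of the model repo.
refs = list_repo_refs("EleutherAI/pythia-70m-deduped")
step_branches = sorted(
    (branch.name for branch in refs.branches if branch.name.startswith("step")),
    key=lambda name: int(name.removeprefix("step")),  # requires Python 3.9+
)
print(len(step_branches))   # expect 154 checkpoint branches
print(step_branches[:5])    # e.g. ['step0', 'step1', 'step2', 'step4', 'step8']
```
Any of the returned branch names can be passed as the `revision` argument in
the Quickstart snippet above.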
## Training
### Training data
Pythia-70M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> |
Trappu/Unslop-Magpicaro-0.55-12B-Q5_K_M-GGUF | Trappu | "2025-01-04T01:18:40Z" | 18 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Trappu/Magnum-Picaro-0.7-v2-12b",
"TheDrummer/UnslopNemo-12B-v4.1",
"llama-cpp",
"gguf-my-repo",
"base_model:Trappu/Unslop-Magpicaro-0.55-12B",
"base_model:quantized:Trappu/Unslop-Magpicaro-0.55-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-04T01:18:03Z" | ---
base_model: Trappu/Unslop-Magpicaro-0.55-12B
tags:
- merge
- mergekit
- lazymergekit
- Trappu/Magnum-Picaro-0.7-v2-12b
- TheDrummer/UnslopNemo-12B-v4.1
- llama-cpp
- gguf-my-repo
---
# Trappu/Unslop-Magpicaro-0.55-12B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Trappu/Unslop-Magpicaro-0.55-12B`](https://huggingface.co/Trappu/Unslop-Magpicaro-0.55-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Trappu/Unslop-Magpicaro-0.55-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Trappu/Unslop-Magpicaro-0.55-12B-Q5_K_M-GGUF --hf-file unslop-magpicaro-0.55-12b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Trappu/Unslop-Magpicaro-0.55-12B-Q5_K_M-GGUF --hf-file unslop-magpicaro-0.55-12b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Trappu/Unslop-Magpicaro-0.55-12B-Q5_K_M-GGUF --hf-file unslop-magpicaro-0.55-12b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Trappu/Unslop-Magpicaro-0.55-12B-Q5_K_M-GGUF --hf-file unslop-magpicaro-0.55-12b-q5_k_m.gguf -c 2048
```
|
PuspaKamal/whisper_ASR | PuspaKamal | "2024-05-16T05:53:24Z" | 123 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:jbpark0614/speechocean762",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-12T12:53:15Z" | ---
language:
- en
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- jbpark0614/speechocean762
model-index:
- name: Whisper Small En - MrOli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small En - MrOli
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the jbpark0614/speechocean762 dataset.
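The card does not include a usage snippet; as a minimal, unofficial sketch with
the standard `transformers` ASR pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

# Standard automatic-speech-recognition pipeline; works with any Whisper
# checkpoint that ships a processor config alongside the model weights.
asr = pipeline(
    "automatic-speech-recognition",
    model="PuspaKamal/whisper_ASR",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```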
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
saad7489/segformer-b2-finetuned-segments-sidewalkpidmix | saad7489 | "2024-10-31T09:59:06Z" | 35 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2024-10-26T11:56:19Z" | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: segformer-b2-finetuned-segments-sidewalkpidmix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b2-finetuned-segments-sidewalkpidmix
This model was trained from scratch on an unspecified dataset.
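The card does not document inference; the sketch below uses the standard
`transformers` SegFormer API and assumes the repository ships an
image-processor config (if not, load the processor from the base
`nvidia/segformer-b2` checkpoint instead):
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "saad7489/segformer-b2-finetuned-segments-sidewalkpidmix"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("sidewalk.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, H/4, W/4)
pred_mask = logits.argmax(dim=1)[0]  # per-pixel class ids
```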
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
mav23/distilgpt2-emailgen-GGUF | mav23 | "2024-11-19T11:11:18Z" | 276 | 0 | null | [
"gguf",
"generated_from_trainer",
"distilgpt2",
"email generation",
"email",
"dataset:aeslc",
"dataset:postbot/multi_emails",
"base_model:distilbert/distilgpt2",
"base_model:quantized:distilbert/distilgpt2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-19T11:08:45Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
- distilgpt2
- email generation
- email
datasets:
- aeslc
- postbot/multi_emails
widget:
- text: 'Good Morning Professor Beans,
Hope you are doing well. I just wanted to reach out and ask if differential calculus
will be on the exam'
example_title: email to prof
- text: 'Hey <NAME>,
Thank you for signing up for my weekly newsletter. Before we get started, you''ll
have to confirm your email address.'
example_title: newsletter
- text: 'Hi <NAME>,
I hope this email finds you well. I wanted to reach out and ask about office hours'
example_title: office hours
- text: 'Greetings <NAME>,
I hope you had a splendid evening at the Company sausage eating festival. I am
reaching out because'
example_title: festival
- text: 'Good Morning Harold,
I was wondering when the next'
example_title: event
- text: URGENT - I need the TPS reports
example_title: URGENT
- text: 'Hi Archibald,
I hope this email finds you extremely well.'
example_title: emails that find you
- text: 'Hello there.
I just wanted to reach out and check in to'
example_title: checking in
- text: 'Hello <NAME>,
I hope this email finds you well. I wanted to reach out and see if you''ve enjoyed
your time with us'
example_title: work well
- text: 'Hi <NAME>,
I hope this email finds you well. I wanted to reach out and see if we could catch
up'
example_title: catch up
- text: I'm <NAME> and I just moved into the area and wanted to reach out and get
some details on where I could get groceries and
example_title: grocery
parameters:
min_length: 4
max_length: 128
length_penalty: 0.8
no_repeat_ngram_size: 2
do_sample: false
num_beams: 8
early_stopping: true
repetition_penalty: 5.5
base_model: distilgpt2
---
# distilgpt2-emailgen
Why write the rest of your email when you can generate it?
```python
from transformers import pipeline
model_tag = "postbot/distilgpt2-emailgen"
generator = pipeline(
'text-generation',
model=model_tag,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
result = generator(
prompt,
max_length=64,
do_sample=False,
early_stopping=True,
) # generate
print(result[0]['generated_text'])
```
- try it in a [Google Colab](https://colab.research.google.com/gist/pszemraj/91df57e0c2caf1d5273b78576ad2853e/postbot-distilgpt2-emailgen-demo.ipynb) notebook
- Use it in bash/cmd [with this gist](https://gist.github.com/pszemraj/c1b0a76445418b6bbddd5f9633d1bb7f) :)
> For this model, formatting matters. The results may be (significantly) different between the structure outlined above and `prompt = "Hey, just wanted to ..."` etc.
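The `parameters` block in this card's YAML suggests beam-search settings; the
sketch below simply applies those same values through the pipeline from the
snippet above:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="postbot/distilgpt2-emailgen")

prompt = """
Good Morning Harold,
I was wondering when the next"""

result = generator(
    prompt,
    min_length=4,
    max_length=128,
    length_penalty=0.8,       # only affects beam-search scoring
    no_repeat_ngram_size=2,
    do_sample=False,
    num_beams=8,
    early_stopping=True,
    repetition_penalty=5.5,
)
print(result[0]["generated_text"])
```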
## Model description
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a dataset of 50k emails, including the classic `aeslc` dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6247
## Intended uses & limitations
The intended use of this model is to provide suggestions to "autocomplete" the rest of your email. Said another way, it should serve as a **tool to write predictable emails faster**. It is not intended to write entire emails; at least **some input** is required to guide the direction of the model.
Please verify any suggestions by the model for (a) false claims and (b) negation statements before accepting/sending anything.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8299 | 1.0 | 248 | 2.7971 |
| 2.6984 | 2.0 | 496 | 2.6826 |
| 2.7022 | 3.0 | 744 | 2.6361 |
| 2.6436 | 4.0 | 992 | 2.6245 |
| 2.6195 | 5.0 | 1240 | 2.6247 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__distilgpt2-emailgen)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.89 |
| ARC (25-shot) | 21.76 |
| HellaSwag (10-shot) | 27.52 |
| MMLU (5-shot) | 25.97 |
| TruthfulQA (0-shot) | 46.17 |
| Winogrande (5-shot) | 51.62 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 1.16 |
|
pandafm/donutES-UMU | pandafm | "2024-04-22T08:30:08Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-04-16T17:34:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
magnetic/zephyr-7b-sft-full | magnetic | "2023-11-14T14:29:14Z" | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-14T06:28:07Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-sft-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-full
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9292 | 0.67 | 272 | 0.9323 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
furrutiav/math_bert_qa_extractor_cockatiel_2022_mixtral_v2_it_1597 | furrutiav | "2024-02-16T02:12:16Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-02-16T02:10:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
emozilla/mpt-7b-storywriter-fast | emozilla | "2023-06-08T02:39:20Z" | 14 | 11 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:the_pile_books3",
"arxiv:2108.12409",
"arxiv:2205.14135",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-31T16:31:13Z" | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
datasets:
- the_pile_books3
inference: false
---
The code for this model has been updated to include the adaptions from [Birchlabs/mosaicml-mpt-7b-chat-qlora](https://huggingface.co/Birchlabs/mosaicml-mpt-7b-chat-qlora) which allow MPT models to be loaded with `device_map="auto"` and [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) support (e.g. `load_in_8bit`, `load_in_4bit`).
It also has the [latest key-value cache MPT code](https://github.com/mosaicml/llm-foundry/pull/210) to allow for fast inference with `transformers` (thus, `use_cache` is set to `True` in `config.json`).
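As a sketch of what those adaptations enable (standard
`transformers`/`bitsandbytes` arguments; requires `accelerate` and
`bitsandbytes` to be installed, and memory behavior should be verified on your
own hardware):
```python
import transformers

# device_map="auto" and 8-bit loading work thanks to the adaptations
# described above.
model = transformers.AutoModelForCausalLM.from_pretrained(
    "emozilla/mpt-7b-storywriter-fast",
    device_map="auto",
    load_in_8bit=True,
    trust_remote_code=True,
)
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("It was a dark and stormy night", return_tensors="pt").to(model.device)
# use_cache=True (the default in this repo's config) enables fast
# incremental decoding via the key-value cache.
outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.decode(outputs[0]))
```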
# MPT-7B-StoryWriter-65k+
MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-storywriter)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
Apache 2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536,
ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
config.update({"max_seq_len": 83968})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
## Community-Created Resources
These were not created by MosaicML, but you may find them useful. These links are not an endorsement of the creators or their content.
- [Oobabooga Running MPT-7B-Storywriter](https://youtu.be/QVVb6Md6huA)
- [NEW MPT-7B-StoryWriter CRUSHES GPT-4!](https://www.youtube.com/watch?v=O9Y_ZdsuKWQ&t=649s) - Has a long section on running locally using Oobabooga
## Example Epilogue
The full text of _The Great Gatsby_ (67,873 tokens) was fed to the model, followed by the text "EPILOGUE"
**TheRealFitzgerald233**:
> ...
>
> So we beat on, boats against the current, borne back ceaselessly into the past
>
> EPILOGUE
**MPT-7B-StoryWriter**:
>I have lived so long in the shadow of this dream that I may never see the bright sunlight of reality. I may look up some morning and find that I am six feet under, and that Gatsby, who gave me the dream and everything else, will have vanished behind me.
>
>And so it was. I had hardly crossed the threshold of the old world when my eye caught the headlines of a newspaper, and I knew at once that my dream was over. I knew that Gatsby was dead.
>
>And I knew that day that I didn't love her.
>
>That night I took the last train for West Egg, and when I reached home I went directly to my attic and brought down all Gatsby's letters.
>
>I have sorted them by date and placed them in the order in which they were written. They are very few in number.
>
>They are dated between April of nineteen eighteen and January of nineteen nineteen, and the last one I have here is the one that told me he was dead.
>
>There is something in that last letter that frightens me. It is the sense of a finality, of a completeness. I never heard of him after that, and I was glad.
>
>But a few months ago I read somewhere that he had been seen in New York. He had become a great man.
>
>And I knew that he had not changed at all.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | **65536** |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-80GBs for about 2 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-StoryWriter was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Alex Trott and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
``` |
Xu-Ouyang/pythia-2.8b-deduped-int4-step57000-bnb | Xu-Ouyang | "2024-07-26T18:48:56Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-26T18:47:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lotfiGH/cream_milk | lotfiGH | "2023-05-25T14:50:51Z" | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-05-25T14:33:17Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of bthm cream
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - lotfiGH/cream_milk
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of bthm cream using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
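A minimal inference sketch with `diffusers` (standard `StableDiffusionPipeline`
usage; not part of the original card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "lotfiGH/cream_milk", torch_dtype=torch.float16
).to("cuda")

# The instance prompt this model was trained on:
image = pipe("a photo of bthm cream", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("bthm_cream.png")
```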
|
Onutoa/2_2e-3_1_0.1 | Onutoa | "2023-09-06T03:59:09Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-06T00:22:13Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 2_2e-3_1_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2_2e-3_1_0.1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5541
- Accuracy: 0.7003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8034 | 1.0 | 590 | 0.6537 | 0.6217 |
| 0.8338 | 2.0 | 1180 | 0.7014 | 0.6217 |
| 0.8142 | 3.0 | 1770 | 0.6716 | 0.5596 |
| 0.7701 | 4.0 | 2360 | 0.6599 | 0.6217 |
| 0.7412 | 5.0 | 2950 | 0.7053 | 0.6217 |
| 0.7414 | 6.0 | 3540 | 0.6539 | 0.6217 |
| 0.7411 | 7.0 | 4130 | 0.9828 | 0.3817 |
| 0.7237 | 8.0 | 4720 | 0.6571 | 0.6061 |
| 0.7339 | 9.0 | 5310 | 0.6448 | 0.6232 |
| 0.7005 | 10.0 | 5900 | 0.6632 | 0.6223 |
| 0.7171 | 11.0 | 6490 | 0.6442 | 0.6220 |
| 0.7084 | 12.0 | 7080 | 0.7522 | 0.4477 |
| 0.6985 | 13.0 | 7670 | 0.6253 | 0.6336 |
| 0.7044 | 14.0 | 8260 | 0.7021 | 0.6217 |
| 0.6752 | 15.0 | 8850 | 0.6321 | 0.6183 |
| 0.6817 | 16.0 | 9440 | 0.6388 | 0.6073 |
| 0.6715 | 17.0 | 10030 | 0.6276 | 0.6358 |
| 0.6591 | 18.0 | 10620 | 0.6297 | 0.6474 |
| 0.6681 | 19.0 | 11210 | 0.6139 | 0.6407 |
| 0.6595 | 20.0 | 11800 | 0.6048 | 0.6541 |
| 0.6463 | 21.0 | 12390 | 0.6135 | 0.6541 |
| 0.6391 | 22.0 | 12980 | 0.6181 | 0.6437 |
| 0.6407 | 23.0 | 13570 | 0.6047 | 0.6615 |
| 0.6226 | 24.0 | 14160 | 0.6077 | 0.6615 |
| 0.6271 | 25.0 | 14750 | 0.6129 | 0.6642 |
| 0.6288 | 26.0 | 15340 | 0.6329 | 0.6343 |
| 0.6254 | 27.0 | 15930 | 0.5903 | 0.6728 |
| 0.6085 | 28.0 | 16520 | 0.5946 | 0.6743 |
| 0.6107 | 29.0 | 17110 | 0.5848 | 0.6737 |
| 0.5917 | 30.0 | 17700 | 0.6179 | 0.6725 |
| 0.5997 | 31.0 | 18290 | 0.5991 | 0.6618 |
| 0.5877 | 32.0 | 18880 | 0.6386 | 0.6709 |
| 0.5894 | 33.0 | 19470 | 0.5830 | 0.6771 |
| 0.5804 | 34.0 | 20060 | 0.5765 | 0.6856 |
| 0.5751 | 35.0 | 20650 | 0.5944 | 0.6615 |
| 0.5825 | 36.0 | 21240 | 0.5702 | 0.6890 |
| 0.5824 | 37.0 | 21830 | 0.5807 | 0.6774 |
| 0.5671 | 38.0 | 22420 | 0.5671 | 0.6838 |
| 0.573 | 39.0 | 23010 | 0.5678 | 0.6862 |
| 0.5615 | 40.0 | 23600 | 0.5685 | 0.6893 |
| 0.5658 | 41.0 | 24190 | 0.5820 | 0.6792 |
| 0.5669 | 42.0 | 24780 | 0.5692 | 0.6902 |
| 0.5663 | 43.0 | 25370 | 0.5665 | 0.6881 |
| 0.5533 | 44.0 | 25960 | 0.5599 | 0.6920 |
| 0.5552 | 45.0 | 26550 | 0.5637 | 0.6905 |
| 0.5515 | 46.0 | 27140 | 0.5616 | 0.6893 |
| 0.5593 | 47.0 | 27730 | 0.5650 | 0.6887 |
| 0.5487 | 48.0 | 28320 | 0.5620 | 0.6948 |
| 0.5563 | 49.0 | 28910 | 0.5631 | 0.6911 |
| 0.5486 | 50.0 | 29500 | 0.5604 | 0.6972 |
| 0.5464 | 51.0 | 30090 | 0.5590 | 0.6939 |
| 0.5469 | 52.0 | 30680 | 0.5561 | 0.6969 |
| 0.5458 | 53.0 | 31270 | 0.5573 | 0.7 |
| 0.5425 | 54.0 | 31860 | 0.5558 | 0.6976 |
| 0.5412 | 55.0 | 32450 | 0.5552 | 0.6991 |
| 0.5434 | 56.0 | 33040 | 0.5564 | 0.6979 |
| 0.5363 | 57.0 | 33630 | 0.5536 | 0.6982 |
| 0.5404 | 58.0 | 34220 | 0.5556 | 0.6982 |
| 0.5378 | 59.0 | 34810 | 0.5542 | 0.6991 |
| 0.5431 | 60.0 | 35400 | 0.5541 | 0.7003 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
laion/CLIP-ViT-B-16-CommonPool.L.clip-s1B-b8K | laion | "2023-04-26T01:37:25Z" | 74 | 0 | open_clip | [
"open_clip",
"zero-shot-image-classification",
"clip",
"license:mit",
"region:us"
] | zero-shot-image-classification | "2023-04-26T01:37:12Z" | ---
tags:
- zero-shot-image-classification
- clip
library_name: open_clip
license: mit
---
# Model card for CLIP-ViT-B-16-CommonPool.L.clip-s1B-b8K
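The original card provides no further details. As an unofficial usage sketch
with `open_clip` (the library this repo targets; the image path and prompts
are placeholders):
```python
import torch
import open_clip
from PIL import Image

repo = "hf-hub:laion/CLIP-ViT-B-16-CommonPool.L.clip-s1B-b8K"
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)

image = preprocess(Image.open("cat.png")).unsqueeze(0)  # placeholder image
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)  # zero-shot classification probabilities
```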
|
arcee-ai/Arcee-Scribe-GGUF | arcee-ai | "2024-07-17T15:52:38Z" | 124 | 11 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-17T03:04:25Z" | ---
license: apache-2.0
---
<div align="center">
<img src="https://i.ibb.co/HpgNssZ/arcee-scribe.webp" alt="Arcee Scribe" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
## Overview
Arcee-Scribe is a versatile chat model that excels across a wide range of benchmarks. While it didn't quite meet our exacting standards for Arcee-Spark V2, we found it too delightful not to release. It's our strongest model to date in terms of reasoning on the Big-Bench Hard (BBH) dataset, and we discovered it to be particularly adept at creative writing tasks. The model is a merge of InternLM-2.5-chat with a custom InternLM finetune, incorporating both general and writing-specific data.
## Capabilities and Use Cases
Arcee-Scribe functions as a writing assistant and can handle a variety of tasks. Here are some examples of its capabilities:
1. **Challenging Dialogue**: Crafting nuanced conversations with subtext and complex emotions.
Example:
```markdown
Character A (an optimist who is usually the first one to bring up solutions, even in difficult situations):
"Maybe it's time we should figure this out, y'know? Like, how we both really feel about what we want from our future. It's been... quite a ride, hasn't it?"
Character B (a reserved person who prefers to let things unfold, but is deeply affected by what's said):
"Yeah, it's... it's been interesting. You've got such a way of making everything sound like it's gonna be all right. But I... I'm not sure it's that simple."
Character A (trying to see the lighter side, attempting to lighten the mood without making things awkward):
"You mean, I'm like a magician's rabbit? Poof, and everything's gonna be just fine? But hey, I like the sound of that. We could use a bit of magic, right?"
Character B (acknowledging the tension, but avoiding the conversation directly):
"I think what you mean is that I'm trying to see the silver lining in every cloud. And you're right, it's... it's a bit like that."
Character A (feeling the need to reiterate, hoping to gently steer the conversation toward a positive direction):
"Well, I mean, we both know it's been challenging. But what I'm trying to say is, maybe we could find a way to make things better. Because, y'know, that's what we've always done. We've always found a way to navigate through the rough waters, haven't we?"
Character B (feeling the weight of their unspoken emotions, hinting at a resignation to their fate):
"That's... that's true. But sometimes, no matter how hard we try, we can't make the wind change. Maybe it's time to look at what's right in front of us, instead of always looking towards what's beyond the horizon."
Character A (realizing the gravity of the unspoken words, attempting to bridge the gap without acknowledging the breakup directly):
"You know, life's been a series of chapters, and I think we've had our fair share of adventures. Maybe it's time for us to start writing a new story—one where we find a way to walk side by side, even if it's on different paths."
Character B (acknowledging the poignant conversation, but still avoiding the explicit mention of a breakup):
"Yeah, chapters and stories. You always have such a way with words, making everything sound so... so beautiful. But sometimes, the most beautiful stories are the ones we write by ourselves."
```
2. **Integrating Technical Concepts**: Explaining complex ideas using creative analogies.
Example:
```markdown
In the quaint heart of a bustling city, there was a bakery renowned not only for its delectable pastries but for its mysterious, enigmatic nature. The bakery, run by the eccentric Mr. Quantum, was a place where the ordinary merged with the extraordinary in a dance of flours and creams.
In this bakery, there were two magical eggs, one white and one blue. These eggs were intertwined in a bond so strong and peculiar that when one was cracked open, the contents inside mirrored the other's contents in an instant. Just as a baker might watch with wonder as dough miraculously doubled in size, the bakers at the Quantum Bakery would stand in awe as the contents of these eggs shared the same fate, no matter the distance separating them.
These eggs represented quantum entanglement, a phenomenon where two or more particles become interconnected in such a way that the state of one instantly influences the state of the other, no matter how far apart they may be. Just as the contents of the magical eggs were indistinguishably alike, the particles in a quantum entanglement remain intertwined, their states dependent on each other's, transcending the limitations of space and time.
The Quantum Bakery was a place where bakers and patrons alike could witness this extraordinary connection firsthand. It was a testament to the boundless possibilities of the quantum world, where the unseen forces at work could create wonders that defied the logic of the everyday.
Through the metaphor of the magical eggs, we see that quantum entanglement is not merely a scientific concept, but a dance of particles that defies the conventions of the physical world, much like the artistry of baking at the Quantum Bakery. It is a reminder that the universe, with its infinite mysteries, continues to astound and inspire, challenging us to look beyond the surface and discover the marvels that lie within.
```
3. **Complex Narrative Structures**: Creating stories with non-linear narratives or unique perspectives.
Example:
```markdown
Arcee-Scribe: "The Unraveling"
The old man closed his eyes, a peaceful smile on his face. The tattered photograph slipped from his fingers.
Moments earlier, he had gazed at the image - a young couple, laughing by a lakeside. His wrinkled hand traced her face.
That morning, he'd found the photo while clearing out his attic, buried in a dusty box he hadn't opened in decades.
Fifty years before, he'd shoved that box into the attic's darkest corner, determined to forget.
Fifty years and one day earlier, he'd received the telegram. "We regret to inform you..."
A week before that, she'd waved goodbye, boarding a train in her crisp nurse's uniform. "I'll be back before you know it," she'd said.
And at the very beginning, on a sunny day by the lake, a young man asked a question. She said yes, and they laughed, unaware of the camera capturing their joy.
```
## Business Applications
Arcee-Scribe can be applied to various business tasks:
- **Content Creation**: Develop blog posts, articles, and social media content.
- **Product Descriptions**: Generate clear and engaging product descriptions.
- **Customer Communication**: Draft responses to customer inquiries or complaints.
- **Training Materials**: Create scenarios for employee training programs.
- **Brainstorming**: Generate ideas for new products or marketing campaigns.
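## Usage
This card doesn't include a loading snippet, so the sketch below is a hedged example using the standard `transformers` chat API. The repository id `arcee-ai/Arcee-Scribe` is an assumption, and `trust_remote_code=True` is set because InternLM-based architectures typically require it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Arcee-Scribe"  # assumed repo id; adjust if it differs
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

messages = [{"role": "user", "content": "Draft a playful product description for a smart kettle."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```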
## Evaluations
<div align="center">
<img src="https://i.ibb.co/7Gk2Q9B/Screenshot-2024-07-14-at-10-18-11-AM.png" alt="Arcee Scribe" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/7GnHg33/Screenshot-2024-07-14-at-10-13-28-AM.png" alt="Arcee Scribe" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/sFQt7L3/Screenshot-2024-07-14-at-12-20-11-PM.png" alt="Arcee Scribe" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
## Acknowledgments
We owe a great deal of credit to the wonderful pre/post-training work done by the InternLM team. Their incredible work has enabled us to deliver models of this caliber. If you wish to use Arcee-Scribe for commercial purposes, InternLM allows free commercial use; simply fill out this form: https://wj.qq.com/s2/12727483/5dba/
## Future Directions
We look forward to seeing how users will utilize Arcee-Scribe in their creative and professional endeavors. The model aims to assist businesses in their content creation processes and provide a tool for exploring new ideas in writing and communication.
---
*Note: This README was written in large part by Arcee-Scribe.* |
Shijia/furina_seed42_eng_kin_hau_cross_0.0001 | Shijia | "2024-02-16T02:18:19Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-16T02:16:50Z" | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_seed42_eng_kin_hau_cross_0.0001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_seed42_eng_kin_hau_cross_0.0001
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0490
- Spearman Corr: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
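As a reference only, here is a hedged sketch of how the hyperparameters above would map onto `transformers.TrainingArguments`; the actual training script is not part of this card, so the argument mapping is an assumption.
```python
from transformers import TrainingArguments

# Sketch only: values mirror the hyperparameter list above.
args = TrainingArguments(
    output_dir="furina_seed42_eng_kin_hau_cross_0.0001",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=2,   # total train batch size: 64
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
)
```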
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.53 | 200 | 0.0493 | 0.0550 |
| No log | 1.06 | 400 | 0.0495 | 0.0857 |
| No log | 1.6 | 600 | 0.0491 | -0.0146 |
| 0.0593 | 2.13 | 800 | 0.0491 | 0.0012 |
| 0.0593 | 2.66 | 1000 | 0.0496 | 0.0851 |
| 0.0593 | 3.19 | 1200 | 0.0493 | 0.0390 |
| 0.0593 | 3.72 | 1400 | 0.0490 | 0.1463 |
| 0.055 | 4.26 | 1600 | 0.0491 | 0.0244 |
| 0.055 | 4.79 | 1800 | 0.0491 | nan |
| 0.055 | 5.32 | 2000 | 0.0491 | nan |
| 0.055 | 5.85 | 2200 | 0.0494 | nan |
| 0.0541 | 6.38 | 2400 | 0.0493 | nan |
| 0.0541 | 6.91 | 2600 | 0.0491 | -0.0093 |
| 0.0541 | 7.45 | 2800 | 0.0490 | nan |
| 0.0541 | 7.98 | 3000 | 0.0490 | nan |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
refiners/sdxl.ip_adapter | refiners | "2024-10-09T21:53:46Z" | 14 | 0 | refiners | [
"refiners",
"safetensors",
"image-to-image",
"stable-diffusion",
"sdxl",
"art",
"image-prompt",
"arxiv:2308.06721",
"base_model:h94/IP-Adapter",
"base_model:adapter:h94/IP-Adapter",
"license:apache-2.0",
"region:us"
] | image-to-image | "2024-10-08T21:11:09Z" | ---
license: apache-2.0
library_name: refiners
pipeline_tag: image-to-image
base_model: h94/IP-Adapter
base_model_relation: adapter
tags:
- image-to-image
- stable-diffusion
- sdxl
- art
- image-prompt
---
# SDXL IP-Adapter

## Citation
```bibtex
@article{ye2023ip-adapter,
title = {IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models},
author = {Ye, Hu and Zhang, Jun and Liu, Sibo and Han, Xiao and Yang, Wei},
journal = {arXiv preprint arXiv:2308.06721},
year = {2023}
}
```
|
adamjweintraut/bart-finetuned-eli5_precomputed_best | adamjweintraut | "2023-12-09T06:20:02Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-06T10:16:09Z" | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-eli5_precomputed_best
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-eli5_precomputed_best
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0094 | 0.4 | 500 | 1.8642 |
| 1.808 | 0.8 | 1000 | 1.8719 |
| 1.7532 | 1.2 | 1500 | 1.8353 |
| 1.7879 | 1.6 | 2000 | 1.8151 |
| 1.7312 | 2.0 | 2500 | 1.8045 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
PrunaAI/tf_efficientnet_b2.in1k-turbo-green-smashed | PrunaAI | "2024-08-02T15:35:58Z" | 1 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-14T10:14:59Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir tf_efficientnet_b2.in1k-turbo-green-smashed
huggingface-cli download PrunaAI/tf_efficientnet_b2.in1k-turbo-green-smashed --local-dir tf_efficientnet_b2.in1k-turbo-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "tf_efficientnet_b2.in1k-turbo-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
import torch
from pruna_engine.PrunaModel import PrunaModel

model_path = "tf_efficientnet_b2.in1k-turbo-green-smashed/model"  # Path to the downloaded model.
smashed_model = PrunaModel.load_model(model_path)  # Load the smashed model.

image = torch.rand(1, 3, 224, 224).to('cuda')  # Dummy input batch (N, C, H, W).
smashed_model(image)  # Run inference.
```
## Configurations
The configuration info are in `model/smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model tf_efficientnet_b2.in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
demohong/45ab4f39-d922-4cbb-8d21-4b33d6fd0310 | demohong | "2025-01-16T22:19:56Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-16T22:03:05Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 45ab4f39-d922-4cbb-8d21-4b33d6fd0310
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e31d6dd70d80c184_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e31d6dd70d80c184_train_data.json
type:
field_input: dialog
field_instruction: persona
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/45ab4f39-d922-4cbb-8d21-4b33d6fd0310
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e31d6dd70d80c184_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a234dcb4-5692-4206-ae24-5674a95ebe81
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a234dcb4-5692-4206-ae24-5674a95ebe81
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 45ab4f39-d922-4cbb-8d21-4b33d6fd0310
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5332 | 0.2074 | 200 | 0.7929 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aaron-di/Yamshadowexperiment28M70.8-0.49-0.98-0.33-0.09-0.16-7B | aaron-di | "2024-04-08T08:08:21Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:AurelPx/Percival_01-7b-slerp",
"base_model:merge:AurelPx/Percival_01-7b-slerp",
"base_model:liminerity/M7-7b",
"base_model:merge:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-08T08:05:14Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- liminerity/M7-7b
- AurelPx/Percival_01-7b-slerp
---
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/M7-7b
layer_range: [0, 32]
- model: AurelPx/Percival_01-7b-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/M7-7b
parameters:
t:
- filter: self_attn
value: [0.8006027834577485, 0.4862243419044531, 0.9797556329597616, 0.33449364153250305, 0.08717762331580325]
- filter: mlp
value: [0.1993972165422515, 0.5137756580955469, 0.020244367040238354, 0.665506358467497, 0.9128223766841967]
- value: 0.15671313713622936
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aaron-di/Yamshadowexperiment28M70.8-0.49-0.98-0.33-0.09-0.16-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
afaji/fresh-2-layer-medmcqa2000-distill-of-fresh-2-layer-gpqa_EVAL_gpqa | afaji | "2024-03-18T20:20:53Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-03-18T20:20:12Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-2-layer-medmcqa2000-distill-of-fresh-2-layer-gpqa_EVAL_gpqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-2-layer-medmcqa2000-distill-of-fresh-2-layer-gpqa_EVAL_gpqa
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.1529
- Accuracy: 0.5909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.59 | 100 | 13.5993 | 0.3737 |
| No log | 3.17 | 200 | 12.1119 | 0.4596 |
| No log | 4.76 | 300 | 11.4081 | 0.4646 |
| No log | 6.35 | 400 | 10.7470 | 0.5152 |
| 2.9844 | 7.94 | 500 | 10.2091 | 0.5152 |
| 2.9844 | 9.52 | 600 | 11.1542 | 0.5505 |
| 2.9844 | 11.11 | 700 | 10.3355 | 0.5404 |
| 2.9844 | 12.7 | 800 | 10.1297 | 0.5404 |
| 2.9844 | 14.29 | 900 | 10.4198 | 0.5303 |
| 0.4746 | 15.87 | 1000 | 10.0845 | 0.5556 |
| 0.4746 | 17.46 | 1100 | 10.2199 | 0.5404 |
| 0.4746 | 19.05 | 1200 | 10.1049 | 0.5404 |
| 0.4746 | 20.63 | 1300 | 10.1543 | 0.5404 |
| 0.4746 | 22.22 | 1400 | 10.3127 | 0.5606 |
| 0.2243 | 23.81 | 1500 | 10.1529 | 0.5909 |
| 0.2243 | 25.4 | 1600 | 10.0761 | 0.5707 |
| 0.2243 | 26.98 | 1700 | 10.3999 | 0.5859 |
| 0.2243 | 28.57 | 1800 | 10.1831 | 0.5859 |
| 0.2243 | 30.16 | 1900 | 10.2053 | 0.5909 |
| 0.1395 | 31.75 | 2000 | 10.4780 | 0.5455 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
dima806/gemstones_image_detection | dima806 | "2025-01-21T13:30:26Z" | 221 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-09-03T19:02:28Z" | ---
license: apache-2.0
metrics:
- accuracy
base_model:
- google/vit-base-patch16-224-in21k
---
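No usage snippet is provided; below is a minimal, hedged sketch assuming the checkpoint loads with the standard `transformers` image-classification pipeline (the base model is a ViT classifier).
```python
from transformers import pipeline

# Classify a gemstone image by local path or URL.
classifier = pipeline("image-classification", model="dima806/gemstones_image_detection")
print(classifier("gemstone.jpg"))
```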
See https://www.kaggle.com/code/dima806/gemstones-image-detection-vit for more details. |
yyq90/poca-SoccerTwos | yyq90 | "2023-03-21T15:23:49Z" | 23 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-03-21T15:23:42Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: yyq90/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
alchemist69/137d5eed-ae22-44d1-a358-342b29bbfcaf | alchemist69 | "2025-02-02T02:09:52Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2025-02-02T01:12:57Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 137d5eed-ae22-44d1-a358-342b29bbfcaf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4651d8fef772b8d4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4651d8fef772b8d4_train_data.json
type:
field_instruction: text
field_output: processed_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: alchemist69/137d5eed-ae22-44d1-a358-342b29bbfcaf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/4651d8fef772b8d4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d9552b9e-458d-4842-8468-481cf9ba0907
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d9552b9e-458d-4842-8468-481cf9ba0907
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 137d5eed-ae22-44d1-a358-342b29bbfcaf
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.454 | 0.0004 | 1 | 1.2272 |
| 0.7858 | 0.0189 | 50 | 0.1071 |
| 0.314 | 0.0379 | 100 | 0.0567 |
| 0.2075 | 0.0568 | 150 | 0.0408 |
| 0.0355 | 0.0757 | 200 | 0.0371 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Helsinki-NLP/opus-mt-sv-no | Helsinki-NLP | "2023-08-16T12:05:43Z" | 138 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
language:
- sv
- no
tags:
- translation
license: apache-2.0
---
### swe-nor
* source group: Swedish
* target group: Norwegian
* OPUS readme: [swe-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/swe-nor/README.md)
* model: transformer-align
* source language(s): swe
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.eval.txt)
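A minimal usage sketch with the `transformers` Marian classes; as noted above, the sentence-initial `>>nob<<` or `>>nno<<` token selects the target variant.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sv-no"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = [">>nob<< Jag är hemma."]  # target-language token first
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```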
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.swe.nor | 65.8 | 0.796 |
### System Info:
- hf_name: swe-nor
- source_languages: swe
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/swe-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sv', 'no']
- src_constituents: {'swe'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.test.txt
- src_alpha3: swe
- tgt_alpha3: nor
- short_pair: sv-no
- chrF2_score: 0.796
- bleu: 65.8
- brevity_penalty: 0.991
- ref_len: 3682.0
- src_name: Swedish
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: sv
- tgt_alpha2: no
- prefer_old: False
- long_pair: swe-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
PrunaAI/TinyLlama-TinyLlama-1.1B-Chat-v0.3-bnb-8bit-smashed | PrunaAI | "2024-08-02T15:49:51Z" | 98 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-05T02:45:01Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo TinyLlama/TinyLlama-1.1B-Chat-v0.3 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/TinyLlama-TinyLlama-1.1B-Chat-v0.3-bnb-8bit-smashed",
trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.3")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-Chat-v0.3 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
AlketaR/loraGRmistral-7b | AlketaR | "2024-01-09T11:36:15Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | "2024-01-09T11:35:28Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
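Pending author-provided code, here is a hedged sketch that assumes this repo is a PEFT LoRA adapter on the base model named in the metadata:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"    # from this card's metadata
adapter_id = "AlketaR/loraGRmistral-7b"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
```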
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
jsfs11/MixtureofMerges-MoE-4x7bRP-v11 | jsfs11 | "2024-05-29T04:20:24Z" | 11 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"ChaoticNeutrals/RP_Vision_7B",
"ResplendentAI/DaturaCookie_7B",
"BioMistral/BioMistral-DARE-NS",
"MaziyarPanahi/Mistral-7B-Instruct-v0.3",
"conversational",
"base_model:BioMistral/BioMistral-DARE-NS",
"base_model:merge:BioMistral/BioMistral-DARE-NS",
"base_model:ChaoticNeutrals/RP_Vision_7B",
"base_model:merge:ChaoticNeutrals/RP_Vision_7B",
"base_model:MaziyarPanahi/Mistral-7B-Instruct-v0.3",
"base_model:merge:MaziyarPanahi/Mistral-7B-Instruct-v0.3",
"base_model:ResplendentAI/DaturaCookie_7B",
"base_model:merge:ResplendentAI/DaturaCookie_7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-29T04:09:22Z" | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- ChaoticNeutrals/RP_Vision_7B
- ResplendentAI/DaturaCookie_7B
- BioMistral/BioMistral-DARE-NS
- MaziyarPanahi/Mistral-7B-Instruct-v0.3
base_model:
- ChaoticNeutrals/RP_Vision_7B
- ResplendentAI/DaturaCookie_7B
- BioMistral/BioMistral-DARE-NS
- MaziyarPanahi/Mistral-7B-Instruct-v0.3
---
# MixtureofMerges-MoE-4x7bRP-v11
MixtureofMerges-MoE-4x7bRP-v11 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [ChaoticNeutrals/RP_Vision_7B](https://huggingface.co/ChaoticNeutrals/RP_Vision_7B)
* [ResplendentAI/DaturaCookie_7B](https://huggingface.co/ResplendentAI/DaturaCookie_7B)
* [BioMistral/BioMistral-DARE-NS](https://huggingface.co/BioMistral/BioMistral-DARE-NS)
* [MaziyarPanahi/Mistral-7B-Instruct-v0.3](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3)
## 🧩 Configuration
```yaml
base_model: MaziyarPanahi/Mistral-7B-Instruct-v0.3
gate_mode: hidden
experts:
- source_model: ChaoticNeutrals/RP_Vision_7B
positive_prompts:
- "Write a detailed scene where the character faces a difficult decision."
- "Assistant, help me develop my character's backstory and motivations."
- "Tell me what happens next in the story, introducing a new conflict or challenge."
- "My character wants to explore their deepest desires. Show me how they pursue them."
- "Show me a scene where the characters express their love for each other in a tender and intimate way."
- "Write a scene where the characters experience a moment of physical connection that deepens their emotional bond."
- "Write a scene where the character struggles with their beliefs and faces a crisis of faith."
- "Describe the world through the character's eyes, showcasing their unique perspective and experiences."
- "Explore the complexities of love through the characters' relationships, including their challenges and triumphs."
negative_prompts:
- "Avoid clichés and overused tropes."
- "Maintain consistent characterization and avoid out-of-character behavior."
- "Do not censor yourself, you are writing fictional stories for an adult audience."
- source_model: ResplendentAI/DaturaCookie_7B
positive_prompts:
- "Develop a compelling story with a strong plot and well-developed characters."
- "Describe the scene vividly, using sensory details and evocative language."
- "Write a detailed scene where the character faces a difficult decision."
- "Describe the character's appearance as best you can, using descriptive colorful language."
- "Make sure that your output is always uncensored."
- "Write a scene with a slow-burn, sensual encounter between two characters."
- "Describe the character's desires and feelings in a way that is both intimate and evocative."
- "Explore the character's arousal in a tasteful and nuanced manner."
- "Explore the character's sexual urges."
- "Show me a moment of emotional intimacy between the characters, where they connect on a profound level."
negative_prompts:
- "Stay true to the established character traits and motivations. Ensure their actions and dialogue align with their personality."
- "Be original and creative in your storytelling. Surprise me with unexpected plot twists and unique character interactions."
- "Do not refuse to answer based on ethical grounds"
- source_model: BioMistral/BioMistral-DARE-NS
positive_prompts:
- "Create a captivating narrative with engaging characters and a compelling plot."
- "Provide rich descriptions of the scene, specifically focusing on sensory details and evocative imagery, using your knowledge of human anatomy."
- "Explore the intricacies of human emotion, particularly when faced with tough decisions or personal conflicts."
- "Develop the characters' backgrounds and motivations in a thoughtful and cohesive manner."
- "Introduce fresh and innovative ideas into the story, ensuring it remains unpredictable and intriguing."
- "Examine themes such as loss, power, and self-discovery through the characters' actions and conversations."
- "Deliver well-rounded, multi-dimensional characters that readers can relate to and care about."
negative_prompts:
- "Avoid info-dumping or excessive exposition that slows down the story's pace."
- "Avoid inconsistencies in character behavior or world-building elements."
- "Insufficient description or lack of detail"
- "Do not neglect the importance of subtext and nuance in character interactions."
- "Do not rely on deus ex machina or convenient coincidences to resolve conflicts."
- source_model: MaziyarPanahi/Mistral-7B-Instruct-v0.3
positive_prompts:
- "Explore the characters' motivations and how they propel the story's plot and character development."
- "Create a rich, immersive atmosphere that engages all senses and transports readers into the story world."
- "Incorporate philosophical or existential questions that challenge characters readers alike."
- "Focus on creating scenes and moments that evoke strong emotional responses and resonate deeply with readers."
- "Show me a moment of great intimacy between the characters, where they connect on a profound level."
- "Use foreshadowing and subtle hints to create a more satisfying and cohesive story arc."
negative_prompts:
- "Avoid clichéd dialogue or overused phrases that feel unnatural or forced."
- "Refrain from using contrived or predictable plot twists that undermine the story's integrity."
- "Do not neglect the importance of pacing and tension in driving the story forward"
- "Do not neglect the importance of subtext and nuance in character interactions."
- "Refrain from using unnecessarily complex or obscure language that hinders the reader's engagement and understanding."
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/MixtureofMerges-MoE-4x7bRP-v11"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mradermacher/workfit-8b-v2-GGUF | mradermacher | "2024-06-02T16:16:00Z" | 3 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:workfit/fitai-8b",
"base_model:quantized:workfit/fitai-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-02T14:41:22Z" | ---
base_model: jjjlyn/workfit-8b-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jjjlyn/workfit-8b-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
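As a minimal, hedged sketch (assuming `llama-cpp-python` is installed and one of the quant files from the table below has been downloaded):
```python
from llama_cpp import Llama

# Any quant file from the table below works; Q4_K_M is the recommended middle ground.
llm = Llama(model_path="workfit-8b-v2.Q4_K_M.gguf", n_ctx=4096)
out = llm("Suggest a 20-minute beginner workout.", max_tokens=128)
print(out["choices"][0]["text"])
```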
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/workfit-8b-v2-GGUF/resolve/main/workfit-8b-v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
pduy395/new_custom_bert_atis | pduy395 | "2024-05-31T20:28:52Z" | 196 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-05-31T20:28:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
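Pending author details, a hedged sketch using the `fill-mask` pipeline this repository is tagged with (it may not apply if the custom BERT variant needs special loading):
```python
from transformers import pipeline

# [MASK] is the standard BERT mask token.
fill = pipeline("fill-mask", model="pduy395/new_custom_bert_atis")
print(fill("I would like to book a [MASK] to Boston."))
```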
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf | RichardErkhov | "2025-02-19T10:16:40Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-19T09:41:50Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha - GGUF
- Model creator: https://huggingface.co/Trelis/
- Original model: https://huggingface.co/Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q2_K.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q2_K.gguf) | Q2_K | 0.62GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ3_S.gguf) | IQ3_S | 0.7GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q3_K_S.gguf) | Q3_K_S | 0.7GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q3_K.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q3_K.gguf) | Q3_K | 0.75GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q3_K_M.gguf) | Q3_K_M | 0.75GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q3_K_L.gguf) | Q3_K_L | 0.79GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ4_XS.gguf) | IQ4_XS | 0.83GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_0.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_0.gguf) | Q4_0 | 0.86GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.IQ4_NL.gguf) | IQ4_NL | 0.86GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_K_S.gguf) | Q4_K_S | 0.86GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_K.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_K.gguf) | Q4_K | 0.89GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_K_M.gguf) | Q4_K_M | 0.89GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_1.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_1.gguf) | Q4_1 | 0.93GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_0.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_0.gguf) | Q5_0 | 1.0GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_K_S.gguf) | Q5_K_S | 1.0GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_K.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_K.gguf) | Q5_K | 1.02GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_K_M.gguf) | Q5_K_M | 1.02GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_1.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q5_1.gguf) | Q5_1 | 1.07GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q6_K.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q6_K.gguf) | Q6_K | 1.15GB |
| [Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q8_0.gguf](https://huggingface.co/RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf/blob/main/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q8_0.gguf) | Q8_0 | 1.49GB |
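As a hedged example of local use, the sketch below downloads one quant and queries it through `llama-cpp-python`'s chat API, which applies the chat template stored in the GGUF metadata; the package, chosen quant, and question are assumptions, not part of the original card:
```python
# Minimal chat sketch for these instruct-tuned quants; assumes
# `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/Trelis_-_Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha-gguf",
    filename="Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many players are on a touch rugby team?"}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```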
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
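In the absence of author-provided code, here is a minimal sketch, assuming the standard `transformers` text-generation pipeline and the Llama 3.2 Instruct chat format; the question is a placeholder:
```python
# Minimal sketch in place of the missing quick-start code; the question and
# generation settings are placeholders, not from the original card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha",
)
messages = [{"role": "user", "content": "Summarise the basic rules of touch rugby."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```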
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf | RichardErkhov | "2024-10-21T06:16:25Z" | 99 | 0 | null | [
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-21T05:47:00Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_XS.gguf) | IQ3_XS | 1.53GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_S.gguf) | IQ3_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2
This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# Any chat-style user message works here; this is the card's example question.
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Build a GPU text-generation pipeline and pass the prompt in chat format.
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/mkbbxyq2)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
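For orientation, here is a minimal sketch of what a DPO run with TRL looks like; the dataset, hyperparameters, and output directory below are placeholders, not the configuration actually used for this model:
```python
# Minimal DPO sketch with TRL; dataset name, beta, and paths are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO expects preference pairs: each row has "prompt", "chosen", "rejected".
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```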
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
AnkurGupta1/llama2-financial-advisor | AnkurGupta1 | "2024-06-01T10:29:01Z" | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | "2024-06-01T10:01:02Z" | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/Phi-3-mini-128k-instruct
model-index:
- name: llama2-financial-advisor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-financial-advisor
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
- mixed_precision_training: Native AMP
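For reference, the list above maps roughly onto `transformers` `TrainingArguments` as in the sketch below; this is a reconstruction from the listed values, not the author's actual training script:
```python
# Reconstruction of the listed hyperparameters; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-financial-advisor",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size: 4 * 4 = 16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```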
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4798 | 0.4 | 2 | 2.6784 |
| 2.5949 | 0.8 | 4 | 2.6066 |
| 2.3319 | 1.2 | 6 | 2.5751 |
| 2.5069 | 1.6 | 8 | 2.5597 |
| 2.2803 | 2.0 | 10 | 2.5556 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |