Datasets:

| modelId (string, 5–127 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–223M) | likes (int64, 0–8.08k) | library_name (349 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (53 classes) | createdAt (unknown) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
pyrotank41/llama3-8b-escrow-unsloth | pyrotank41 | "2024-05-05T04:28:17" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-05T04:27:53" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** pyrotank41
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
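For convenience, a hedged inference sketch with 🤗 Transformers. It assumes the repository holds full (or merged) causal-LM weights; if it only contains LoRA adapters, the base model would need to be loaded first and the adapter attached with PEFT:

```python
# Hedged sketch: load the uploaded checkpoint and generate a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pyrotank41/llama3-8b-escrow-unsloth"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain what an escrow account is in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```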
|
makcedward/Llama-3.2-1B-Instruct-AdaLoRA-Adapter | makcedward | "2025-02-17T16:32:18" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-17T16:32:14" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
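Until the card is completed, a hedged loading sketch. The repository name suggests an AdaLoRA adapter for Llama-3.2-1B-Instruct, but neither the base model nor the adapter format is stated in this card, so treat both as assumptions:

```python
# Hedged sketch only: the base model id below is inferred from the repository name
# and is not documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"  # assumption
adapter_id = "makcedward/Llama-3.2-1B-Instruct-AdaLoRA-Adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the (assumed) AdaLoRA adapter
```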
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ekaterinatao/nerel-bio-rubert-base | ekaterinatao | "2024-02-27T08:42:59" | 120 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-02-27T08:37:05" | ---
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nerel-bio-rubert-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerel-bio-rubert-base
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6122
- Precision: 0.7873
- Recall: 0.7882
- F1: 0.7878
- Accuracy: 0.8601
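For quick experimentation, a minimal token-classification inference sketch (the input sentence is illustrative; entity labels come from the checkpoint's config):

```python
# Minimal sketch: run the fine-tuned checkpoint as a token-classification (NER) pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ekaterinatao/nerel-bio-rubert-base",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
# Illustrative Russian biomedical sentence (the base model is RuBERT).
print(ner("Пациенту назначен аспирин при ишемической болезни сердца."))
```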
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
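For reference, these settings roughly correspond to the following 🤗 `TrainingArguments` (an illustrative mapping, not the original training script):

```python
# Illustrative mapping of the hyperparameters above onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="nerel-bio-rubert-base",
    learning_rate=5e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=64,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```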
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 102 | 1.1211 | 0.6196 | 0.5809 | 0.5996 | 0.7125 |
| No log | 2.0 | 204 | 0.6800 | 0.7333 | 0.7165 | 0.7248 | 0.8137 |
| No log | 3.0 | 306 | 0.5985 | 0.7445 | 0.7488 | 0.7466 | 0.8303 |
| No log | 4.0 | 408 | 0.5673 | 0.7608 | 0.7622 | 0.7615 | 0.8402 |
| 0.7954 | 5.0 | 510 | 0.5665 | 0.7751 | 0.7702 | 0.7726 | 0.8485 |
| 0.7954 | 6.0 | 612 | 0.5934 | 0.7826 | 0.7742 | 0.7784 | 0.8544 |
| 0.7954 | 7.0 | 714 | 0.5804 | 0.7795 | 0.7751 | 0.7773 | 0.8527 |
| 0.7954 | 8.0 | 816 | 0.6075 | 0.7839 | 0.7878 | 0.7858 | 0.8577 |
| 0.7954 | 9.0 | 918 | 0.6139 | 0.7887 | 0.7889 | 0.7888 | 0.8614 |
| 0.1024 | 10.0 | 1020 | 0.6122 | 0.7873 | 0.7882 | 0.7878 | 0.8601 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
CarlBrendt/gpt-neox-20b_new | CarlBrendt | "2023-11-18T17:21:45" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/gpt-neox-20b",
"base_model:adapter:EleutherAI/gpt-neox-20b",
"region:us"
] | null | "2023-11-18T17:21:32" | ---
library_name: peft
base_model: EleutherAI/gpt-neox-20b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
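A hedged sketch of reproducing this setup at load time, mirroring the values above with `transformers`' `BitsAndBytesConfig` (loading GPT-NeoX-20B in 4-bit still requires a sizeable CUDA GPU):

```python
# Sketch only: mirrors the bitsandbytes settings listed above and attaches the adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "CarlBrendt/gpt-neox-20b_new")  # attach the PEFT adapter
```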
### Framework versions
- PEFT 0.6.3.dev0
|
dimasik1987/d0b444d3-4c6e-4c6f-a3e0-c745e20ca3f8 | dimasik1987 | "2025-01-20T23:36:04" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | "2025-01-20T23:35:41" | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0b444d3-4c6e-4c6f-a3e0-c745e20ca3f8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dd22c8863ed4176b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dd22c8863ed4176b_train_data.json
type:
field_input: text
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik1987/d0b444d3-4c6e-4c6f-a3e0-c745e20ca3f8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/dd22c8863ed4176b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ee0747e5-378f-43ac-83d3-8dd08d6876bf
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ee0747e5-378f-43ac-83d3-8dd08d6876bf
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# d0b444d3-4c6e-4c6f-a3e0-c745e20ca3f8
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 (overriding the defaults betas=(0.9,0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0292 | 1 | 10.3601 |
| 10.3594 | 0.1460 | 5 | 10.3562 |
| 10.3515 | 0.2920 | 10 | 10.3464 |
| 10.3409 | 0.4380 | 15 | 10.3383 |
| 10.337 | 0.5839 | 20 | 10.3330 |
| 10.3318 | 0.7299 | 25 | 10.3307 |
| 10.3292 | 0.8759 | 30 | 10.3303 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mostafahaggag/sd-class-butterflies-32 | mostafahaggag | "2022-11-28T17:37:32" | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2022-11-28T17:37:23" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("mostafahaggag/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
aseratus1/ce6ed156-3158-40ac-9687-2f66641dc8a1 | aseratus1 | "2025-01-27T17:20:51" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-27T17:15:01" | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ce6ed156-3158-40ac-9687-2f66641dc8a1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f5fd1429b6536180_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f5fd1429b6536180_train_data.json
type:
field_input: topic
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aseratus1/ce6ed156-3158-40ac-9687-2f66641dc8a1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f5fd1429b6536180_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e62d9c43-feae-452f-aae2-7fd9ee1a8839
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e62d9c43-feae-452f-aae2-7fd9ee1a8839
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ce6ed156-3158-40ac-9687-2f66641dc8a1
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2307 | 0.0393 | 200 | 2.0980 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
speedra500/test-llm | speedra500 | "2024-12-27T20:41:44" | 17 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-27T02:24:38" | ---
license: mit
tags:
- unsloth
- trl
- sft
base_model:
- unsloth/Llama-3.2-3B-Instruct
library_name: transformers
--- |
Pankaj001/ObjectDetection | Pankaj001 | "2024-06-06T10:01:32" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-06T09:58:00" | This repository contains the YOLO model trained on the COCO Dataset for Object Detection. The model gave an IoU of about 69% on the test dataset and can be used as a onnx model
Model Details:
* Task: Object Detection
* Architecture : YOLO Model stored in onnx format
* Input Size: Varying size but resized to 640x640 pixels with 3 channels(RGB)
* Dataset : The model is trained on unnormalized dataset
* Dataset used: PASCAL VOC Dataset which is a subset of COCO Dataset
* IoU: 69% |
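A minimal inference sketch with ONNX Runtime. The file name and output layout below are assumptions for illustration; the preprocessing (640x640 resize, no normalization) follows the details above, but box decoding and NMS depend on how the model was exported.

```python
# Hedged sketch: run the exported model with ONNX Runtime.
# The .onnx file name is hypothetical; adapt it to the files in this repo.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("yolo_object_detection.onnx")  # hypothetical file name
input_name = session.get_inputs()[0].name

image = Image.open("example.jpg").convert("RGB").resize((640, 640))          # 640x640, 3 channels
tensor = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[None, ...]   # HWC -> NCHW, unnormalized
outputs = session.run(None, {input_name: tensor})
print([o.shape for o in outputs])  # raw detections; box decoding and NMS are model-specific
```
|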
vicky6/dummy-model_ | vicky6 | "2024-02-27T14:11:50" | 70 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-02-27T14:10:33" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
base_model: google-bert/bert-base-cased
model-index:
- name: dummy-model_
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model_
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Tokenizers 0.15.2
|
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task5_organization_fold1 | MayBashendy | "2024-11-27T12:53:29" | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-27T12:39:17" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task5_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task5_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6862
- Qwk: 0.3574
- Mse: 1.6862
- Rmse: 1.2985
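Until the card is completed, a hedged inference sketch. The metrics above (Qwk, MSE, RMSE) suggest an ordinal scoring head, but the number and meaning of output labels are not documented, so the interpretation of the logits is an assumption:

```python
# Hedged sketch: load the checkpoint and score one Arabic sentence.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task5_organization_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص تجريبي لتقييم التنظيم", return_tensors="pt")  # illustrative Arabic input
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # whether this is a score or class logits depends on the undocumented training setup
```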
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0208 | 2 | 2.6274 | 0.0211 | 2.6274 | 1.6209 |
| No log | 0.0417 | 4 | 1.5282 | 0.2508 | 1.5282 | 1.2362 |
| No log | 0.0625 | 6 | 1.2640 | 0.3571 | 1.2640 | 1.1243 |
| No log | 0.0833 | 8 | 1.2201 | 0.2765 | 1.2201 | 1.1046 |
| No log | 0.1042 | 10 | 1.1214 | 0.4593 | 1.1214 | 1.0590 |
| No log | 0.125 | 12 | 1.0986 | 0.4017 | 1.0986 | 1.0481 |
| No log | 0.1458 | 14 | 1.0338 | 0.3207 | 1.0338 | 1.0168 |
| No log | 0.1667 | 16 | 1.0372 | 0.2964 | 1.0372 | 1.0184 |
| No log | 0.1875 | 18 | 1.1989 | 0.1880 | 1.1989 | 1.0949 |
| No log | 0.2083 | 20 | 1.3651 | 0.1880 | 1.3651 | 1.1684 |
| No log | 0.2292 | 22 | 1.3087 | 0.2008 | 1.3087 | 1.1440 |
| No log | 0.25 | 24 | 1.1204 | 0.2008 | 1.1204 | 1.0585 |
| No log | 0.2708 | 26 | 1.0236 | 0.2980 | 1.0236 | 1.0118 |
| No log | 0.2917 | 28 | 0.9676 | 0.3856 | 0.9676 | 0.9837 |
| No log | 0.3125 | 30 | 0.9312 | 0.4572 | 0.9312 | 0.9650 |
| No log | 0.3333 | 32 | 0.9255 | 0.5183 | 0.9255 | 0.9620 |
| No log | 0.3542 | 34 | 0.8809 | 0.4777 | 0.8809 | 0.9386 |
| No log | 0.375 | 36 | 0.8624 | 0.5253 | 0.8624 | 0.9286 |
| No log | 0.3958 | 38 | 0.8795 | 0.5324 | 0.8795 | 0.9378 |
| No log | 0.4167 | 40 | 0.8756 | 0.5324 | 0.8756 | 0.9357 |
| No log | 0.4375 | 42 | 0.8790 | 0.4938 | 0.8790 | 0.9376 |
| No log | 0.4583 | 44 | 0.8351 | 0.5215 | 0.8351 | 0.9139 |
| No log | 0.4792 | 46 | 0.8509 | 0.5005 | 0.8509 | 0.9224 |
| No log | 0.5 | 48 | 0.8620 | 0.4896 | 0.8620 | 0.9284 |
| No log | 0.5208 | 50 | 0.8762 | 0.4600 | 0.8762 | 0.9361 |
| No log | 0.5417 | 52 | 0.8786 | 0.4579 | 0.8786 | 0.9373 |
| No log | 0.5625 | 54 | 0.8912 | 0.4395 | 0.8912 | 0.9440 |
| No log | 0.5833 | 56 | 0.9012 | 0.4756 | 0.9012 | 0.9493 |
| No log | 0.6042 | 58 | 0.8943 | 0.4245 | 0.8943 | 0.9457 |
| No log | 0.625 | 60 | 0.9216 | 0.4481 | 0.9216 | 0.9600 |
| No log | 0.6458 | 62 | 1.1372 | 0.2679 | 1.1372 | 1.0664 |
| No log | 0.6667 | 64 | 1.3679 | 0.2779 | 1.3679 | 1.1696 |
| No log | 0.6875 | 66 | 1.5693 | 0.3180 | 1.5693 | 1.2527 |
| No log | 0.7083 | 68 | 1.4781 | 0.4152 | 1.4781 | 1.2158 |
| No log | 0.7292 | 70 | 1.1520 | 0.3512 | 1.1520 | 1.0733 |
| No log | 0.75 | 72 | 0.9709 | 0.3926 | 0.9709 | 0.9853 |
| No log | 0.7708 | 74 | 0.9778 | 0.3738 | 0.9778 | 0.9888 |
| No log | 0.7917 | 76 | 1.0549 | 0.3783 | 1.0549 | 1.0271 |
| No log | 0.8125 | 78 | 1.2477 | 0.4121 | 1.2477 | 1.1170 |
| No log | 0.8333 | 80 | 1.3859 | 0.2934 | 1.3859 | 1.1772 |
| No log | 0.8542 | 82 | 1.2810 | 0.3457 | 1.2810 | 1.1318 |
| No log | 0.875 | 84 | 1.1279 | 0.2828 | 1.1279 | 1.0620 |
| No log | 0.8958 | 86 | 1.1453 | 0.3239 | 1.1453 | 1.0702 |
| No log | 0.9167 | 88 | 1.3205 | 0.2934 | 1.3205 | 1.1491 |
| No log | 0.9375 | 90 | 1.3885 | 0.2779 | 1.3885 | 1.1783 |
| No log | 0.9583 | 92 | 1.3479 | 0.2429 | 1.3479 | 1.1610 |
| No log | 0.9792 | 94 | 1.2145 | 0.3070 | 1.2145 | 1.1020 |
| No log | 1.0 | 96 | 1.3026 | 0.2391 | 1.3026 | 1.1413 |
| No log | 1.0208 | 98 | 1.3558 | 0.2767 | 1.3558 | 1.1644 |
| No log | 1.0417 | 100 | 1.3125 | 0.3904 | 1.3125 | 1.1456 |
| No log | 1.0625 | 102 | 1.1370 | 0.4569 | 1.1370 | 1.0663 |
| No log | 1.0833 | 104 | 1.0274 | 0.4617 | 1.0274 | 1.0136 |
| No log | 1.1042 | 106 | 1.3102 | 0.4261 | 1.3102 | 1.1446 |
| No log | 1.125 | 108 | 1.5871 | 0.3423 | 1.5871 | 1.2598 |
| No log | 1.1458 | 110 | 1.6843 | 0.3163 | 1.6843 | 1.2978 |
| No log | 1.1667 | 112 | 1.5257 | 0.3459 | 1.5257 | 1.2352 |
| No log | 1.1875 | 114 | 1.2084 | 0.4555 | 1.2084 | 1.0993 |
| No log | 1.2083 | 116 | 1.1084 | 0.4569 | 1.1084 | 1.0528 |
| No log | 1.2292 | 118 | 1.3066 | 0.4123 | 1.3066 | 1.1431 |
| No log | 1.25 | 120 | 1.5538 | 0.3808 | 1.5538 | 1.2465 |
| No log | 1.2708 | 122 | 2.0961 | 0.2711 | 2.0961 | 1.4478 |
| No log | 1.2917 | 124 | 1.9157 | 0.3040 | 1.9157 | 1.3841 |
| No log | 1.3125 | 126 | 1.4894 | 0.3935 | 1.4894 | 1.2204 |
| No log | 1.3333 | 128 | 1.7419 | 0.3157 | 1.7419 | 1.3198 |
| No log | 1.3542 | 130 | 1.6206 | 0.3524 | 1.6206 | 1.2730 |
| No log | 1.375 | 132 | 1.1907 | 0.4448 | 1.1907 | 1.0912 |
| No log | 1.3958 | 134 | 1.2174 | 0.4139 | 1.2174 | 1.1034 |
| No log | 1.4167 | 136 | 1.3632 | 0.3906 | 1.3632 | 1.1676 |
| No log | 1.4375 | 138 | 1.5633 | 0.3694 | 1.5633 | 1.2503 |
| No log | 1.4583 | 140 | 1.5952 | 0.3480 | 1.5952 | 1.2630 |
| No log | 1.4792 | 142 | 1.4603 | 0.3480 | 1.4603 | 1.2084 |
| No log | 1.5 | 144 | 1.3481 | 0.3904 | 1.3481 | 1.1611 |
| No log | 1.5208 | 146 | 1.5564 | 0.3289 | 1.5564 | 1.2476 |
| No log | 1.5417 | 148 | 1.7653 | 0.2970 | 1.7653 | 1.3287 |
| No log | 1.5625 | 150 | 1.6029 | 0.3163 | 1.6029 | 1.2661 |
| No log | 1.5833 | 152 | 1.2712 | 0.4210 | 1.2712 | 1.1275 |
| No log | 1.6042 | 154 | 1.0498 | 0.5218 | 1.0498 | 1.0246 |
| No log | 1.625 | 156 | 1.2234 | 0.4348 | 1.2234 | 1.1061 |
| No log | 1.6458 | 158 | 1.6761 | 0.3384 | 1.6761 | 1.2946 |
| No log | 1.6667 | 160 | 2.3121 | 0.2924 | 2.3121 | 1.5206 |
| No log | 1.6875 | 162 | 2.2841 | 0.2924 | 2.2841 | 1.5113 |
| No log | 1.7083 | 164 | 1.7647 | 0.3529 | 1.7647 | 1.3284 |
| No log | 1.7292 | 166 | 1.0397 | 0.5113 | 1.0397 | 1.0197 |
| No log | 1.75 | 168 | 0.8802 | 0.5218 | 0.8802 | 0.9382 |
| No log | 1.7708 | 170 | 0.9861 | 0.4956 | 0.9861 | 0.9930 |
| No log | 1.7917 | 172 | 1.3861 | 0.3605 | 1.3861 | 1.1773 |
| No log | 1.8125 | 174 | 1.7771 | 0.3238 | 1.7771 | 1.3331 |
| No log | 1.8333 | 176 | 1.8538 | 0.2736 | 1.8538 | 1.3616 |
| No log | 1.8542 | 178 | 1.5881 | 0.3340 | 1.5881 | 1.2602 |
| No log | 1.875 | 180 | 1.2897 | 0.4102 | 1.2897 | 1.1356 |
| No log | 1.8958 | 182 | 1.1675 | 0.3960 | 1.1675 | 1.0805 |
| No log | 1.9167 | 184 | 1.2422 | 0.4247 | 1.2422 | 1.1146 |
| No log | 1.9375 | 186 | 1.5249 | 0.3827 | 1.5249 | 1.2349 |
| No log | 1.9583 | 188 | 2.1242 | 0.2986 | 2.1242 | 1.4575 |
| No log | 1.9792 | 190 | 2.3119 | 0.2777 | 2.3119 | 1.5205 |
| No log | 2.0 | 192 | 2.0697 | 0.2775 | 2.0697 | 1.4387 |
| No log | 2.0208 | 194 | 1.5222 | 0.3827 | 1.5222 | 1.2338 |
| No log | 2.0417 | 196 | 1.1524 | 0.3991 | 1.1524 | 1.0735 |
| No log | 2.0625 | 198 | 1.1191 | 0.4149 | 1.1191 | 1.0579 |
| No log | 2.0833 | 200 | 1.3117 | 0.4241 | 1.3117 | 1.1453 |
| No log | 2.1042 | 202 | 1.5375 | 0.3700 | 1.5375 | 1.2400 |
| No log | 2.125 | 204 | 1.5882 | 0.3574 | 1.5882 | 1.2602 |
| No log | 2.1458 | 206 | 1.4075 | 0.4040 | 1.4075 | 1.1864 |
| No log | 2.1667 | 208 | 1.1793 | 0.4655 | 1.1793 | 1.0860 |
| No log | 2.1875 | 210 | 1.0536 | 0.4969 | 1.0536 | 1.0264 |
| No log | 2.2083 | 212 | 1.1078 | 0.4969 | 1.1078 | 1.0525 |
| No log | 2.2292 | 214 | 1.1655 | 0.4504 | 1.1655 | 1.0796 |
| No log | 2.25 | 216 | 1.3135 | 0.3960 | 1.3135 | 1.1461 |
| No log | 2.2708 | 218 | 1.3589 | 0.4175 | 1.3589 | 1.1657 |
| No log | 2.2917 | 220 | 1.3665 | 0.3907 | 1.3665 | 1.1690 |
| No log | 2.3125 | 222 | 1.1596 | 0.4504 | 1.1596 | 1.0769 |
| No log | 2.3333 | 224 | 1.0657 | 0.4740 | 1.0657 | 1.0323 |
| No log | 2.3542 | 226 | 1.0565 | 0.4740 | 1.0565 | 1.0279 |
| No log | 2.375 | 228 | 1.2034 | 0.4623 | 1.2034 | 1.0970 |
| No log | 2.3958 | 230 | 1.4196 | 0.3960 | 1.4196 | 1.1915 |
| No log | 2.4167 | 232 | 1.5546 | 0.3613 | 1.5546 | 1.2468 |
| No log | 2.4375 | 234 | 1.6058 | 0.3613 | 1.6058 | 1.2672 |
| No log | 2.4583 | 236 | 1.6035 | 0.3613 | 1.6035 | 1.2663 |
| No log | 2.4792 | 238 | 1.4278 | 0.3739 | 1.4278 | 1.1949 |
| No log | 2.5 | 240 | 1.3681 | 0.3960 | 1.3681 | 1.1696 |
| No log | 2.5208 | 242 | 1.3394 | 0.3960 | 1.3394 | 1.1573 |
| No log | 2.5417 | 244 | 1.3665 | 0.3960 | 1.3665 | 1.1690 |
| No log | 2.5625 | 246 | 1.5810 | 0.3475 | 1.5810 | 1.2574 |
| No log | 2.5833 | 248 | 1.8963 | 0.3330 | 1.8963 | 1.3770 |
| No log | 2.6042 | 250 | 1.8972 | 0.3330 | 1.8972 | 1.3774 |
| No log | 2.625 | 252 | 1.6437 | 0.3697 | 1.6437 | 1.2821 |
| No log | 2.6458 | 254 | 1.4334 | 0.3737 | 1.4334 | 1.1972 |
| No log | 2.6667 | 256 | 1.4196 | 0.3734 | 1.4196 | 1.1915 |
| No log | 2.6875 | 258 | 1.6052 | 0.3697 | 1.6052 | 1.2670 |
| No log | 2.7083 | 260 | 1.9296 | 0.3181 | 1.9296 | 1.3891 |
| No log | 2.7292 | 262 | 2.0539 | 0.2638 | 2.0539 | 1.4332 |
| No log | 2.75 | 264 | 1.9151 | 0.3197 | 1.9151 | 1.3839 |
| No log | 2.7708 | 266 | 1.5370 | 0.3605 | 1.5370 | 1.2398 |
| No log | 2.7917 | 268 | 1.2951 | 0.4288 | 1.2951 | 1.1380 |
| No log | 2.8125 | 270 | 1.3123 | 0.4207 | 1.3123 | 1.1455 |
| No log | 2.8333 | 272 | 1.5368 | 0.4042 | 1.5368 | 1.2397 |
| No log | 2.8542 | 274 | 1.7053 | 0.3189 | 1.7053 | 1.3059 |
| No log | 2.875 | 276 | 1.6095 | 0.3683 | 1.6095 | 1.2687 |
| No log | 2.8958 | 278 | 1.6022 | 0.3766 | 1.6022 | 1.2658 |
| No log | 2.9167 | 280 | 1.5723 | 0.3888 | 1.5723 | 1.2539 |
| No log | 2.9375 | 282 | 1.6353 | 0.3569 | 1.6353 | 1.2788 |
| No log | 2.9583 | 284 | 1.6888 | 0.3408 | 1.6888 | 1.2995 |
| No log | 2.9792 | 286 | 1.5049 | 0.3605 | 1.5049 | 1.2268 |
| No log | 3.0 | 288 | 1.4521 | 0.3605 | 1.4521 | 1.2050 |
| No log | 3.0208 | 290 | 1.2131 | 0.4636 | 1.2131 | 1.1014 |
| No log | 3.0417 | 292 | 1.1533 | 0.4671 | 1.1533 | 1.0739 |
| No log | 3.0625 | 294 | 1.2774 | 0.4468 | 1.2774 | 1.1302 |
| No log | 3.0833 | 296 | 1.6104 | 0.3475 | 1.6104 | 1.2690 |
| No log | 3.1042 | 298 | 1.8812 | 0.3212 | 1.8812 | 1.3716 |
| No log | 3.125 | 300 | 1.9046 | 0.3424 | 1.9046 | 1.3801 |
| No log | 3.1458 | 302 | 1.8386 | 0.3937 | 1.8386 | 1.3559 |
| No log | 3.1667 | 304 | 1.5473 | 0.4059 | 1.5473 | 1.2439 |
| No log | 3.1875 | 306 | 1.2620 | 0.4331 | 1.2620 | 1.1234 |
| No log | 3.2083 | 308 | 1.2862 | 0.4465 | 1.2862 | 1.1341 |
| No log | 3.2292 | 310 | 1.4430 | 0.4382 | 1.4430 | 1.2013 |
| No log | 3.25 | 312 | 1.5322 | 0.4255 | 1.5322 | 1.2378 |
| No log | 3.2708 | 314 | 1.4815 | 0.4189 | 1.4815 | 1.2172 |
| No log | 3.2917 | 316 | 1.3425 | 0.4175 | 1.3425 | 1.1587 |
| No log | 3.3125 | 318 | 1.3360 | 0.3960 | 1.3360 | 1.1558 |
| No log | 3.3333 | 320 | 1.5401 | 0.3986 | 1.5401 | 1.2410 |
| No log | 3.3542 | 322 | 1.5969 | 0.3737 | 1.5969 | 1.2637 |
| No log | 3.375 | 324 | 1.4647 | 0.3775 | 1.4647 | 1.2103 |
| No log | 3.3958 | 326 | 1.2988 | 0.4241 | 1.2988 | 1.1396 |
| No log | 3.4167 | 328 | 1.3310 | 0.4241 | 1.3310 | 1.1537 |
| No log | 3.4375 | 330 | 1.5170 | 0.3570 | 1.5170 | 1.2317 |
| No log | 3.4583 | 332 | 1.6771 | 0.3446 | 1.6771 | 1.2950 |
| No log | 3.4792 | 334 | 1.8152 | 0.3189 | 1.8152 | 1.3473 |
| No log | 3.5 | 336 | 1.7382 | 0.3538 | 1.7382 | 1.3184 |
| No log | 3.5208 | 338 | 1.6292 | 0.3783 | 1.6292 | 1.2764 |
| No log | 3.5417 | 340 | 1.4035 | 0.3960 | 1.4035 | 1.1847 |
| No log | 3.5625 | 342 | 1.3381 | 0.4018 | 1.3381 | 1.1568 |
| No log | 3.5833 | 344 | 1.4198 | 0.4016 | 1.4198 | 1.1916 |
| No log | 3.6042 | 346 | 1.6174 | 0.3960 | 1.6174 | 1.2718 |
| No log | 3.625 | 348 | 1.7250 | 0.3413 | 1.7250 | 1.3134 |
| No log | 3.6458 | 350 | 1.6282 | 0.4170 | 1.6282 | 1.2760 |
| No log | 3.6667 | 352 | 1.4134 | 0.4018 | 1.4134 | 1.1888 |
| No log | 3.6875 | 354 | 1.2642 | 0.3507 | 1.2642 | 1.1244 |
| No log | 3.7083 | 356 | 1.2664 | 0.3662 | 1.2664 | 1.1253 |
| No log | 3.7292 | 358 | 1.4428 | 0.4384 | 1.4428 | 1.2012 |
| No log | 3.75 | 360 | 1.6603 | 0.4242 | 1.6603 | 1.2885 |
| No log | 3.7708 | 362 | 1.8180 | 0.3700 | 1.8180 | 1.3483 |
| No log | 3.7917 | 364 | 1.8091 | 0.3499 | 1.8091 | 1.3450 |
| No log | 3.8125 | 366 | 1.7317 | 0.3781 | 1.7317 | 1.3160 |
| No log | 3.8333 | 368 | 1.6560 | 0.3779 | 1.6560 | 1.2868 |
| No log | 3.8542 | 370 | 1.5822 | 0.3960 | 1.5822 | 1.2579 |
| No log | 3.875 | 372 | 1.4541 | 0.4164 | 1.4541 | 1.2059 |
| No log | 3.8958 | 374 | 1.4265 | 0.4166 | 1.4265 | 1.1943 |
| No log | 3.9167 | 376 | 1.5149 | 0.4016 | 1.5149 | 1.2308 |
| No log | 3.9375 | 378 | 1.6751 | 0.3827 | 1.6751 | 1.2942 |
| No log | 3.9583 | 380 | 1.8447 | 0.3303 | 1.8447 | 1.3582 |
| No log | 3.9792 | 382 | 1.9467 | 0.3058 | 1.9467 | 1.3952 |
| No log | 4.0 | 384 | 1.8657 | 0.3503 | 1.8657 | 1.3659 |
| No log | 4.0208 | 386 | 1.6303 | 0.4242 | 1.6303 | 1.2768 |
| No log | 4.0417 | 388 | 1.4046 | 0.4177 | 1.4046 | 1.1852 |
| No log | 4.0625 | 390 | 1.3677 | 0.4384 | 1.3677 | 1.1695 |
| No log | 4.0833 | 392 | 1.5104 | 0.4242 | 1.5104 | 1.2290 |
| No log | 4.1042 | 394 | 1.5794 | 0.4242 | 1.5794 | 1.2567 |
| No log | 4.125 | 396 | 1.6711 | 0.4182 | 1.6711 | 1.2927 |
| No log | 4.1458 | 398 | 1.6277 | 0.3986 | 1.6277 | 1.2758 |
| No log | 4.1667 | 400 | 1.5032 | 0.4375 | 1.5032 | 1.2261 |
| No log | 4.1875 | 402 | 1.4641 | 0.4375 | 1.4641 | 1.2100 |
| No log | 4.2083 | 404 | 1.5076 | 0.4113 | 1.5076 | 1.2278 |
| No log | 4.2292 | 406 | 1.5599 | 0.4113 | 1.5599 | 1.2490 |
| No log | 4.25 | 408 | 1.5209 | 0.4039 | 1.5209 | 1.2332 |
| No log | 4.2708 | 410 | 1.4032 | 0.4172 | 1.4032 | 1.1846 |
| No log | 4.2917 | 412 | 1.4146 | 0.4172 | 1.4146 | 1.1894 |
| No log | 4.3125 | 414 | 1.4155 | 0.4172 | 1.4155 | 1.1897 |
| No log | 4.3333 | 416 | 1.3880 | 0.4172 | 1.3880 | 1.1781 |
| No log | 4.3542 | 418 | 1.3240 | 0.4379 | 1.3240 | 1.1506 |
| No log | 4.375 | 420 | 1.2124 | 0.4656 | 1.2124 | 1.1011 |
| No log | 4.3958 | 422 | 1.2521 | 0.4656 | 1.2521 | 1.1190 |
| No log | 4.4167 | 424 | 1.4073 | 0.4574 | 1.4073 | 1.1863 |
| No log | 4.4375 | 426 | 1.5810 | 0.4679 | 1.5810 | 1.2574 |
| No log | 4.4583 | 428 | 1.5291 | 0.4311 | 1.5291 | 1.2366 |
| No log | 4.4792 | 430 | 1.3922 | 0.4442 | 1.3922 | 1.1799 |
| No log | 4.5 | 432 | 1.3489 | 0.4528 | 1.3489 | 1.1614 |
| No log | 4.5208 | 434 | 1.3971 | 0.4522 | 1.3971 | 1.1820 |
| No log | 4.5417 | 436 | 1.4815 | 0.4442 | 1.4815 | 1.2172 |
| No log | 4.5625 | 438 | 1.5997 | 0.4311 | 1.5997 | 1.2648 |
| No log | 4.5833 | 440 | 1.5864 | 0.4311 | 1.5864 | 1.2595 |
| No log | 4.6042 | 442 | 1.4452 | 0.4379 | 1.4452 | 1.2022 |
| No log | 4.625 | 444 | 1.2787 | 0.4736 | 1.2787 | 1.1308 |
| No log | 4.6458 | 446 | 1.2470 | 0.4528 | 1.2470 | 1.1167 |
| No log | 4.6667 | 448 | 1.3239 | 0.4442 | 1.3239 | 1.1506 |
| No log | 4.6875 | 450 | 1.3138 | 0.4237 | 1.3138 | 1.1462 |
| No log | 4.7083 | 452 | 1.3478 | 0.4234 | 1.3478 | 1.1609 |
| No log | 4.7292 | 454 | 1.4180 | 0.4234 | 1.4180 | 1.1908 |
| No log | 4.75 | 456 | 1.4292 | 0.4442 | 1.4292 | 1.1955 |
| No log | 4.7708 | 458 | 1.4594 | 0.4442 | 1.4594 | 1.2081 |
| No log | 4.7917 | 460 | 1.6081 | 0.3985 | 1.6081 | 1.2681 |
| No log | 4.8125 | 462 | 1.6563 | 0.4126 | 1.6563 | 1.2870 |
| No log | 4.8333 | 464 | 1.5947 | 0.4246 | 1.5947 | 1.2628 |
| No log | 4.8542 | 466 | 1.5217 | 0.4308 | 1.5217 | 1.2336 |
| No log | 4.875 | 468 | 1.5114 | 0.4242 | 1.5114 | 1.2294 |
| No log | 4.8958 | 470 | 1.4418 | 0.4442 | 1.4418 | 1.2007 |
| No log | 4.9167 | 472 | 1.3956 | 0.4234 | 1.3956 | 1.1813 |
| No log | 4.9375 | 474 | 1.3941 | 0.4234 | 1.3941 | 1.1807 |
| No log | 4.9583 | 476 | 1.5111 | 0.4234 | 1.5111 | 1.2293 |
| No log | 4.9792 | 478 | 1.6845 | 0.3860 | 1.6845 | 1.2979 |
| No log | 5.0 | 480 | 1.7019 | 0.3737 | 1.7019 | 1.3046 |
| No log | 5.0208 | 482 | 1.7054 | 0.3652 | 1.7054 | 1.3059 |
| No log | 5.0417 | 484 | 1.7216 | 0.3652 | 1.7216 | 1.3121 |
| No log | 5.0625 | 486 | 1.6243 | 0.3827 | 1.6243 | 1.2745 |
| No log | 5.0833 | 488 | 1.5722 | 0.4231 | 1.5722 | 1.2539 |
| No log | 5.1042 | 490 | 1.5730 | 0.4231 | 1.5730 | 1.2542 |
| No log | 5.125 | 492 | 1.5653 | 0.4094 | 1.5653 | 1.2511 |
| No log | 5.1458 | 494 | 1.5562 | 0.4095 | 1.5562 | 1.2475 |
| No log | 5.1667 | 496 | 1.4984 | 0.4095 | 1.4984 | 1.2241 |
| No log | 5.1875 | 498 | 1.4562 | 0.4234 | 1.4562 | 1.2067 |
| 0.308 | 5.2083 | 500 | 1.4410 | 0.4234 | 1.4410 | 1.2004 |
| 0.308 | 5.2292 | 502 | 1.4692 | 0.4234 | 1.4692 | 1.2121 |
| 0.308 | 5.25 | 504 | 1.6340 | 0.4040 | 1.6340 | 1.2783 |
| 0.308 | 5.2708 | 506 | 1.7626 | 0.3419 | 1.7626 | 1.3276 |
| 0.308 | 5.2917 | 508 | 1.7895 | 0.3512 | 1.7895 | 1.3377 |
| 0.308 | 5.3125 | 510 | 1.7230 | 0.3784 | 1.7230 | 1.3126 |
| 0.308 | 5.3333 | 512 | 1.7706 | 0.3740 | 1.7706 | 1.3306 |
| 0.308 | 5.3542 | 514 | 1.7257 | 0.3860 | 1.7257 | 1.3137 |
| 0.308 | 5.375 | 516 | 1.5519 | 0.4040 | 1.5519 | 1.2458 |
| 0.308 | 5.3958 | 518 | 1.4382 | 0.4234 | 1.4382 | 1.1993 |
| 0.308 | 5.4167 | 520 | 1.4482 | 0.4234 | 1.4482 | 1.2034 |
| 0.308 | 5.4375 | 522 | 1.5142 | 0.4234 | 1.5142 | 1.2305 |
| 0.308 | 5.4583 | 524 | 1.5240 | 0.4234 | 1.5240 | 1.2345 |
| 0.308 | 5.4792 | 526 | 1.5856 | 0.3960 | 1.5856 | 1.2592 |
| 0.308 | 5.5 | 528 | 1.5765 | 0.3960 | 1.5765 | 1.2556 |
| 0.308 | 5.5208 | 530 | 1.5394 | 0.4014 | 1.5394 | 1.2407 |
| 0.308 | 5.5417 | 532 | 1.6108 | 0.3960 | 1.6108 | 1.2692 |
| 0.308 | 5.5625 | 534 | 1.6541 | 0.3960 | 1.6541 | 1.2861 |
| 0.308 | 5.5833 | 536 | 1.6017 | 0.4095 | 1.6017 | 1.2656 |
| 0.308 | 5.6042 | 538 | 1.5389 | 0.4154 | 1.5389 | 1.2405 |
| 0.308 | 5.625 | 540 | 1.5547 | 0.4154 | 1.5547 | 1.2469 |
| 0.308 | 5.6458 | 542 | 1.6310 | 0.4095 | 1.6310 | 1.2771 |
| 0.308 | 5.6667 | 544 | 1.6275 | 0.3960 | 1.6275 | 1.2757 |
| 0.308 | 5.6875 | 546 | 1.5940 | 0.4095 | 1.5940 | 1.2625 |
| 0.308 | 5.7083 | 548 | 1.5211 | 0.4154 | 1.5211 | 1.2333 |
| 0.308 | 5.7292 | 550 | 1.4842 | 0.4154 | 1.4842 | 1.2183 |
| 0.308 | 5.75 | 552 | 1.5404 | 0.4095 | 1.5404 | 1.2411 |
| 0.308 | 5.7708 | 554 | 1.6547 | 0.3828 | 1.6547 | 1.2863 |
| 0.308 | 5.7917 | 556 | 1.7452 | 0.3864 | 1.7452 | 1.3211 |
| 0.308 | 5.8125 | 558 | 1.7022 | 0.3864 | 1.7022 | 1.3047 |
| 0.308 | 5.8333 | 560 | 1.6624 | 0.4106 | 1.6624 | 1.2894 |
| 0.308 | 5.8542 | 562 | 1.6452 | 0.4108 | 1.6452 | 1.2826 |
| 0.308 | 5.875 | 564 | 1.7026 | 0.3864 | 1.7026 | 1.3048 |
| 0.308 | 5.8958 | 566 | 1.8440 | 0.3894 | 1.8440 | 1.3579 |
| 0.308 | 5.9167 | 568 | 1.8604 | 0.3894 | 1.8604 | 1.3640 |
| 0.308 | 5.9375 | 570 | 1.8763 | 0.3962 | 1.8763 | 1.3698 |
| 0.308 | 5.9583 | 572 | 1.8267 | 0.4235 | 1.8267 | 1.3515 |
| 0.308 | 5.9792 | 574 | 1.8156 | 0.4235 | 1.8156 | 1.3474 |
| 0.308 | 6.0 | 576 | 1.7934 | 0.4055 | 1.7934 | 1.3392 |
| 0.308 | 6.0208 | 578 | 1.8513 | 0.3748 | 1.8513 | 1.3606 |
| 0.308 | 6.0417 | 580 | 1.7985 | 0.3546 | 1.7985 | 1.3411 |
| 0.308 | 6.0625 | 582 | 1.7240 | 0.3582 | 1.7240 | 1.3130 |
| 0.308 | 6.0833 | 584 | 1.6591 | 0.3578 | 1.6591 | 1.2881 |
| 0.308 | 6.1042 | 586 | 1.6583 | 0.3578 | 1.6583 | 1.2877 |
| 0.308 | 6.125 | 588 | 1.6852 | 0.3578 | 1.6852 | 1.2982 |
| 0.308 | 6.1458 | 590 | 1.6560 | 0.3578 | 1.6560 | 1.2869 |
| 0.308 | 6.1667 | 592 | 1.6809 | 0.3578 | 1.6809 | 1.2965 |
| 0.308 | 6.1875 | 594 | 1.7089 | 0.3578 | 1.7089 | 1.3073 |
| 0.308 | 6.2083 | 596 | 1.7173 | 0.3578 | 1.7173 | 1.3105 |
| 0.308 | 6.2292 | 598 | 1.7329 | 0.3578 | 1.7329 | 1.3164 |
| 0.308 | 6.25 | 600 | 1.6897 | 0.3578 | 1.6897 | 1.2999 |
| 0.308 | 6.2708 | 602 | 1.7115 | 0.3578 | 1.7115 | 1.3082 |
| 0.308 | 6.2917 | 604 | 1.7153 | 0.3578 | 1.7153 | 1.3097 |
| 0.308 | 6.3125 | 606 | 1.7708 | 0.3461 | 1.7708 | 1.3307 |
| 0.308 | 6.3333 | 608 | 1.7584 | 0.3578 | 1.7584 | 1.3261 |
| 0.308 | 6.3542 | 610 | 1.7004 | 0.3578 | 1.7004 | 1.3040 |
| 0.308 | 6.375 | 612 | 1.6501 | 0.3621 | 1.6501 | 1.2846 |
| 0.308 | 6.3958 | 614 | 1.7021 | 0.3578 | 1.7021 | 1.3046 |
| 0.308 | 6.4167 | 616 | 1.7169 | 0.3578 | 1.7169 | 1.3103 |
| 0.308 | 6.4375 | 618 | 1.7769 | 0.3456 | 1.7769 | 1.3330 |
| 0.308 | 6.4583 | 620 | 1.8530 | 0.3122 | 1.8530 | 1.3613 |
| 0.308 | 6.4792 | 622 | 1.8253 | 0.3235 | 1.8253 | 1.3510 |
| 0.308 | 6.5 | 624 | 1.7284 | 0.3582 | 1.7284 | 1.3147 |
| 0.308 | 6.5208 | 626 | 1.6109 | 0.3703 | 1.6109 | 1.2692 |
| 0.308 | 6.5417 | 628 | 1.5279 | 0.3522 | 1.5279 | 1.2361 |
| 0.308 | 6.5625 | 630 | 1.5171 | 0.3522 | 1.5171 | 1.2317 |
| 0.308 | 6.5833 | 632 | 1.5050 | 0.3522 | 1.5050 | 1.2268 |
| 0.308 | 6.6042 | 634 | 1.5166 | 0.3746 | 1.5166 | 1.2315 |
| 0.308 | 6.625 | 636 | 1.5667 | 0.3746 | 1.5667 | 1.2517 |
| 0.308 | 6.6458 | 638 | 1.6328 | 0.3578 | 1.6328 | 1.2778 |
| 0.308 | 6.6667 | 640 | 1.6765 | 0.3582 | 1.6765 | 1.2948 |
| 0.308 | 6.6875 | 642 | 1.6317 | 0.3578 | 1.6317 | 1.2774 |
| 0.308 | 6.7083 | 644 | 1.6276 | 0.3617 | 1.6276 | 1.2758 |
| 0.308 | 6.7292 | 646 | 1.5852 | 0.3960 | 1.5852 | 1.2591 |
| 0.308 | 6.75 | 648 | 1.5239 | 0.3960 | 1.5239 | 1.2345 |
| 0.308 | 6.7708 | 650 | 1.5398 | 0.3739 | 1.5398 | 1.2409 |
| 0.308 | 6.7917 | 652 | 1.6157 | 0.3960 | 1.6157 | 1.2711 |
| 0.308 | 6.8125 | 654 | 1.7291 | 0.3582 | 1.7291 | 1.3149 |
| 0.308 | 6.8333 | 656 | 1.7641 | 0.3343 | 1.7641 | 1.3282 |
| 0.308 | 6.8542 | 658 | 1.7794 | 0.3343 | 1.7794 | 1.3339 |
| 0.308 | 6.875 | 660 | 1.7404 | 0.3373 | 1.7404 | 1.3192 |
| 0.308 | 6.8958 | 662 | 1.6618 | 0.3522 | 1.6618 | 1.2891 |
| 0.308 | 6.9167 | 664 | 1.5460 | 0.3791 | 1.5460 | 1.2434 |
| 0.308 | 6.9375 | 666 | 1.4392 | 0.4016 | 1.4392 | 1.1997 |
| 0.308 | 6.9583 | 668 | 1.4103 | 0.4016 | 1.4103 | 1.1875 |
| 0.308 | 6.9792 | 670 | 1.4427 | 0.4016 | 1.4427 | 1.2011 |
| 0.308 | 7.0 | 672 | 1.5329 | 0.4014 | 1.5329 | 1.2381 |
| 0.308 | 7.0208 | 674 | 1.6798 | 0.3617 | 1.6798 | 1.2961 |
| 0.308 | 7.0417 | 676 | 1.8512 | 0.3105 | 1.8512 | 1.3606 |
| 0.308 | 7.0625 | 678 | 1.9881 | 0.2989 | 1.9881 | 1.4100 |
| 0.308 | 7.0833 | 680 | 2.0132 | 0.3182 | 2.0132 | 1.4189 |
| 0.308 | 7.1042 | 682 | 1.9456 | 0.3091 | 1.9456 | 1.3948 |
| 0.308 | 7.125 | 684 | 1.8844 | 0.2808 | 1.8844 | 1.3727 |
| 0.308 | 7.1458 | 686 | 1.7526 | 0.3582 | 1.7526 | 1.3239 |
| 0.308 | 7.1667 | 688 | 1.6151 | 0.3517 | 1.6151 | 1.2709 |
| 0.308 | 7.1875 | 690 | 1.5554 | 0.3512 | 1.5554 | 1.2472 |
| 0.308 | 7.2083 | 692 | 1.5439 | 0.3789 | 1.5439 | 1.2425 |
| 0.308 | 7.2292 | 694 | 1.5384 | 0.3789 | 1.5384 | 1.2403 |
| 0.308 | 7.25 | 696 | 1.5522 | 0.3789 | 1.5522 | 1.2459 |
| 0.308 | 7.2708 | 698 | 1.5596 | 0.3789 | 1.5596 | 1.2488 |
| 0.308 | 7.2917 | 700 | 1.6367 | 0.3746 | 1.6367 | 1.2793 |
| 0.308 | 7.3125 | 702 | 1.7116 | 0.3582 | 1.7116 | 1.3083 |
| 0.308 | 7.3333 | 704 | 1.7067 | 0.3582 | 1.7067 | 1.3064 |
| 0.308 | 7.3542 | 706 | 1.7247 | 0.3662 | 1.7247 | 1.3133 |
| 0.308 | 7.375 | 708 | 1.7725 | 0.3662 | 1.7725 | 1.3313 |
| 0.308 | 7.3958 | 710 | 1.7635 | 0.3662 | 1.7635 | 1.3280 |
| 0.308 | 7.4167 | 712 | 1.7061 | 0.3574 | 1.7061 | 1.3062 |
| 0.308 | 7.4375 | 714 | 1.6461 | 0.3830 | 1.6461 | 1.2830 |
| 0.308 | 7.4583 | 716 | 1.6470 | 0.3830 | 1.6470 | 1.2834 |
| 0.308 | 7.4792 | 718 | 1.6493 | 0.3574 | 1.6493 | 1.2843 |
| 0.308 | 7.5 | 720 | 1.7063 | 0.3578 | 1.7063 | 1.3063 |
| 0.308 | 7.5208 | 722 | 1.7860 | 0.3253 | 1.7860 | 1.3364 |
| 0.308 | 7.5417 | 724 | 1.7872 | 0.3253 | 1.7872 | 1.3368 |
| 0.308 | 7.5625 | 726 | 1.8060 | 0.3343 | 1.8060 | 1.3439 |
| 0.308 | 7.5833 | 728 | 1.7801 | 0.3542 | 1.7801 | 1.3342 |
| 0.308 | 7.6042 | 730 | 1.7146 | 0.3578 | 1.7146 | 1.3094 |
| 0.308 | 7.625 | 732 | 1.6076 | 0.3574 | 1.6076 | 1.2679 |
| 0.308 | 7.6458 | 734 | 1.5195 | 0.3741 | 1.5195 | 1.2327 |
| 0.308 | 7.6667 | 736 | 1.4754 | 0.4014 | 1.4754 | 1.2147 |
| 0.308 | 7.6875 | 738 | 1.4769 | 0.4014 | 1.4769 | 1.2153 |
| 0.308 | 7.7083 | 740 | 1.5002 | 0.3741 | 1.5002 | 1.2248 |
| 0.308 | 7.7292 | 742 | 1.5593 | 0.3697 | 1.5593 | 1.2487 |
| 0.308 | 7.75 | 744 | 1.6387 | 0.3574 | 1.6387 | 1.2801 |
| 0.308 | 7.7708 | 746 | 1.7055 | 0.3578 | 1.7055 | 1.3060 |
| 0.308 | 7.7917 | 748 | 1.7908 | 0.3985 | 1.7908 | 1.3382 |
| 0.308 | 7.8125 | 750 | 1.8629 | 0.3894 | 1.8629 | 1.3649 |
| 0.308 | 7.8333 | 752 | 1.8830 | 0.3605 | 1.8830 | 1.3722 |
| 0.308 | 7.8542 | 754 | 1.8638 | 0.3525 | 1.8638 | 1.3652 |
| 0.308 | 7.875 | 756 | 1.8099 | 0.3456 | 1.8099 | 1.3453 |
| 0.308 | 7.8958 | 758 | 1.7730 | 0.3578 | 1.7730 | 1.3315 |
| 0.308 | 7.9167 | 760 | 1.7403 | 0.3578 | 1.7403 | 1.3192 |
| 0.308 | 7.9375 | 762 | 1.7265 | 0.3578 | 1.7265 | 1.3140 |
| 0.308 | 7.9583 | 764 | 1.7416 | 0.3578 | 1.7416 | 1.3197 |
| 0.308 | 7.9792 | 766 | 1.7387 | 0.3578 | 1.7387 | 1.3186 |
| 0.308 | 8.0 | 768 | 1.6835 | 0.3578 | 1.6835 | 1.2975 |
| 0.308 | 8.0208 | 770 | 1.5980 | 0.3697 | 1.5980 | 1.2641 |
| 0.308 | 8.0417 | 772 | 1.5455 | 0.3694 | 1.5455 | 1.2432 |
| 0.308 | 8.0625 | 774 | 1.5349 | 0.3694 | 1.5349 | 1.2389 |
| 0.308 | 8.0833 | 776 | 1.5411 | 0.3694 | 1.5411 | 1.2414 |
| 0.308 | 8.1042 | 778 | 1.5611 | 0.3697 | 1.5611 | 1.2494 |
| 0.308 | 8.125 | 780 | 1.6052 | 0.3570 | 1.6052 | 1.2670 |
| 0.308 | 8.1458 | 782 | 1.6332 | 0.3570 | 1.6332 | 1.2780 |
| 0.308 | 8.1667 | 784 | 1.6721 | 0.3578 | 1.6721 | 1.2931 |
| 0.308 | 8.1875 | 786 | 1.7004 | 0.3578 | 1.7004 | 1.3040 |
| 0.308 | 8.2083 | 788 | 1.6739 | 0.3578 | 1.6739 | 1.2938 |
| 0.308 | 8.2292 | 790 | 1.6134 | 0.3570 | 1.6134 | 1.2702 |
| 0.308 | 8.25 | 792 | 1.5592 | 0.3697 | 1.5592 | 1.2487 |
| 0.308 | 8.2708 | 794 | 1.5480 | 0.3697 | 1.5480 | 1.2442 |
| 0.308 | 8.2917 | 796 | 1.5680 | 0.3570 | 1.5680 | 1.2522 |
| 0.308 | 8.3125 | 798 | 1.6028 | 0.3574 | 1.6028 | 1.2660 |
| 0.308 | 8.3333 | 800 | 1.6737 | 0.3574 | 1.6737 | 1.2937 |
| 0.308 | 8.3542 | 802 | 1.7413 | 0.3574 | 1.7413 | 1.3196 |
| 0.308 | 8.375 | 804 | 1.7853 | 0.3578 | 1.7853 | 1.3362 |
| 0.308 | 8.3958 | 806 | 1.7950 | 0.3542 | 1.7950 | 1.3398 |
| 0.308 | 8.4167 | 808 | 1.7832 | 0.3578 | 1.7832 | 1.3353 |
| 0.308 | 8.4375 | 810 | 1.7699 | 0.3578 | 1.7699 | 1.3304 |
| 0.308 | 8.4583 | 812 | 1.7521 | 0.3578 | 1.7521 | 1.3237 |
| 0.308 | 8.4792 | 814 | 1.7106 | 0.3574 | 1.7106 | 1.3079 |
| 0.308 | 8.5 | 816 | 1.6527 | 0.3574 | 1.6527 | 1.2856 |
| 0.308 | 8.5208 | 818 | 1.6156 | 0.3574 | 1.6156 | 1.2711 |
| 0.308 | 8.5417 | 820 | 1.5943 | 0.3574 | 1.5943 | 1.2627 |
| 0.308 | 8.5625 | 822 | 1.5537 | 0.3570 | 1.5537 | 1.2465 |
| 0.308 | 8.5833 | 824 | 1.4959 | 0.3960 | 1.4959 | 1.2231 |
| 0.308 | 8.6042 | 826 | 1.4680 | 0.3739 | 1.4680 | 1.2116 |
| 0.308 | 8.625 | 828 | 1.4774 | 0.3739 | 1.4774 | 1.2155 |
| 0.308 | 8.6458 | 830 | 1.4845 | 0.3739 | 1.4845 | 1.2184 |
| 0.308 | 8.6667 | 832 | 1.4879 | 0.3739 | 1.4879 | 1.2198 |
| 0.308 | 8.6875 | 834 | 1.5152 | 0.3475 | 1.5152 | 1.2309 |
| 0.308 | 8.7083 | 836 | 1.5583 | 0.3570 | 1.5583 | 1.2483 |
| 0.308 | 8.7292 | 838 | 1.5921 | 0.3570 | 1.5921 | 1.2618 |
| 0.308 | 8.75 | 840 | 1.6250 | 0.3570 | 1.6250 | 1.2747 |
| 0.308 | 8.7708 | 842 | 1.6517 | 0.3574 | 1.6517 | 1.2852 |
| 0.308 | 8.7917 | 844 | 1.6500 | 0.3574 | 1.6500 | 1.2845 |
| 0.308 | 8.8125 | 846 | 1.6387 | 0.3570 | 1.6387 | 1.2801 |
| 0.308 | 8.8333 | 848 | 1.6212 | 0.3570 | 1.6212 | 1.2733 |
| 0.308 | 8.8542 | 850 | 1.5931 | 0.3570 | 1.5931 | 1.2622 |
| 0.308 | 8.875 | 852 | 1.5922 | 0.3570 | 1.5922 | 1.2618 |
| 0.308 | 8.8958 | 854 | 1.6112 | 0.3570 | 1.6112 | 1.2693 |
| 0.308 | 8.9167 | 856 | 1.6539 | 0.3574 | 1.6539 | 1.2861 |
| 0.308 | 8.9375 | 858 | 1.6996 | 0.3574 | 1.6996 | 1.3037 |
| 0.308 | 8.9583 | 860 | 1.7444 | 0.3574 | 1.7444 | 1.3208 |
| 0.308 | 8.9792 | 862 | 1.7831 | 0.3574 | 1.7831 | 1.3353 |
| 0.308 | 9.0 | 864 | 1.7882 | 0.3451 | 1.7882 | 1.3372 |
| 0.308 | 9.0208 | 866 | 1.7950 | 0.3451 | 1.7950 | 1.3398 |
| 0.308 | 9.0417 | 868 | 1.7846 | 0.3574 | 1.7846 | 1.3359 |
| 0.308 | 9.0625 | 870 | 1.7551 | 0.3574 | 1.7551 | 1.3248 |
| 0.308 | 9.0833 | 872 | 1.7194 | 0.3574 | 1.7194 | 1.3112 |
| 0.308 | 9.1042 | 874 | 1.6705 | 0.3574 | 1.6705 | 1.2925 |
| 0.308 | 9.125 | 876 | 1.6303 | 0.3574 | 1.6303 | 1.2769 |
| 0.308 | 9.1458 | 878 | 1.5932 | 0.3570 | 1.5932 | 1.2622 |
| 0.308 | 9.1667 | 880 | 1.5744 | 0.3570 | 1.5744 | 1.2548 |
| 0.308 | 9.1875 | 882 | 1.5496 | 0.3570 | 1.5496 | 1.2448 |
| 0.308 | 9.2083 | 884 | 1.5284 | 0.3340 | 1.5284 | 1.2363 |
| 0.308 | 9.2292 | 886 | 1.5217 | 0.3605 | 1.5217 | 1.2336 |
| 0.308 | 9.25 | 888 | 1.5318 | 0.3566 | 1.5318 | 1.2377 |
| 0.308 | 9.2708 | 890 | 1.5573 | 0.3570 | 1.5573 | 1.2479 |
| 0.308 | 9.2917 | 892 | 1.5747 | 0.3570 | 1.5747 | 1.2549 |
| 0.308 | 9.3125 | 894 | 1.5967 | 0.3574 | 1.5967 | 1.2636 |
| 0.308 | 9.3333 | 896 | 1.6163 | 0.3574 | 1.6163 | 1.2713 |
| 0.308 | 9.3542 | 898 | 1.6220 | 0.3574 | 1.6220 | 1.2736 |
| 0.308 | 9.375 | 900 | 1.6224 | 0.3574 | 1.6224 | 1.2737 |
| 0.308 | 9.3958 | 902 | 1.6356 | 0.3574 | 1.6356 | 1.2789 |
| 0.308 | 9.4167 | 904 | 1.6570 | 0.3574 | 1.6570 | 1.2873 |
| 0.308 | 9.4375 | 906 | 1.6770 | 0.3574 | 1.6770 | 1.2950 |
| 0.308 | 9.4583 | 908 | 1.7015 | 0.3574 | 1.7015 | 1.3044 |
| 0.308 | 9.4792 | 910 | 1.7095 | 0.3574 | 1.7095 | 1.3075 |
| 0.308 | 9.5 | 912 | 1.7139 | 0.3574 | 1.7139 | 1.3092 |
| 0.308 | 9.5208 | 914 | 1.7149 | 0.3574 | 1.7149 | 1.3095 |
| 0.308 | 9.5417 | 916 | 1.7025 | 0.3574 | 1.7025 | 1.3048 |
| 0.308 | 9.5625 | 918 | 1.6976 | 0.3574 | 1.6976 | 1.3029 |
| 0.308 | 9.5833 | 920 | 1.6955 | 0.3574 | 1.6955 | 1.3021 |
| 0.308 | 9.6042 | 922 | 1.6867 | 0.3574 | 1.6867 | 1.2987 |
| 0.308 | 9.625 | 924 | 1.6765 | 0.3574 | 1.6765 | 1.2948 |
| 0.308 | 9.6458 | 926 | 1.6655 | 0.3574 | 1.6655 | 1.2905 |
| 0.308 | 9.6667 | 928 | 1.6632 | 0.3574 | 1.6632 | 1.2897 |
| 0.308 | 9.6875 | 930 | 1.6644 | 0.3574 | 1.6644 | 1.2901 |
| 0.308 | 9.7083 | 932 | 1.6668 | 0.3574 | 1.6668 | 1.2910 |
| 0.308 | 9.7292 | 934 | 1.6712 | 0.3574 | 1.6712 | 1.2927 |
| 0.308 | 9.75 | 936 | 1.6735 | 0.3574 | 1.6735 | 1.2936 |
| 0.308 | 9.7708 | 938 | 1.6765 | 0.3574 | 1.6765 | 1.2948 |
| 0.308 | 9.7917 | 940 | 1.6777 | 0.3574 | 1.6777 | 1.2953 |
| 0.308 | 9.8125 | 942 | 1.6810 | 0.3574 | 1.6810 | 1.2965 |
| 0.308 | 9.8333 | 944 | 1.6847 | 0.3574 | 1.6847 | 1.2980 |
| 0.308 | 9.8542 | 946 | 1.6867 | 0.3574 | 1.6867 | 1.2987 |
| 0.308 | 9.875 | 948 | 1.6877 | 0.3574 | 1.6877 | 1.2991 |
| 0.308 | 9.8958 | 950 | 1.6864 | 0.3574 | 1.6864 | 1.2986 |
| 0.308 | 9.9167 | 952 | 1.6859 | 0.3574 | 1.6859 | 1.2984 |
| 0.308 | 9.9375 | 954 | 1.6867 | 0.3574 | 1.6867 | 1.2987 |
| 0.308 | 9.9583 | 956 | 1.6872 | 0.3574 | 1.6872 | 1.2989 |
| 0.308 | 9.9792 | 958 | 1.6865 | 0.3574 | 1.6865 | 1.2986 |
| 0.308 | 10.0 | 960 | 1.6862 | 0.3574 | 1.6862 | 1.2985 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mrm8488/Llama-3-8B-ft-orpo-40k-mix | mrm8488 | "2024-05-15T15:57:47" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-15T15:52:59" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
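Since the card above is auto-generated, the following is only a hedged sketch based on the repository's `transformers`/text-generation tags, not an author-provided snippet:

```python
# A minimal sketch, assuming a standard transformers causal LM (per the repo tags); not author-provided.
# device_map="auto" assumes the accelerate package is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/Llama-3-8B-ft-orpo-40k-mix"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a haiku about spring rain.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```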
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF | mradermacher | "2024-06-26T20:28:49" | 309 | 0 | transformers | [
"transformers",
"gguf",
"en",
"fr",
"base_model:Enno-Ai/EnnoAi-Pro-Llama-3-8B-v0.1",
"base_model:quantized:Enno-Ai/EnnoAi-Pro-Llama-3-8B-v0.1",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-23T19:06:59" | ---
base_model: Enno-Ai/EnnoAi-Pro-Llama-3-8B-v0.1
language:
- en
- fr
library_name: transformers
license: bigscience-bloom-rail-1.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Enno-Ai/EnnoAi-Pro-Llama-3-8B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
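As a concrete starting point (a minimal sketch, not from the original card), one of the quants listed below can be loaded with the llama-cpp-python bindings; the Q4_K_M filename matches the table in the next section:

```python
# A hedged sketch: pulling the Q4_K_M quant from the Hub and running it with llama-cpp-python.
# Llama.from_pretrained is assumed to be available (recent llama-cpp-python releases).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF",
    filename="EnnoAi-Pro-Llama-3-8B-v0.1.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Explain in one sentence what a GGUF file is.", max_tokens=64)
print(out["choices"][0]["text"])
```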
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/EnnoAi-Pro-Llama-3-8B-v0.1-GGUF/resolve/main/EnnoAi-Pro-Llama-3-8B-v0.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
decentfuture/btcli-finetune | decentfuture | "2024-03-19T03:49:55" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-16T05:42:43" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Skanderbeg/Reinforce-PolicyGradientCartPole | Skanderbeg | "2023-03-08T03:56:01" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-08T03:55:48" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PolicyGradientCartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jeiku/Cookie_7B | jeiku | "2024-02-17T01:23:43" | 55 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:jeiku/Rainbow_69_7B",
"base_model:merge:jeiku/Rainbow_69_7B",
"base_model:jeiku/SpaghettiOs_7B",
"base_model:merge:jeiku/SpaghettiOs_7B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-15T20:27:16" | ---
base_model:
- jeiku/SpaghettiOs_7B
- jeiku/Rainbow_69_7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# Cookie
A reasonably logical model with a few datasets thrown in to increase RP abilities. This is a good candidate for a balanced 7B model that can provide assistant functionality alongside roleplaying or romantic endeavors.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [jeiku/SpaghettiOs_7B](https://huggingface.co/jeiku/SpaghettiOs_7B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/Rainbow_69_7B](https://huggingface.co/jeiku/Rainbow_69_7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: jeiku/SpaghettiOs_7B
parameters:
normalize: true
models:
- model: jeiku/SpaghettiOs_7B
parameters:
weight: 1
- model: jeiku/Rainbow_69_7B
parameters:
weight: 1
dtype: float16
``` |
StoriesLM/StoriesLM-v1-1926 | StoriesLM | "2024-03-09T23:09:26" | 101 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:dell-research-harvard/AmericanStories",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-03-09T22:31:06" | ---
license: mit
datasets:
- dell-research-harvard/AmericanStories
language:
- en
---
# StoriesLM: A Family of Language Models With Sequentially-Expanding Pretraining Windows
## Model Family
StoriesLM is a family of language models with sequentially-expanding pretraining windows. The pretraining data for the model family comes from the American Stories dataset—a collection of language from historical American news articles. The first language model in the StoriesLM family is pretrained on language data from 1900. Each subsequent language model further trains the previous year’s model checkpoint using data from the following year, up until 1963.
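As a quick illustration (a minimal sketch, not from the original card), any checkpoint in the family can be used for masked-token prediction with the standard fill-mask pipeline:

```python
# A hedged sketch: masked-token prediction with the 1926 checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="StoriesLM/StoriesLM-v1-1926")
for prediction in fill("The price of [MASK] rose sharply this week."):
    print(prediction["token_str"], round(prediction["score"], 3))
```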
## Dataset
The StoriesLM family is pretrained on the American Stories dataset. If you use a model from this family, please also cite the original dataset's authors:
```
@article{dell2024american,
title={American stories: A large-scale structured text dataset of historical us newspapers},
author={Dell, Melissa and Carlson, Jacob and Bryan, Tom and Silcock, Emily and Arora, Abhishek and Shen, Zejiang and D'Amico-Wong, Luca and Le, Quan and Querubin, Pablo and Heldring, Leander},
journal={Advances in Neural Information Processing Systems},
volume={36},
year={2024}
}
```
|
gridoneai/Llama-3-8B-Jungso-Instruct-LoRA-6k | gridoneai | "2024-06-05T01:05:23" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-04T04:35:07" | ---
license: cc-by-nc-sa-4.0
---
|
RichardErkhov/radlab_-_polish-qa-v2-8bits | RichardErkhov | "2024-05-09T11:44:25" | 50 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-09T11:43:22" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
polish-qa-v2 - bnb 8bits
- Model creator: https://huggingface.co/radlab/
- Original model: https://huggingface.co/radlab/polish-qa-v2/
Original model description:
---
license: cc-by-4.0
datasets:
- clarin-pl/poquad
language:
- pl
library_name: transformers
---
# Model Card
Extractive question-answering model for Polish. Extractive means that the most relevant
chunk of the text is returned as the answer from the context for the given question.
## Model Details
- **Model name:** `radlab/polish-qa-v2`
- **Developed by:** [radlab.dev](https://radlab.dev)
- **Shared by:** [radlab.dev](https://radlab.dev)
- **Model type:** QA
- **Language(s) (NLP):** PL
- **License:** CC-BY-4.0
- **Finetuned from model:** [sdadas/polish-roberta-large-v2](https://huggingface.co/sdadas/polish-roberta-large-v2)
- **Maximum context size:** 512 tokens
## Model Usage
Simple model usage with huggingface library:
```python
from transformers import pipeline
model_path = "radlab/polish-qa-v2"
question_answerer = pipeline(
"question-answering",
model=model_path
)
question = "Jakie silniki posiadał okręt?"
context = """Okręt był napędzany przez trzy trzycylindrowe maszyny parowe potrójnego rozprężania, które
napędzały poprzez wały napędowe trzy śruby napędowe (dwie trójskrzydłowe
zewnętrzne o średnicy 4,5 metra i czteroskrzydłową o średnicy 4,2 metra).
Para była dostarczana przez cztery kotły wodnorurkowe typu Marine,
wyposażone łącznie w osiem palenisk i osiem kotłów cylindrycznych,
które miały łącznie 32 paleniska. Ciśnienie robocze kotłów wynosiło 12 at,
a ich łączna powierzchnia grzewcza 3560 m². Wszystkie kotły były opalane węglem,
którego normalny zapas wynosił 650, a maksymalny 1070 ton.
Nominalna moc siłowni wynosiła 13 000 KM (maksymalnie 13 922 KM przy 108 obr./min),
co pozwalało na osiągnięcie prędkości maksymalnej od 17,5 do 17,6 węzła.
Zasięg wynosił 3420 mil morskich przy prędkości 10 węzłów. Zużycie węgla przy mocy 10 000 KM
wynosiło około 11 ton na godzinę, a przy mocy maksymalnej 16 ton na godzinę.
"""
print(
question_answerer(
question=question,
context=context.replace("\n", " ")
)
)
```
with the sample output:
```json
{
'score': 0.612459123134613,
'start': 25,
'end': 84,
'answer': ' trzy trzycylindrowe maszyny parowe potrójnego rozprężania,'
}
```
Link to the article on our [blog](https://radlab.dev/2024/04/15/ekstrakcyjne-qa-nasz-model-polish-qa-v2/) in Polish.
|
Jozaita/fine_tune_test_2 | Jozaita | "2024-04-24T07:46:04" | 109 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-23T15:38:58" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
crumb/qrstudy-gpt2-16-32 | crumb | "2023-11-17T22:50:25" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-11-17T22:50:19" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
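For illustration, the settings above map roughly onto the following transformers/bitsandbytes configuration and PEFT adapter loading (a minimal sketch, not part of the original card; the `gpt2` base model comes from the card's metadata):

```python
# A hedged sketch: reproducing the listed 4-bit quantization settings and loading the adapter.
# Requires bitsandbytes, accelerate, and a CUDA device; illustrative only.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
base = AutoModelForCausalLM.from_pretrained("gpt2", quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "crumb/qrstudy-gpt2-16-32")
```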
### Framework versions
- PEFT 0.6.0.dev0
|
QuantFactory/G2-9B-Aletheia-v1-GGUF | QuantFactory | "2024-11-04T06:08:31" | 114 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"base_model:merge:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"base_model:allura-org/G2-9B-Sugarquill-v0",
"base_model:merge:allura-org/G2-9B-Sugarquill-v0",
"base_model:crestf411/gemma2-9B-sunfall-v0.5.2",
"base_model:merge:crestf411/gemma2-9B-sunfall-v0.5.2",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-04T05:06:25" |
---
base_model:
- UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
- crestf411/gemma2-9B-sunfall-v0.5.2
- allura-org/G2-9B-Sugarquill-v0
library_name: transformers
tags:
- mergekit
- merge
license: gemma
---
[](https://hf.co/QuantFactory)
# QuantFactory/G2-9B-Aletheia-v1-GGUF
This is quantized version of [allura-org/G2-9B-Aletheia-v1](https://huggingface.co/allura-org/G2-9B-Aletheia-v1) created using llama.cpp
# Original Model Card
<img src="inpaint.png">
<sub>Image by CalamitousFelicitouness</sub>
---
# Gemma-2-9B Aletheia v1
A merge of Sugarquill and Sunfall. I wanted to combine Sugarquill's more novel-like writing style with something that would improve its RP performance and make it more steerable, without adding superfluous synthetic writing patterns.
I quite like Crestfall's Sunfall models, and I felt the Gemma version of Sunfall would steer the model in this direction when merged in. To keep more of Gemma-2-9B-it-SPPO-iter3's smarts, I decided to apply the Sunfall LoRA on top of it instead of using the published Sunfall model.
I'm generally pleased with the result: this model has a nice, fresh writing style, good character card adherence, and good system prompt following.
It should still work well for raw completion storywriting, as that is a trained feature of both merged models.
---
Made by Auri.
Thanks to Prodeus, Inflatebot and ShotMisser for testing and giving feedback.
### Format
The model responds to Gemma instruct formatting, exactly like its base model.
```
<bos><start_of_turn>user
{user message}<end_of_turn>
<start_of_turn>model
{response}<end_of_turn><eos>
```
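For reference (a minimal sketch, not from the original card), the same turn structure can be produced with a Gemma-2 tokenizer's chat template, e.g. the base model's; llama.cpp-based frontends typically apply an equivalent Gemma template to these GGUF files automatically.

```python
# A hedged sketch: producing the Gemma turn format shown above via apply_chat_template.
# Assumes access to the (gated) Gemma-2-based tokenizer of the card's base model.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Describe a rainy harbor town in two sentences."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # <bos><start_of_turn>user\n...<end_of_turn>\n<start_of_turn>model\n
```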
### Mergekit config
The following YAML configuration was used to produce this model:
```yaml
models:
- model: allura-org/G2-9B-Sugarquill-v0
parameters:
weight: 0.55
density: 0.4
- model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3+AuriAetherwiing/sunfall-g2-lora
parameters:
weight: 0.45
density: 0.3
merge_method: ties
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
parameters:
normalize: true
dtype: bfloat16
```
|
umerah/Task3 | umerah | "2023-08-30T12:14:26" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-30T09:05:51" | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: Task3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Task3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
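For reference, the listed settings correspond roughly to the following `TrainingArguments` (a minimal sketch; the actual training script is not part of this card, and `output_dir` is an assumption):

```python
# A hedged sketch mapping the listed hyperparameters onto transformers' TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Task3",              # assumption; not specified in the card
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```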
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.1753 | 1.0 | 1741 | 3.0775 |
| 3.0194 | 2.0 | 3482 | 3.0338 |
| 2.9194 | 3.0 | 5223 | 3.0157 |
| 2.8401 | 4.0 | 6964 | 3.0063 |
| 2.7765 | 5.0 | 8705 | 3.0064 |
| 2.7266 | 6.0 | 10446 | 3.0093 |
| 2.6817 | 7.0 | 12187 | 3.0105 |
| 2.6458 | 8.0 | 13928 | 3.0156 |
| 2.6195 | 9.0 | 15669 | 3.0205 |
| 2.5997 | 10.0 | 17410 | 3.0246 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ard2020/l3_it_10k_minus_hien | ard2020 | "2024-05-31T19:16:27" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | "2024-05-31T19:16:05" | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: l3_it_10k_minus_hien
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# l3_it_10k_minus_hien
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 3407
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.01
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7703 | 0.2010 | 24 | 0.7228 |
| 0.6912 | 0.4021 | 48 | 0.6820 |
| 0.6512 | 0.6031 | 72 | 0.6617 |
| 0.6573 | 0.8042 | 96 | 0.6522 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
mosaicml/mosaic-bert-base-seqlen-1024 | mosaicml | "2024-03-05T20:30:49" | 28 | 15 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"en",
"dataset:c4",
"arxiv:2108.12409",
"arxiv:2205.14135",
"arxiv:2002.05202",
"arxiv:2208.08124",
"arxiv:1612.08083",
"arxiv:2102.11972",
"arxiv:1907.11692",
"arxiv:2202.08005",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | "2023-04-28T21:11:45" | ---
license: apache-2.0
datasets:
- c4
language:
- en
inference: false
---
# MosaicBERT: mosaic-bert-base-seqlen-1024 Pretrained Model
MosaicBERT-Base is a new BERT architecture and training recipe optimized for fast pretraining.
MosaicBERT trains faster and achieves higher pretraining and finetuning accuracy when benchmarked against
Hugging Face's [bert-base-uncased](https://huggingface.co/bert-base-uncased). It incorporates efficiency insights
from the past half a decade of transformers research, from RoBERTa to T5 and GPT.
__This model was trained with [ALiBi](https://arxiv.org/abs/2108.12409) on a sequence length of 1024 tokens.__
ALiBi allows a model trained with a sequence length n to easily extrapolate to sequence lengths >2n during finetuning. For more details, see [Train Short, Test Long: Attention with Linear
Biases Enables Input Length Extrapolation (Press et al. 2022)](https://arxiv.org/abs/2108.12409)
It is part of the **family of MosaicBERT-Base models** trained using ALiBi on different sequence lengths:
* [mosaic-bert-base](https://huggingface.co/mosaicml/mosaic-bert-base) (trained on a sequence length of 128 tokens)
* [mosaic-bert-base-seqlen-256](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-256)
* [mosaic-bert-base-seqlen-512](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-512)
* mosaic-bert-base-seqlen-1024
* [mosaic-bert-base-seqlen-2048](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-2048)
The primary use case of these models is for research on efficient pretraining and finetuning for long context embeddings.
## Model Date
April 2023
## Documentation
* [Project Page (mosaicbert.github.io)](https://mosaicbert.github.io)
* [Github (mosaicml/examples/tree/main/examples/benchmarks/bert)](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert)
* [Paper (NeurIPS 2023)](https://openreview.net/forum?id=5zipcfLC2Z)
* Colab Tutorials:
* [MosaicBERT Tutorial Part 1: Load Pretrained Weights and Experiment with Sequence Length Extrapolation Using ALiBi](https://colab.research.google.com/drive/1r0A3QEbu4Nzs2Jl6LaiNoW5EumIVqrGc?usp=sharing)
* [Blog Post (March 2023)](https://www.mosaicml.com/blog/mosaicbert)
## How to use
```python
import torch
import transformers
from transformers import AutoModelForMaskedLM, BertTokenizer, pipeline
from transformers import BertTokenizer, BertConfig
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # MosaicBERT uses the standard BERT tokenizer
config = transformers.BertConfig.from_pretrained('mosaicml/mosaic-bert-base-seqlen-1024') # the config needs to be passed in
mosaicbert = AutoModelForMaskedLM.from_pretrained('mosaicml/mosaic-bert-base-seqlen-1024',config=config,trust_remote_code=True)
# To use this model directly for masked language modeling
mosaicbert_classifier = pipeline('fill-mask', model=mosaicbert, tokenizer=tokenizer,device="cpu")
mosaicbert_classifier("I [MASK] to the store yesterday.")
```
Note that the tokenizer for this model is simply the Hugging Face `bert-base-uncased` tokenizer.
In order to take advantage of ALiBi by extrapolating to longer sequence lengths, simply change the `alibi_starting_size` flag in the
config file and reload the model.
```python
config = transformers.BertConfig.from_pretrained('mosaicml/mosaic-bert-base-seqlen-1024')
config.alibi_starting_size = 2048 # maximum sequence length updated to 2048 from config default of 1024
mosaicbert = AutoModelForMaskedLM.from_pretrained('mosaicml/mosaic-bert-base-seqlen-1024',config=config,trust_remote_code=True)
```
This simply presets the non-learned linear bias matrix in every attention block to 2048 tokens (note that this particular model was trained with a sequence length of 1024 tokens).
**To continue MLM pretraining**, follow the [MLM pre-training section of the mosaicml/examples/bert repo](https://github.com/mosaicml/examples/tree/main/examples/bert#mlm-pre-training).
**To fine-tune this model for classification**, follow the [Single-task fine-tuning section of the mosaicml/examples/bert repo](https://github.com/mosaicml/examples/tree/main/examples/bert#single-task-fine-tuning).
### [Update 1/2/2024] Triton Flash Attention with ALiBi
Note that by default, triton Flash Attention is **not** enabled or required. In order to enable our custom implementation of triton Flash Attention with ALiBi from March 2023,
set `attention_probs_dropout_prob: 0.0`. We are currently working on supporting Flash Attention 2 (see [PR here](https://github.com/mosaicml/examples/pull/440)).
### Remote Code
This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we train using [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), which is not part of the `transformers` library and depends on [Triton](https://github.com/openai/triton) and some custom PyTorch code. Since this involves executing arbitrary code, you should consider passing a git `revision` argument that specifies the exact commit of the code, for example:
```python
mlm = AutoModelForMaskedLM.from_pretrained(
'mosaicml/mosaic-bert-base-seqlen-1024',
trust_remote_code=True,
revision='24512df',
)
```
However, if there are updates to this model or code and you specify a revision, you will need to manually check for them and update the commit hash accordingly.
## MosaicBERT Model description
In order to build MosaicBERT, we adopted architectural choices from the recent transformer literature.
These include [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi (Press et al. 2021)](https://arxiv.org/abs/2108.12409),
and [Gated Linear Units (Shazeer 2020)](https://arxiv.org/abs/2002.05202). In addition, we remove padding inside the transformer block,
and apply LayerNorm with low precision.
### Modifications to the Attention Mechanism
1. **FlashAttention**: Attention layers are core components of the transformer architecture. The recently proposed FlashAttention layer
reduces the number of read/write operations between the GPU HBM (high bandwidth memory, i.e. long-term memory) and the GPU SRAM
(i.e. short-term memory) [[Dao et al. 2022]](https://arxiv.org/pdf/2205.14135.pdf). We used the FlashAttention module built by
[hazy research](https://github.com/HazyResearch/flash-attention) with [OpenAI’s triton library](https://github.com/openai/triton).
2. **Attention with Linear Biases (ALiBi)**: In most BERT models, the positions of tokens in a sequence are encoded with a position embedding layer;
this embedding allows subsequent layers to keep track of the order of tokens in a sequence. ALiBi eliminates position embeddings and
instead conveys this information using a bias matrix in the attention operation. It modifies the attention mechanism such that nearby
tokens strongly attend to one another [[Press et al. 2021]](https://arxiv.org/abs/2108.12409). In addition to improving the performance of the final model, ALiBi helps the
model to handle sequences longer than it saw during training. Details on our ALiBi implementation can be found [in the mosaicml/examples repo here](https://github.com/mosaicml/examples/blob/d14a7c94a0f805f56a7c865802082bf6d8ac8903/examples/bert/src/bert_layers.py#L425).
3. **Unpadding**: Standard NLP practice is to combine text sequences of different lengths into a batch, and pad the sequences with empty
tokens so that all sequence lengths are the same. During training, however, this can lead to many superfluous operations on those
padding tokens. In MosaicBERT, we take a different approach: we concatenate all the examples in a minibatch into a single sequence
of batch size 1. Results from NVIDIA and others have shown that this approach leads to speed improvements during training, since
operations are not performed on padding tokens (see for example [Zeng et al. 2022](https://arxiv.org/pdf/2208.08124.pdf)).
Details on our “unpadding” implementation can be found [in the mosaicml/examples repo here](https://github.com/mosaicml/examples/blob/main/examples/bert/src/bert_padding.py).
4. **Low Precision LayerNorm**: this small tweak forces LayerNorm modules to run in float16 or bfloat16 precision instead of float32, improving utilization.
Our implementation can be found [in the mosaicml/examples repo here](https://docs.mosaicml.com/en/v0.12.1/method_cards/low_precision_layernorm.html).
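To make the ALiBi bias from point 2 above concrete, here is a hedged, illustrative sketch (not the repo's actual implementation) of building a symmetric, per-head distance penalty for bidirectional attention:

```python
# A hedged sketch of a bidirectional ALiBi bias: a per-head linear penalty on token distance,
# added to the raw attention scores before the softmax. Illustrative only.
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Geometric per-head slopes (the ALiBi paper's recipe for power-of-two head counts;
    # non-power-of-two counts use an extra interpolation step, omitted here).
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).abs()         # (seq_len, seq_len)
    return -slopes[:, None, None] * distance[None, :, :]   # (num_heads, seq_len, seq_len)

bias = alibi_bias(num_heads=8, seq_len=1024)
# attention_scores = q @ k.transpose(-1, -2) / head_dim ** 0.5 + bias  # then softmax as usual
```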
### Modifications to the Feedforward Layers
5. **Gated Linear Units (GLU)**: We used Gated Linear Units for the feedforward sublayer of a transformer. GLUs were first proposed in 2016 [[Dauphin et al. 2016]](https://arxiv.org/abs/1612.08083),
and incorporate an extra learnable matrix that “gates” the outputs of the feedforward layer. More recent work has shown that
GLUs can improve performance quality in transformers [[Shazeer, 2020](https://arxiv.org/abs/2002.05202), [Narang et al. 2021](https://arxiv.org/pdf/2102.11972.pdf)]. We used the GeLU (Gaussian-error Linear Unit)
activation function with GLU, which is sometimes referred to as GeGLU. The GeLU activation function is a smooth, fully differentiable
approximation to ReLU; we found that this led to a nominal improvement over ReLU. More details on our implementation of GLU can be found here.
The extra gating matrix in a GLU model potentially adds additional parameters to a model; we chose to augment our BERT-Base model with
additional parameters due to GLU modules as it leads to a Pareto improvement across all timescales (which is not true of all larger
models such as BERT-Large). While BERT-Base has 110 million parameters, MosaicBERT-Base has 137 million parameters. Note that
MosaicBERT-Base trains faster than BERT-Base despite having more parameters.
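As a quick illustration of the GLU idea (a hedged sketch, not MosaicBERT's actual module), a GeGLU feedforward block multiplies a GELU-activated gate with a second linear projection:

```python
# A hedged sketch of a GeGLU feedforward block; illustrative only.
import torch
import torch.nn as nn

class GeGLUFeedForward(nn.Module):
    def __init__(self, d_model: int = 768, d_hidden: int = 3072):
        super().__init__()
        self.wi = nn.Linear(d_model, d_hidden)   # "values" branch
        self.wg = nn.Linear(d_model, d_hidden)   # gating branch (the extra matrix GLU adds)
        self.wo = nn.Linear(d_hidden, d_model)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # GeGLU: elementwise product of a GELU-activated gate with a linear projection.
        return self.wo(self.act(self.wg(x)) * self.wi(x))

ff = GeGLUFeedForward()
print(ff(torch.randn(2, 16, 768)).shape)  # torch.Size([2, 16, 768])
```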
## Training data
MosaicBERT is pretrained using a standard Masked Language Modeling (MLM) objective: the model is given a sequence of
text with some tokens hidden, and it has to predict these masked tokens. MosaicBERT is trained on
the English [“Colossal, Cleaned, Common Crawl” C4 dataset](https://github.com/allenai/allennlp/discussions/5056), which contains roughly 365 million curated text documents scraped
from the internet (equivalent to 156 billion tokens). We used this more modern dataset in place of traditional BERT pretraining
corpora like English Wikipedia and BooksCorpus.
## Pretraining Optimizations
Many of these pretraining optimizations below were informed by our [BERT results for the MLPerf v2.1 speed benchmark](https://www.mosaicml.com/blog/mlperf-nlp-nov2022).
1. **MosaicML Streaming Dataset**: As part of our efficiency pipeline, we converted the C4 dataset to [MosaicML’s StreamingDataset format](https://www.mosaicml.com/blog/mosaicml-streamingdataset) and used this
for both MosaicBERT-Base and the baseline BERT-Base. For all BERT-Base models, we chose the training duration to be 286,720,000 samples of **sequence length 1024**; this covers 78.6% of C4.
2. **Higher Masking Ratio for the Masked Language Modeling Objective**: We used the standard Masked Language Modeling (MLM) pretraining objective.
While the original BERT paper also included a Next Sentence Prediction (NSP) task in the pretraining objective,
subsequent papers have shown this to be unnecessary [Liu et al. 2019](https://arxiv.org/abs/1907.11692).
However, we found that a 30% masking ratio led to slight accuracy improvements in both pretraining MLM and downstream GLUE performance.
We therefore included this simple change as part of our MosaicBERT training recipe. Recent studies have also found that this simple
change can lead to downstream improvements [Wettig et al. 2022](https://arxiv.org/abs/2202.08005).
3. **Bfloat16 Precision**: We use [bf16 (bfloat16) mixed precision training](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus) for all the models, where a matrix multiplication layer uses bf16
for the multiplication and 32-bit IEEE floating point for gradient accumulation. We found this to be more stable than using float16 mixed precision.
4. **Vocab Size as a Multiple of 64**: We increased the vocab size to be a multiple of 8 as well as 64 (i.e. from 30,522 to 30,528).
This small constraint is something of [a magic trick among ML practitioners](https://twitter.com/karpathy/status/1621578354024677377), and leads to a throughput speedup.
5. **Hyperparameters**: For all models, we use Decoupled AdamW with Beta_1=0.9 and Beta_2=0.98, and a weight decay value of 1.0e-5.
The learning rate schedule begins with a warmup to a maximum learning rate of 5.0e-4 followed by a linear decay to zero.
Warmup lasted for 6% of the full training duration. Global batch size was set to 4096, and microbatch size was **64**; since global batch size was 4096, full pretraining consisted of 70,000 batches.
We set the **maximum sequence length during pretraining to 1024**, and we used the standard embedding dimension of 768.
For MosaicBERT, we applied 0.1 dropout to the feedforward layers but no dropout to the FlashAttention module, as this was not possible with the OpenAI triton implementation.
Full configuration details for pretraining MosaicBERT-Base can be found in the configuration yamls [in the mosaicml/examples repo here](https://github.com/mosaicml/examples/tree/main/bert/yamls/main).
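As a small illustration of the higher masking ratio described in point 2 above (a hedged sketch using the standard Hugging Face collator, not the MosaicBERT training code):

```python
# A hedged sketch: a 30% MLM masking ratio with the standard transformers data collator.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.30)

examples = [tokenizer("Masked language modeling hides random tokens in a sentence.")]
batch = collator(examples)
print(batch["input_ids"])  # some positions replaced by [MASK]
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```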
## Intended uses & limitations
This model is intended to be finetuned on downstream tasks.
## Citation
Please cite this model using the following format:
```
@article{portes2023MosaicBERT,
title={MosaicBERT: A Bidirectional Encoder Optimized for Fast Pretraining},
author={Jacob Portes, Alexander R Trott, Sam Havens, Daniel King, Abhinav Venigalla,
Moin Nadeem, Nikhil Sardana, Daya Khudia, Jonathan Frankle},
journal={NeuRIPS https://openreview.net/pdf?id=5zipcfLC2Z},
year={2023},
}
``` |
OpenDILabCommunity/Walker2d-v3-TD3 | OpenDILabCommunity | "2023-09-22T20:37:18" | 0 | 0 | pytorch | [
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"Walker2d-v3",
"en",
"license:apache-2.0",
"region:us"
] | reinforcement-learning | "2023-04-21T14:45:41" | ---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- Walker2d-v3
benchmark_name: OpenAI/Gym/MuJoCo
task_name: Walker2d-v3
pipeline_tag: reinforcement-learning
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/MuJoCo-Walker2d-v3
type: OpenAI/Gym/MuJoCo-Walker2d-v3
metrics:
- type: mean_reward
value: 4331.88 +/- 12.08
name: mean_reward
---
# Play **Walker2d-v3** with **TD3** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **TD3** implementation for OpenAI/Gym/MuJoCo **Walker2d-v3** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision intelligence problems, built on reinforcement learning framework implementations in PyTorch and JAX. The library aims to standardize the reinforcement learning workflow across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. In addition, self-customized training pipelines and applications can be built by reusing the different abstraction levels of the DI-engine reinforcement learning framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
sudo apt update -y && sudo apt install -y build-essential libgl1-mesa-dev libgl1-mesa-glx libglew-dev libosmesa6-dev libglfw3 libglfw3-dev libsdl2-dev libsdl2-image-dev libglm-dev libfreetype6-dev patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install "cython<3"
pip3 install DI-engine[common_env]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import TD3Agent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = TD3Agent(env_id="Walker2d-v3", exp_name="Walker2d-v3-TD3", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import TD3Agent
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/Walker2d-v3-TD3")
# Instantiate the agent
agent = TD3Agent(env_id="Walker2d-v3", exp_name="Walker2d-v3-TD3", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import TD3Agent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = TD3Agent(env_id="Walker2d-v3", exp_name="Walker2d-v3-TD3")
# Train the agent
return_ = agent.train(step=int(5000000))
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/MuJoCo",
task_name="Walker2d-v3",
algo_name="TD3",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/td3.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html",
installation_guide='''
sudo apt update -y \
&& sudo apt install -y \
build-essential \
libgl1-mesa-dev \
libgl1-mesa-glx \
libglew-dev \
libosmesa6-dev \
libglfw3 \
libglfw3-dev \
libsdl2-dev \
libsdl2-image-dev \
libglm-dev \
libfreetype6-dev \
patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install "cython<3"
pip3 install DI-engine[common_env]
''',
usage_file_by_git_clone="./td3/walker2d_td3_deploy.py",
usage_file_by_huggingface_ding="./td3/walker2d_td3_download.py",
train_file="./td3/walker2d_td3.py",
repo_id="OpenDILabCommunity/Walker2d-v3-TD3",
create_repo=False
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'env': {
'manager': {
'episode_num': float("inf"),
'max_retry': 1,
'retry_type': 'reset',
'auto_reset': True,
'step_timeout': None,
'reset_timeout': None,
'retry_waiting_time': 0.1,
'cfg_type': 'BaseEnvManagerDict'
},
'stop_value': 6000,
'n_evaluator_episode': 8,
'env_id': 'Walker2d-v3',
'norm_obs': {
'use_norm': False
},
'norm_reward': {
'use_norm': False
},
'collector_env_num': 1,
'evaluator_env_num': 8,
'env_wrapper': 'mujoco_default'
},
'policy': {
'model': {
'twin_critic': True,
'obs_shape': 17,
'action_shape': 6,
'actor_head_hidden_size': 256,
'critic_head_hidden_size': 256,
'action_space': 'regression'
},
'learn': {
'learner': {
'train_iterations': 1000000000,
'dataloader': {
'num_workers': 0
},
'log_policy': True,
'hook': {
'load_ckpt_before_run': '',
'log_show_after_iter': 100,
'save_ckpt_after_iter': 10000,
'save_ckpt_after_run': True
},
'cfg_type': 'BaseLearnerDict'
},
'update_per_collect': 1,
'batch_size': 256,
'learning_rate_actor': 0.001,
'learning_rate_critic': 0.001,
'ignore_done': False,
'target_theta': 0.005,
'discount_factor': 0.99,
'actor_update_freq': 2,
'noise': True,
'noise_sigma': 0.2,
'noise_range': {
'min': -0.5,
'max': 0.5
}
},
'collect': {
'collector': {},
'unroll_len': 1,
'noise_sigma': 0.1,
'n_sample': 1
},
'eval': {
'evaluator': {
'eval_freq': 5000,
'render': {
'render_freq': -1,
'mode': 'train_iter'
},
'figure_path': None,
'cfg_type': 'InteractionSerialEvaluatorDict',
'stop_value': 6000,
'n_episode': 8
}
},
'other': {
'replay_buffer': {
'replay_buffer_size': 1000000
}
},
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'type': 'td3',
'priority': False,
'priority_IS_weight': False,
'random_collect_size': 25000,
'transition_with_policy_data': False,
'action_space': 'continuous',
'reward_batch_norm': False,
'multi_agent': False,
'cfg_type': 'TD3PolicyDict'
},
'exp_name': 'Walker2d-v3-TD3',
'seed': 0,
'wandb_logger': {
'gradient_logger': True,
'video_logger': True,
'plot_logger': True,
'action_logger': True,
'return_logger': False
}
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/Walker2d-v3-TD3)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/td3.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/Walker2d-v3-TD3/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/Walker2d-v3-TD3/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 1690.06 KB
- **Last Update Date:** 2023-09-22
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/MuJoCo
- **Task:** Walker2d-v3
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.9
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html)
|
XinXuNLPer/MuseCoco_text2attribute | XinXuNLPer | "2024-10-08T04:34:40" | 155 | 5 | transformers | [
"transformers",
"pytorch",
"bert",
"MuseCoco",
"Text2Music",
"en",
"arxiv:2306.00110",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2023-06-29T13:50:10" | ---
license: apache-2.0
language:
- en
tags:
- MuseCoco
- Text2Music
---
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://ai-muzic.github.io/images/musecoco/framework.png" alt="Text2Attribute" style="width: 100%; min-width: 100px; display: block; margin: auto;"></a>
</p>
# Text-to-Attribute Understanding

This is the text-to-attribute model to extract musical attributes from text, introduced in the paper [*MuseCoco: Generating Symbolic Music from Text*](https://arxiv.org/abs/2306.00110) and [first released in this repository](https://github.com/microsoft/muzic/tree/main/musecoco).
It is based on BERT-large and has multiple classification heads for diverse musical attributes:
```json
[
"Instrument",
"Rhythm Danceability",
"Rhythm Intensity",
"Artist",
"Genre",
"Bar",
"Time Signature",
"Key",
"Tempo",
"Pitch Range",
"Emotion",
"Time"
]
```
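To make the architecture concrete, here is a minimal, illustrative sketch of a shared BERT-large encoder with one classification head per attribute. The attribute subset and label counts below are made-up placeholders, not the label spaces of the released checkpoint; see the MuseCoco repository for the real definitions.
```python
import torch.nn as nn
from transformers import BertModel

# Illustrative subset of attributes with placeholder label counts.
ATTRIBUTE_CLASSES = {"Instrument": 28, "Genre": 22, "Emotion": 4, "Tempo": 3}

class Text2AttributeSketch(nn.Module):
    def __init__(self, encoder_name: str = "bert-large-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n) for name, n in ATTRIBUTE_CLASSES.items()}
        )

    def forward(self, input_ids, attention_mask=None):
        pooled = self.encoder(input_ids, attention_mask=attention_mask).pooler_output
        # One set of logits per musical attribute.
        return {name: head(pooled) for name, head in self.heads.items()}
```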
# BibTeX entry and citation info
```bibtex
@article{musecoco2023,
title={MuseCoco: Generating Symbolic Music from Text},
author={Peiling Lu and Xin Xu and Chenfei Kang and Botao Yu and Chengyi Xing and Xu Tan and Jiang Bian},
journal={arXiv preprint arXiv:2306.00110},
year={2023}
}
```
|
PrunaAI/0017-alt-llm-jp-3-ratio-60-bnb-8bit-smashed | PrunaAI | "2024-12-14T09:24:27" | 5 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:0017-alt/llm-jp-3-ratio-60",
"base_model:quantized:0017-alt/llm-jp-3-ratio-60",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-12-14T09:19:18" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: 0017-alt/llm-jp-3-ratio-60
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to check whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo 0017-alt/llm-jp-3-ratio-60 are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/0017-alt-llm-jp-3-ratio-60-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("0017-alt/llm-jp-3-ratio-60")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model 0017-alt/llm-jp-3-ratio-60, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
tomaszki/gemma-6 | tomaszki | "2024-02-28T10:40:27" | 114 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-28T10:37:00" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
robiulawaldev/8c1f328d-5df4-4fe0-aeeb-63c7e0ebb844 | robiulawaldev | "2025-02-05T17:37:42" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-05T17:32:50" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8c1f328d-5df4-4fe0-aeeb-63c7e0ebb844
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 8c1f328d-5df4-4fe0-aeeb-63c7e0ebb844
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AIDA-UPM/MARTINI_enrich_BERTopic_artwithaim | AIDA-UPM | "2025-01-13T17:54:09" | 5 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | "2025-01-13T17:54:07" |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# MARTINI_enrich_BERTopic_artwithaim
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_artwithaim")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 77
* Number of training documents: 11688
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | vaccinated - pfizer - freedom - everyone - live | 20 | -1_vaccinated_pfizer_freedom_everyone |
| 0 | ascension - souls - visionaries - empathy - airika | 7801 | 0_ascension_souls_visionaries_empathy |
| 1 | detox - glutathione - probiotics - turmeric - pesticides | 156 | 1_detox_glutathione_probiotics_turmeric |
| 2 | unvaxxed - lausd - mandated - exemptions - illinois | 118 | 2_unvaxxed_lausd_mandated_exemptions |
| 3 | mkultra - traffickers - satanic - rituals - kidman | 108 | 3_mkultra_traffickers_satanic_rituals |
| 4 | facemasks - asbestos - microplastic - unmasked - influenza | 107 | 4_facemasks_asbestos_microplastic_unmasked |
| 5 | shadowbanned - facebook - censorship - deplatformed - airika | 105 | 5_shadowbanned_facebook_censorship_deplatformed |
| 6 | radiofrequency - dangers - mhz - 5gee - altimeters | 102 | 6_radiofrequency_dangers_mhz_5gee |
| 7 | globalists - davos - soros - russia - глобальных | 94 | 7_globalists_davos_soros_russia |
| 8 | fires - пожар - blaze - destroyed - iowa | 94 | 8_fires_пожар_blaze_destroyed |
| 9 | epstein - billionaire - kushner - ghislaine - mossad | 93 | 9_epstein_billionaire_kushner_ghislaine |
| 10 | pfizer - vaers - deaths - doses - 2021 | 92 | 10_pfizer_vaers_deaths_doses |
| 11 | artwithaim8 - redbubble - illustrations - sweatshirts - embroidered | 90 | 11_artwithaim8_redbubble_illustrations_sweatshirts |
| 12 | russians - россии - decembrist - vladimir - хорошии | 90 | 12_russians_россии_decembrist_vladimir |
| 13 | died - spartak - athlete - myocarditis - collapses | 88 | 13_died_spartak_athlete_myocarditis |
| 14 | artwithaim8 - bitchute - youtube - episodes - banned | 85 | 14_artwithaim8_bitchute_youtube_episodes |
| 15 | australians - queensland - victoria - tortured - vaccinere | 79 | 15_australians_queensland_victoria_tortured |
| 16 | illegals - refugee - border - juarez - soros | 79 | 16_illegals_refugee_border_juarez |
| 17 | yeeeaaaa - kickin - bitches - patooty - karaoke | 75 | 17_yeeeaaaa_kickin_bitches_patooty |
| 18 | ai - humanoid - гуманоидов - kurzweil - promobot | 71 | 18_ai_humanoid_гуманоидов_kurzweil |
| 19 | farmers - fuels - decarbonization - fertiliser - nitrogen | 71 | 19_farmers_fuels_decarbonization_fertiliser |
| 20 | wikileaks - clinton - lewinsky - pedopodesta - pornhub | 66 | 20_wikileaks_clinton_lewinsky_pedopodesta |
| 21 | chemtrails - geoengineering - clouds - ionosphere - haarp | 66 | 21_chemtrails_geoengineering_clouds_ionosphere |
| 22 | musk - transhumanism - genius - optimus - edison | 61 | 22_musk_transhumanism_genius_optimus |
| 23 | nanobots - vaccinated - morgellons - vials - hydrogel | 61 | 23_nanobots_vaccinated_morgellons_vials |
| 24 | remdesivir - midazolam - intravenous - hospitalists - overdosing | 59 | 24_remdesivir_midazolam_intravenous_hospitalists |
| 25 | dioxins - ohio - hazardous - derailments - methanol | 59 | 25_dioxins_ohio_hazardous_derailments |
| 26 | gates - вакцины - glaxosmithkline - billionaire - unicef | 57 | 26_gates_вакцины_glaxosmithkline_billionaire |
| 27 | biden - pompeo - inauguration - greatawakeningofficial - beijing | 57 | 27_biden_pompeo_inauguration_greatawakeningofficial |
| 28 | globalists - pandemics - sovereignty - guterres - amendments | 53 | 28_globalists_pandemics_sovereignty_guterres |
| 29 | scum - damned - evil - subhuman - repugnant | 53 | 29_scum_damned_evil_subhuman |
| 30 | motherfauci - wuhan - virology - institute - 2020 | 51 | 30_motherfauci_wuhan_virology_institute |
| 31 | homosexuals - lgbtq - pedosexual - pervert - sickos | 48 | 31_homosexuals_lgbtq_pedosexual_pervert |
| 32 | pilots - airways - qantas - aussiefreedomflyers - heathrow | 45 | 32_pilots_airways_qantas_aussiefreedomflyers |
| 33 | hamas - gaza - palestinos - israelies - genocide | 45 | 33_hamas_gaza_palestinos_israelies |
| 34 | aadhaar - skynet - netherlands - worldcoin - idvt | 43 | 34_aadhaar_skynet_netherlands_worldcoin |
| 35 | sprouting - planted - tomatoes - gardener - okra | 43 | 35_sprouting_planted_tomatoes_gardener |
| 36 | france - demokratischerwiderstand - macron - zemmour - pogroms | 43 | 36_france_demokratischerwiderstand_macron_zemmour |
| 37 | donetsk - zaporizhzhya - gaza - cnn - denazification | 43 | 37_donetsk_zaporizhzhya_gaza_cnn |
| 38 | mediaukraine - yanukovich - severodonetsk - kadyrov - belarusian | 42 | 38_mediaukraine_yanukovich_severodonetsk_kadyrov |
| 39 | vaccine - cvinjuriesanddeaths - strokes - guillain - myocarditis | 41 | 39_vaccine_cvinjuriesanddeaths_strokes_guillain |
| 40 | trudeau - winnipeg - ctvnews - millennials - hacked | 41 | 40_trudeau_winnipeg_ctvnews_millennials |
| 41 | convoy - trudeau - protesters - barricades - freedomeuro | 40 | 41_convoy_trudeau_protesters_barricades |
| 42 | ukraine - biolaboratories - pentagon - ebola - counterproliferation | 39 | 42_ukraine_biolaboratories_pentagon_ebola |
| 43 | vaccines - injection - nucleocapsid - tetanus - clotting | 37 | 43_vaccines_injection_nucleocapsid_tetanus |
| 44 | vaccinated - quarantine - passport - thailand - moldova | 36 | 44_vaccinated_quarantine_passport_thailand |
| 45 | cashless - banknote - cryptocurrency - cbdc - yuan | 36 | 45_cashless_banknote_cryptocurrency_cbdc |
| 46 | protests - lockdown - theeuropenews - tyranny - parliament | 36 | 46_protests_lockdown_theeuropenews_tyranny |
| 47 | antifa - insurrection - capitol - storming - jan | 36 | 47_antifa_insurrection_capitol_storming |
| 48 | shanghai - quarantine - zhengzhou - китаи - elizabethcliu | 34 | 48_shanghai_quarantine_zhengzhou_китаи |
| 49 | virologists - corona - pasteur - fakeologist - hpv | 34 | 49_virologists_corona_pasteur_fakeologist |
| 50 | fdic - bankers - selloffs - monetary - recession | 33 | 50_fdic_bankers_selloffs_monetary |
| 51 | vaccinate - zoetis - cows - gmo - mrna | 32 | 51_vaccinate_zoetis_cows_gmo |
| 52 | cyborgs - materialists - implants - surveill - aspires | 32 | 52_cyborgs_materialists_implants_surveill |
| 53 | lahaina - hawaiians - wildfires - winfrey - underreported | 32 | 53_lahaina_hawaiians_wildfires_winfrey |
| 54 | трансгендерным - ihatehersheys - dysphoria - feminists - naked | 32 | 54_трансгендерным_ihatehersheys_dysphoria_feminists |
| 55 | traitors - murderers - arrests - punishable - guillotines | 30 | 55_traitors_murderers_arrests_punishable |
| 56 | vaccines - mifepristone - chloroquine - june - approvals | 28 | 56_vaccines_mifepristone_chloroquine_june |
| 57 | desantis - mandates - unconstitutional - carolina - maskersclubnyc | 28 | 57_desantis_mandates_unconstitutional_carolina |
| 58 | foraging - aquaponics - gardeners - orchard - herbs | 27 | 58_foraging_aquaponics_gardeners_orchard |
| 59 | impeaching - pelosi - trump - senators - voted | 26 | 59_impeaching_pelosi_trump_senators |
| 60 | solari - episodes - catherine - financial - rebellion | 26 | 60_solari_episodes_catherine_financial |
| 61 | monkeypox - smallpox - superspreader - mugabe - vigilantfox | 26 | 61_monkeypox_smallpox_superspreader_mugabe |
| 62 | ballots - defendingtherepublic - recount - georgia - diebold | 25 | 62_ballots_defendingtherepublic_recount_georgia |
| 63 | opossums - cutest - marmoset - creature - omigosh | 25 | 63_opossums_cutest_marmoset_creature |
| 64 | neighbourhoods - banbury - councillors - moskva - taxis | 24 | 64_neighbourhoods_banbury_councillors_moskva |
| 65 | permaculturists - boycotting - freedomcells - collaborate - panopticon | 23 | 65_permaculturists_boycotting_freedomcells_collaborate |
| 66 | zuckerberg - censorship - snapchat - trump - leaked | 23 | 66_zuckerberg_censorship_snapchat_trump |
| 67 | vax - injections - morons - anaphylactic - coerced | 23 | 67_vax_injections_morons_anaphylactic |
| 68 | pcr - ncov - false - tested - asymptomatic | 22 | 68_pcr_ncov_false_tested |
| 69 | artists - pornification - cultist - manson - enlightenment | 22 | 69_artists_pornification_cultist_manson |
| 70 | republicans - bannon - neoconservative - tucker - eugenicists | 22 | 70_republicans_bannon_neoconservative_tucker |
| 71 | khazarians - millionjews - mossad - antisemitic - goyim | 21 | 71_khazarians_millionjews_mossad_antisemitic |
| 72 | enlistment - sergeants - navy - vandenberg - reinstatement | 21 | 72_enlistment_sergeants_navy_vandenberg |
| 73 | trudeau - alberta - agentsoftruthchat - decriminalize - fentanyl | 21 | 73_trudeau_alberta_agentsoftruthchat_decriminalize |
| 74 | megaliths - archaeologists - mummies - baalbek - sumerian | 21 | 74_megaliths_archaeologists_mummies_baalbek |
| 75 | bolsonaro - antivaxx - janeiro - presidents - supreme | 20 | 75_bolsonaro_antivaxx_janeiro_presidents |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
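As a rough guide, the hyperparameters listed above map onto the BERTopic constructor as in the sketch below. The embedding model and the commented-out training call are assumptions and are not taken from this card.
```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

# Assumed embedding model; only the keyword arguments mirror the listed hyperparameters.
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")

topic_model = BERTopic(
    embedding_model=embedding_model,
    calculate_probabilities=True,
    language=None,  # ignored here because an embedding model is passed explicitly
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
    zeroshot_min_similarity=0.7,
    zeroshot_topic_list=None,
)
# topics, probs = topic_model.fit_transform(docs)  # `docs` would be a sufficiently large list of texts
```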
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.40
* UMAP: 0.5.7
* Pandas: 2.2.3
* Scikit-Learn: 1.5.2
* Sentence-transformers: 3.3.1
* Transformers: 4.46.3
* Numba: 0.60.0
* Plotly: 5.24.1
* Python: 3.10.12
|
YGHugging/ko-public-ad-topic-classifier2-runpod | YGHugging | "2024-07-22T03:51:56" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-21T09:27:53" | # Model Information
- Public service advertisement topic classification model (Research Team 3)
- Model fine-tuning options
1) load_in_4bit = False
2) max_steps = 100,
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sd-dreambooth-library/axolotee | sd-dreambooth-library | "2023-05-16T09:19:02" | 35 | 0 | diffusers | [
"diffusers",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-09-29T03:13:11" | ---
license: mit
---
### Axolotee on Stable Diffusion via Dreambooth
#### model by Angel20302
This your the Stable Diffusion model fine-tuned the Axolotee concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks Axolote**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
Here are the images used for training this concept:




|
Dizzykong/charles-dickens | Dizzykong | "2022-06-27T21:13:14" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-06-27T19:27:02" | ---
tags:
- generated_from_trainer
model-index:
- name: charles-dickens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# charles-dickens
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Dracones/c4ai-command-r-v01_exl2_8.0bpw-rpcal | Dracones | "2024-04-12T16:03:06" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"exl2",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | "2024-04-10T23:59:39" | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
tags:
- exl2
---
# c4ai-command-r-v01 - EXL2 8.0bpw
This is a 8.0bpw EXL2 quant of [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
### RP Calibrated
The rpcal quants were made using data/PIPPA-cleaned/pippa_raw_fix.parquet for calibration.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Stock Quants
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 6.4436 |
| 7.0 | 6.4372 |
| 6.0 | 6.4391 |
| 5.0 | 6.4526 |
| 4.5 | 6.4629 |
| 4.0 | 6.5081 |
| 3.5 | 6.6301 |
| 3.0 | 6.7974 |
### RP Calibrated Quants
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 6.4331 |
| 7.0 | 6.4347 |
| 6.0 | 6.4356 |
| 5.0 | 6.4740 |
| 4.5 | 6.4875 |
| 4.0 | 6.5039 |
| 3.5 | 6.6928 |
| 3.0 | 6.8913 |
## EQ Bench
Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better.
### Quants
| Quant Size | ChatML | Alpaca | Command-R | Command-R-Plus |
|------------|--------|--------|-----------|----------------|
| 8.0 | 47.28 | 56.67 | 58.46 | 58.49 |
| 7.0 | 46.86 | 57.5 | 57.29 | 57.91 |
| 6.0 | 48.61 | 56.5 | 57.8 | 58.64 |
| 5.0 | 48.48 | 54.64 | 57.14 | 56.63 |
| 4.5 | 48.1 | 57.75 | 57.08 | 56.7 |
| 4.0 | 50.99 | 53.41 | 57.46 | 57.99 |
| 3.5 | 52.72 | 56.68 | 60.91 | 60.91 |
| 3.0 | 39.19 | 36.45 | 49.17 | 49.68 |
### RP Calibrated Quants
| Quant Size | ChatML | Alpaca | Command-R | Command-R-Plus |
|------------|--------|--------|-----------|----------------|
| 8.0 | 48.42 | 56.23 | 58.41 | 58.41 |
| 7.0 | 48.47 | 57.01 | 57.85 | 57.67 |
| 6.0 | 50.93 | 58.33 | 60.32 | 59.83 |
| 5.0 | 50.29 | 55.28 | 58.96 | 59.23 |
| 4.5 | 46.63 | 55.01 | 57.7 | 59.24 |
| 4.0 | 47.13 | 49.76 | 54.76 | 55.5 |
| 3.5 | 52.98 | 56.39 | 59.19 | 58.32 |
| 3.0 | 47.94 | 50.36 | 54.89 | 53.61 |
### Command-R-Plus Template
This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS_TOKEN into the starter prompt.
_text-generation-webui/instruction-templates/Command-R-Plus.yaml_:
```yaml
instruction_template: |-
{%- if messages[0]['role'] == 'system' -%}
{%- set loop_messages = messages[1:] -%}
{%- set system_message = messages[0]['content'] -%}
{%- elif false == true -%}
{%- set loop_messages = messages -%}
{%- set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' -%}
{%- else -%}
{%- set loop_messages = messages -%}
{%- set system_message = false -%}
{%- endif -%}
{%- if system_message != false -%}
{{ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }}
{%- endif -%}
{%- for message in loop_messages -%}
{%- set content = message['content'] -%}
{%- if message['role'] == 'user' -%}
{{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}
{%- elif message['role'] == 'assistant' -%}
{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }}
{%- endif -%}
```
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="c4ai-command-r-v01"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.5 4.0 3.5 3.0)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw-rpcal"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="c4ai-command-r-v01"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# CALIBRATION_DATASET="data/PIPPA-cleaned/pippa_raw_fix.parquet"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
# python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE -c $CALIBRATION_DATASET
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(5.0)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.5 4.0 3.5 3.0)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
# python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -c $CALIBRATION_DATASET -cf $CONVERTED_FOLDER
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
|
Nexspear/ca8ff0cf-b854-4c60-81f6-ecd1b9bf8398 | Nexspear | "2025-02-04T19:22:14" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"base_model:adapter:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"license:llama3",
"region:us"
] | null | "2025-02-04T18:18:04" | ---
library_name: peft
license: llama3
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ca8ff0cf-b854-4c60-81f6-ecd1b9bf8398
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8b4276c388429034_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8b4276c388429034_train_data.json
type:
field_input: file_path
field_instruction: all_code
field_output: cropped_code
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: Nexspear/ca8ff0cf-b854-4c60-81f6-ecd1b9bf8398
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 8
mlflow_experiment_name: /tmp/8b4276c388429034_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 19273e8c-15a1-4af4-95dc-eced67bfe5c5
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 19273e8c-15a1-4af4-95dc-eced67bfe5c5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ca8ff0cf-b854-4c60-81f6-ecd1b9bf8398
This model is a fine-tuned version of [aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct](https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 348
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0374 | 0.0086 | 1 | 0.1523 |
| 0.0037 | 0.4320 | 50 | 0.0028 |
| 0.0002 | 0.8639 | 100 | 0.0009 |
| 0.0007 | 1.2959 | 150 | 0.0003 |
| 0.0002 | 1.7279 | 200 | 0.0001 |
| 0.0001 | 2.1598 | 250 | 0.0004 |
| 0.0001 | 2.5918 | 300 | 0.0002 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk1205/80a74289-7de0-40e1-a65a-96b0ffa7a849 | kostiantynk1205 | "2025-01-12T00:55:02" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-12T00:53:39" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 80a74289-7de0-40e1-a65a-96b0ffa7a849
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 40fc575c36b3bb10_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/40fc575c36b3bb10_train_data.json
type:
field_instruction: SOMMAIRE_SOURCE
field_output: SOMMAIRE_RAPPROCHEMENT
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/80a74289-7de0-40e1-a65a-96b0ffa7a849
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/40fc575c36b3bb10_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d1cf3c9-e1b5-48c4-9e98-64f2691b7ba0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3d1cf3c9-e1b5-48c4-9e98-64f2691b7ba0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 80a74289-7de0-40e1-a65a-96b0ffa7a849
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0009 | 1 | nan |
| 0.0 | 0.0026 | 3 | nan |
| 0.0 | 0.0053 | 6 | nan |
| 0.0 | 0.0079 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ucheokechukwu/a2c-PandaReachDense-v3 | ucheokechukwu | "2024-01-23T18:11:02" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-23T18:06:51" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the standard huggingface_sb3 naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The checkpoint filename inside the repo is an assumption.
checkpoint = load_from_hub("ucheokechukwu/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6 | anas-awadalla | "2022-02-25T03:10:43" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
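For reference, the listed hyperparameters roughly correspond to the following 🤗 Transformers `TrainingArguments` sketch; the output directory is a placeholder and the Adam settings shown above are the library defaults.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer configuration.
)
```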
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
LoneStriker/Sensualize-Mixtral-bf16-3.0bpw-h6-exl2 | LoneStriker | "2024-01-10T18:22:40" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"dataset:NobodyExistsOnTheInternet/full120k",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-10T18:15:23" | ---
license: cc-by-nc-4.0
datasets:
- NobodyExistsOnTheInternet/full120k
base_model: mistralai/Mixtral-8x7B-v0.1
---
Trained using a randomised subset of Full120k - 60K Samples [Roughly 50M Tokens] + More of my own NSFW Instruct & De-Alignment Data [Roughly 30M Tokens Total]
<br>Total Tokens used for Training: 80M over 1 epoch, over 2xA100s at batch size 5, grad 5 for 12 hours.
***
Experimental model, trained on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers.
***
Trained with Alpaca format.
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
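For example, the template above can be assembled programmatically (a small sketch; the instruction and context strings are placeholders):
```python
def build_alpaca_prompt(instruction: str, context: str) -> str:
    # Mirrors the Alpaca-style template shown above.
    return (
        f"### Instruction:\n{instruction}\n"
        f"### Input:\n{context}\n"
        "### Response:\n"
    )

# Placeholder usage:
print(build_alpaca_prompt("Continue the scene.", "The tavern falls silent."))
```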
***
Useful prompt guide: https://rentry.org/mixtralforretards
Useful stopping strings:
```
["\nInput:", "\n[", "\n(", "\n### Input:"]
```
*stops run-off generations after response, important for alpaca*
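With recent versions of `transformers` that support the `stop_strings` argument to `generate()`, these could be applied roughly as follows (a sketch; the repo id is a placeholder, and older versions would need a custom `StoppingCriteria` instead):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Sensualize-Mixtral"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nContinue the scene.\n### Input:\nThe tavern falls silent.\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    stop_strings=["\nInput:", "\n[", "\n(", "\n### Input:"],
    tokenizer=tokenizer,  # generate() needs the tokenizer when stop_strings is set
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```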
***
Roleplay-based model, specifically the ERP type.
I mean, it's pretty good sometimes? I had various test versions of Mistral 7B, L2 70B, L2 13B, and even Solar trained on the same dataset with various learning rates, and they did much better. MoE tuning is still kinda meh.
About GPT-isms: it's weird. With certain prompts they're never there, with some they are. Despite the prose of Full120k, I never encountered GPT-slop with Mistral-, Solar-, or L2-based trains, which was why I was initially confident this would be good.
Mixtral is really finicky. With the right settings this model can shine. I recommend Universal-Light or Universal-Creative in SillyTavern.
Anyways... Enjoy? |
ThomasSimonini/ML-Agents-SoccerTwos-Bad | ThomasSimonini | "2023-01-27T13:50:13" | 17 | 1 | ml-agents | [
"ml-agents",
"onnx",
"ML-Agents-SoccerTwos",
"reinforcement-learning",
"region:us"
] | reinforcement-learning | "2023-01-27T09:29:56" | ---
task: reinforcement-learning
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- reinforcement-learning
--- |
# Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details

## Uses

There are a number of potential uses for this dataset, including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
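For instance, the full set of cards can be pulled down with the `datasets` library (a sketch; the repo id below is a placeholder for this dataset's actual id on the Hub):
```python
from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual Hub id.
cards = load_dataset("<namespace>/<model-cards-dataset>", split="train")

print(cards.column_names)  # inspect which fields are available
print(cards[0])            # first model card record
```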
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure

This dataset has a single split.

## Dataset Creation

### Curation Rationale
The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.
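For example, a single model card can be fetched directly with the `huggingface_hub` client library (a minimal sketch):
```python
from huggingface_hub import ModelCard

# Fetch one model card (its README.md) straight from the Hub.
card = ModelCard.load("bert-base-uncased")
print(card.data)        # parsed YAML metadata
print(card.text[:500])  # markdown body of the card
```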
### Source Data

The source data is `README.md` files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.

#### Data Collection and Processing

The data is downloaded using a cron job on a daily basis.
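A daily job of this kind might look roughly like the following (a sketch of the idea only; the real collection pipeline is not published here):
```python
from huggingface_hub import HfApi, hf_hub_download

# Sketch: walk public model repos and grab their README.md files.
api = HfApi()
for model in api.list_models(limit=10):  # small limit for illustration
    try:
        path = hf_hub_download(repo_id=model.id, filename="README.md")
        print(model.id, "->", path)
    except Exception:
        continue  # not every model repo has a README.md
```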
#### Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository, although this information can be gathered from the Hugging Face Hub API.

### Annotations [optional]

There are no additional annotations in this dataset beyond the model card content.

#### Annotation process

N/A

#### Who are the annotators?

N/A

### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards. Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation

No formal citation is required for this dataset, but if you use this dataset in your work, please include a link to this dataset page.

## Dataset Card Authors

## Dataset Card Contact