Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'job_id'})
This happened while the json dataset builder was generating data using
hf://datasets/hallucinations-leaderboard/requests/EleutherAI/gpt-neo-1.3B_eval_request_False_False_False.json (at revision 237c75394f04421550a4109eb10d9b7c1ab6d6c3)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
model: string
base_model: string
revision: string
private: bool
status: string
job_id: string
weight_type: string
precision: string
model_type: string
submitted_time: timestamp[s]
license: string
likes: int64
params: double
to
{'model': Value(dtype='string', id=None), 'base_model': Value(dtype='string', id=None), 'revision': Value(dtype='string', id=None), 'private': Value(dtype='bool', id=None), 'precision': Value(dtype='string', id=None), 'weight_type': Value(dtype='string', id=None), 'status': Value(dtype='string', id=None), 'submitted_time': Value(dtype='timestamp[s]', id=None), 'model_type': Value(dtype='string', id=None), 'likes': Value(dtype='int64', id=None), 'params': Value(dtype='float64', id=None), 'license': Value(dtype='string', id=None)}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'job_id'})
This happened while the json dataset builder was generating data using
hf://datasets/hallucinations-leaderboard/requests/EleutherAI/gpt-neo-1.3B_eval_request_False_False_False.json (at revision 237c75394f04421550a4109eb10d9b7c1ab6d6c3)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
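To illustrate the first suggested fix (editing the data files so they all expose the same columns), the sketch below pads every request file with whatever keys it is missing, such as `job_id`, so the JSON builder can infer a single schema. It is a minimal sketch only: it assumes each request file holds a single JSON object, the glob pattern is inferred from the file name in the error message, the approach is not taken from the leaderboard's own tooling, and the rewritten files would still need to be committed back to the dataset repository. The alternative fix, declaring the mismatched files as separate configurations in the dataset card, is described in the manual-configuration docs linked above.

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Download a working copy of the dataset repository (hypothetical clean-up workflow).
local_dir = Path(
    snapshot_download(
        "hallucinations-leaderboard/requests",
        repo_type="dataset",
        local_dir="requests_repo",
    )
)
request_files = list(local_dir.rglob("*_eval_request_*.json"))  # pattern assumed from the error message

# Collect the union of keys across all request files, assuming one JSON object per file.
all_keys = set()
for path in request_files:
    all_keys |= set(json.loads(path.read_text()).keys())

# Fill missing keys (e.g. job_id) with null so every file ends up with the same columns.
for path in request_files:
    record = json.loads(path.read_text())
    for key in sorted(all_keys):
        record.setdefault(key, None)
    path.write_text(json.dumps(record, indent=2))
```

Normalising the files this way targets the root cause reported above: the builder infers its schema from the first files it reads and then fails when `EleutherAI/gpt-neo-1.3B_eval_request_False_False_False.json` introduces the extra `job_id` column.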
| model (string) | base_model (string) | revision (string) | private (bool) | precision (string) | weight_type (string) | status (string) | submitted_time (timestamp[us]) | model_type (string) | likes (int64) | params (float64) | license (string) | job_id (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 01-ai/Yi-1.5-9B-Chat | | main | false | bfloat16 | Original | FINISHED | 2024-05-27T17:06:08 | 💬 : chat models (RLHF, DPO, IFT, ...) | 80 | 8.829 | apache-2.0 | null |
| BarraHome/zephyr-dpo-v2 | unsloth/zephyr-sft-bnb-4bit | main | false | float16 | Original | FINISHED | 2024-02-05T03:18:41 | 🔶 : fine-tuned | 0 | 7.242 | mit | null |
| DeepMount00/Llama-3-8b-Ita | | main | false | bfloat16 | Original | FINISHED | 2024-05-17T15:15:42 | 🔶 : fine-tuned on domain-specific datasets | 17 | 8.03 | llama3 | null |
| EleutherAI/gpt-j-6b | | main | false | float32 | Original | FINISHED | 2023-12-03T18:26:55 | pretrained | 1,316 | 6 | apache-2.0 | null |
| EleutherAI/gpt-neo-1.3B | | main | false | float32 | Original | FINISHED | 2023-09-09T10:52:17 | pretrained | 206 | 1.366 | mit | 506054 |
| EleutherAI/gpt-neo-125m | | main | false | float32 | Original | FINISHED | 2023-09-09T10:52:17 | pretrained | 132 | 0.15 | mit | 504945 |
| EleutherAI/gpt-neo-2.7B | | main | false | float32 | Original | FINISHED | 2023-09-09T10:52:17 | pretrained | 361 | 2.718 | mit | 460931 |
| EleutherAI/llemma_7b | | main | false | float32 | Original | FINISHED | 2023-12-07T22:46:12 | pretrained | 54 | 7 | llama2 | null |
| HuggingFaceH4/mistral-7b-sft-beta | | main | false | float32 | Original | FINISHED | 2023-12-11T15:58:50 | fine-tuned | 15 | 7 | mit | null |
| HuggingFaceH4/zephyr-7b-alpha | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:00 | fine-tuned | 986 | 7.242 | mit | null |
| HuggingFaceH4/zephyr-7b-beta | | main | false | float32 | Original | FINISHED | 2023-12-03T18:26:39 | fine-tuned | 973 | 7.242 | mit | null |
| KnutJaegersberg/Qwen-14B-Llamafied | | main | false | bfloat16 | Original | FINISHED | 2024-01-30T15:47:05 | 🟢 : pretrained | 1 | 14 | other | null |
| KnutJaegersberg/internlm-20b-llama | | main | false | bfloat16 | Original | RUNNING | 2024-01-30T15:47:15 | 🟢 : pretrained | 0 | 20 | other | null |
| KoboldAI/GPT-J-6B-Janeway | | main | false | float32 | Original | FINISHED | 2023-12-07T22:46:31 | pretrained | 11 | 6 | mit | null |
| KoboldAI/LLAMA2-13B-Holodeck-1 | | main | false | float32 | Original | FINISHED | 2023-12-11T15:44:36 | pretrained | 20 | 13.016 | other | null |
| KoboldAI/OPT-13B-Erebus | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:26 | pretrained | 168 | 13 | other | null |
| KoboldAI/OPT-13B-Nerys-v2 | | main | false | float32 | Original | FINISHED | 2023-12-07T22:48:11 | pretrained | 9 | 13 | other | null |
| KoboldAI/OPT-2.7B-Erebus | | main | false | float32 | Original | FINISHED | 2023-12-11T15:28:21 | pretrained | 32 | 2.7 | other | null |
| KoboldAI/OPT-6.7B-Erebus | | main | false | float32 | Original | FINISHED | 2023-12-07T22:48:16 | pretrained | 88 | 6.7 | other | null |
| KoboldAI/OPT-6B-nerys-v2 | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:41 | pretrained | 21 | 6 | other | null |
| KoboldAI/fairseq-dense-13B-Janeway | | main | false | float32 | Original | RUNNING | 2023-12-11T15:19:11 | pretrained | 10 | 13 | mit | null |
| LeoLM/leo-hessianai-7b-chat | | main | false | float32 | Original | FINISHED | 2023-12-11T15:29:23 | instruction-tuned | 11 | 7 | null | null |
| LeoLM/leo-hessianai-7b | | main | false | float32 | Original | FINISHED | 2023-12-11T15:52:44 | pretrained | 32 | 7 | null | null |
| NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss | | main | false | float16 | Original | RUNNING | 2024-01-29T17:05:05 | 🔶 : fine-tuned | 10 | 46.703 | cc-by-nc-4.0 | null |
| Nexusflow/NexusRaven-V2-13B | | main | false | bfloat16 | Original | FINISHED | 2024-01-29T16:55:48 | 🔶 : fine-tuned | 344 | 13 | other | null |
| NotAiLOL/Yi-1.5-dolphin-9B | | main | false | bfloat16 | Original | RUNNING | 2024-05-17T20:47:37 | 🔶 : fine-tuned on domain-specific datasets | 0 | 8.829 | apache-2.0 | null |
| NousResearch/Llama-2-13b-hf | | main | false | float32 | Original | FINISHED | 2023-12-07T22:47:45 | pretrained | 64 | 13.016 | null | null |
| NousResearch/Llama-2-7b-chat-hf | | main | false | float32 | Original | FINISHED | 2023-12-03T18:26:34 | instruction-tuned | 63 | 6.738 | null | null |
| NousResearch/Llama-2-7b-hf | | main | false | float32 | Original | FINISHED | 2023-12-03T18:27:05 | pretrained | 92 | 6.738 | null | null |
| NousResearch/Nous-Hermes-Llama2-13b | | main | false | float32 | Original | FINISHED | 2023-12-03T18:27:19 | pretrained | 251 | 13 | mit | null |
| NousResearch/Nous-Hermes-llama-2-7b | | main | false | float32 | Original | FINISHED | 2023-12-07T22:46:18 | pretrained | 54 | 6.738 | mit | null |
| NousResearch/Yarn-Mistral-7b-128k | | main | false | float32 | Original | FINISHED | 2023-12-03T18:27:34 | pretrained | 470 | 7 | apache-2.0 | null |
| Open-Orca/Mistral-7B-OpenOrca | | main | false | float32 | Original | FINISHED | 2023-12-03T18:27:13 | pretrained | 485 | 7 | apache-2.0 | null |
| Open-Orca/Mistral-7B-SlimOrca | | main | false | float32 | Original | FINISHED | 2023-12-11T15:53:45 | pretrained | 21 | 7 | apache-2.0 | null |
| Open-Orca/OpenOrca-Platypus2-13B | | main | false | float32 | Original | FINISHED | 2023-12-11T15:20:13 | pretrained | 219 | 13 | cc-by-nc-4.0 | null |
| PygmalionAI/pygmalion-6b | | main | false | float32 | Original | FINISHED | 2024-09-22T15:54:39 | pretrained | 725 | 6 | creativeml-openrail-m | null |
| Q-bert/MetaMath-Cybertron-Starling | | main | false | bfloat16 | Original | RUNNING | 2024-01-30T15:57:50 | 🔶 : fine-tuned | 38 | 7.242 | cc-by-nc-4.0 | null |
| Qwen/Qwen1.5-14B | | main | false | bfloat16 | Original | FINISHED | 2024-05-22T05:22:08 | 🟢 : pretrained | 33 | 14.167 | other | null |
| Qwen/Qwen1.5-32B-Chat | | main | false | bfloat16 | Original | FINISHED | 2024-05-22T05:22:38 | 💬 : chat models (RLHF, DPO, IFT, ...) | 93 | 32.512 | other | null |
| Qwen/Qwen2.5-32B-Instruct | | main | false | bfloat16 | Original | FINISHED | 2024-09-18T22:22:41 | 💬 : chat models (RLHF, DPO, IFT, ...) | 8 | 32.764 | apache-2.0 | null |
| Steelskull/Umbra-v2.1-MoE-4x10.7 | | main | false | bfloat16 | Original | RUNNING | 2024-01-29T13:48:43 | 🔶 : fine-tuned | 1 | 36.099 | apache-2.0 | null |
| TheBloke/Falcon-180B-Chat-GPTQ | | main | false | float32 | Original | RUNNING | 2023-12-07T22:44:49 | fine-tuned | 66 | 197.968 | unknown | null |
| TheBloke/Llama-2-13B-chat-AWQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:47:40 | instruction-tuned | 12 | 2.026 | llama2 | null |
| TheBloke/Llama-2-13B-chat-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-03T18:27:31 | instruction-tuned | 309 | 16.232 | llama2 | null |
| TheBloke/Llama-2-13B-fp16 | | main | false | float32 | Original | FINISHED | 2023-12-07T22:47:30 | pretrained | 53 | 13 | null | null |
| TheBloke/Llama-2-70B-Chat-AWQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:44:51 | fine-tuned | 9 | 9.684 | llama2 | null |
| TheBloke/Llama-2-70B-Chat-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:08 | fine-tuned | 232 | 72.816 | llama2 | null |
| TheBloke/Llama-2-70B-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:47:16 | fine-tuned | 78 | 72.816 | llama2 | null |
| TheBloke/Llama-2-7B-Chat-AWQ | | main | false | float32 | Original | FINISHED | 2023-12-11T15:32:24 | fine-tuned | 8 | 1.129 | llama2 | null |
| TheBloke/Llama-2-7B-Chat-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-03T18:26:23 | fine-tuned | 185 | 9.048 | llama2 | null |
| TheBloke/Llama-2-7B-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:48:03 | fine-tuned | 65 | 9.048 | llama2 | null |
| TheBloke/Mistral-7B-OpenOrca-AWQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:47:42 | fine-tuned | 31 | 1.196 | apache-2.0 | null |
| TheBloke/Mistral-7B-OpenOrca-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-03T18:27:01 | fine-tuned | 83 | 9.592 | apache-2.0 | null |
| TheBloke/Mythalion-13B-AWQ | | main | false | float32 | Original | FINISHED | 2023-12-11T15:54:46 | fine-tuned | 4 | 2.026 | llama2 | null |
| TheBloke/MythoMax-L2-13B-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:52 | fine-tuned | 97 | 16.232 | other | null |
| TheBloke/Upstage-Llama-2-70B-instruct-v2-AWQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:48:32 | instruction-tuned | 0 | 9.684 | llama2 | null |
| TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:12 | fine-tuned | 267 | 16.224 | other | null |
| TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ | | main | false | float32 | Original | RUNNING | 2023-12-07T22:47:06 | fine-tuned | 444 | 35.584 | other | null |
| TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:38 | fine-tuned | 120 | 9.04 | other | null |
| TheBloke/orca_mini_v3_7B-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-11T15:35:27 | fine-tuned | 9 | 9.048 | other | null |
| TheBloke/zephyr-7B-alpha-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:47:37 | fine-tuned | 25 | 9.592 | mit | null |
| TheBloke/zephyr-7B-beta-AWQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:46:40 | fine-tuned | 26 | 1.196 | mit | null |
| TheBloke/zephyr-7B-beta-GPTQ | | main | false | float32 | Original | FINISHED | 2023-12-07T22:46:21 | fine-tuned | 44 | 9.592 | mit | null |
| TinyLlama/TinyLlama-1.1B-Chat-v0.3 | | main | false | float32 | Original | FINISHED | 2023-12-07T22:47:56 | pretrained | 33 | 1.1 | apache-2.0 | null |
| TinyLlama/TinyLlama-1.1B-Chat-v0.6 | | main | false | float32 | Original | FINISHED | 2023-12-11T15:21:15 | pretrained | 23 | 1.1 | apache-2.0 | null |
| TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T | | main | false | float32 | Original | FINISHED | 2023-12-11T15:42:34 | pretrained | 54 | 1.1 | apache-2.0 | null |
| TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T | | main | false | float32 | Original | FINISHED | 2023-12-11T14:35:43 | pretrained | 26 | 1.1 | apache-2.0 | null |
| UjjwalP/testnic1 | | main | false | float32 | Original | RUNNING | 2024-04-11T06:49:35 | 🔶 : fine-tuned on domain-specific datasets | 0 | 0.582 | apache-2.0 | null |
| aari1995/germeo-7b-laser | | main | false | bfloat16 | Original | FINISHED | 2024-01-29T12:54:01 | 🟦 : RL-tuned | 2 | 7.242 | apache-2.0 | null |
| abacusai/Smaug-Llama-3-70B-Instruct | | main | false | bfloat16 | Original | RUNNING | 2024-05-17T19:02:43 | 💬 : chat models (RLHF, DPO, IFT, ...) | 0 | 70.554 | llama2 | null |
| ai-forever/mGPT | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:29 | pretrained | 190 | 0 | apache-2.0 | null |
| ai4bharat/Airavata | | main | false | float32 | Original | RUNNING | 2024-01-30T06:55:04 | 🔶 : fine-tuned | 17 | 6.87 | llama2 | null |
| augtoma/qCammel-70-x | | main | false | float32 | Original | RUNNING | 2023-12-07T22:47:53 | pretrained | 22 | 0 | other | null |
| berkeley-nest/Starling-LM-7B-alpha | | main | false | float32 | Original | FINISHED | 2023-12-11T15:18:10 | pretrained | 335 | 7.242 | cc-by-nc-4.0 | null |
| bigscience/bloom-1b1 | | main | false | float32 | Original | FINISHED | 2023-12-07T22:47:14 | pretrained | 44 | 1.065 | bigscience-bloom-rail-1.0 | null |
| bigscience/bloom-1b7 | | main | false | float32 | Original | FINISHED | 2023-12-07T22:46:16 | pretrained | 103 | 1.722 | bigscience-bloom-rail-1.0 | null |
| bigscience/bloom-3b | | main | false | float32 | Original | FINISHED | 2023-12-07T22:46:26 | pretrained | 70 | 3.003 | bigscience-bloom-rail-1.0 | null |
| bigscience/bloom-560m | | main | false | float32 | Original | FINISHED | 2023-12-03T18:19:48 | pretrained | 285 | 0.559 | bigscience-bloom-rail-1.0 | null |
| bigscience/bloom-7b1 | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:33 | pretrained | 148 | 7.069 | bigscience-bloom-rail-1.0 | null |
| bigscience/bloom | | main | false | float32 | Original | RUNNING | 2023-12-07T22:47:50 | pretrained | 4,218 | 176.247 | bigscience-bloom-rail-1.0 | null |
| bigscience/bloomz-3b | | main | false | float32 | Original | FINISHED | 2023-12-07T22:48:19 | pretrained | 72 | 3.003 | bigscience-bloom-rail-1.0 | null |
| bigscience/bloomz-560m | | main | false | float32 | Original | FINISHED | 2023-12-03T18:26:48 | pretrained | 85 | 0.559 | bigscience-bloom-rail-1.0 | null |
| bigscience/bloomz-7b1 | | main | false | float32 | Original | FINISHED | 2023-12-07T22:46:46 | pretrained | 115 | 7 | bigscience-bloom-rail-1.0 | null |
| cognitivecomputations/WizardLM-33B-V1.0-Uncensored | | main | false | float16 | Original | RUNNING | 2024-01-30T15:58:52 | 🔶 : fine-tuned | 56 | 33 | other | null |
| cognitivecomputations/dolphin-2.9.1-yi-1.5-34b | | main | false | bfloat16 | Original | FINISHED | 2024-05-25T05:41:27 | 🔶 : fine-tuned on domain-specific datasets | 28 | 34.389 | apache-2.0 | null |
| cognitivecomputations/dolphin-2.9.1-yi-1.5-9b | | main | false | bfloat16 | Original | RUNNING | 2024-05-25T05:44:35 | 🔶 : fine-tuned on domain-specific datasets | 19 | 8.829 | apache-2.0 | null |
| ddeshler/HerLlama-7B-slerp | | main | false | bfloat16 | Original | FINISHED | 2024-08-27T19:58:30 | 🤝 : base merges and moerges | 0 | 7.242 | apache-2.0 | null |
| deepseek-ai/deepseek-llm-67b-chat | | main | false | float16 | Original | RUNNING | 2024-01-30T15:59:40 | 🔶 : fine-tuned | 142 | 67 | other | null |
| distilbert/distilroberta-base | | main | false | float32 | Original | FINISHED | 2024-05-10T01:33:43 | 🟢 : pretrained | 119 | 0.083 | apache-2.0 | null |
| ehartford/dolphin-2.1-mistral-7b | | main | false | float32 | Original | FINISHED | 2023-12-11T15:39:30 | pretrained | 233 | 7 | apache-2.0 | null |
| ehartford/dolphin-2.2.1-mistral-7b | | main | false | float32 | Original | FINISHED | 2023-12-07T22:45:44 | fine-tuned | 118 | 7 | apache-2.0 | null |
| facebook/bart-base | | main | false | float32 | Original | RUNNING | 2024-03-15T17:02:42 | | 137 | 0.139 | apache-2.0 | null |
| facebook/opt-13b | | main | false | float32 | Original | FINISHED | 2023-12-07T22:46:09 | pretrained | 60 | 13 | other | null |
| facebook/xglm-564M | | main | false | float32 | Original | FINISHED | 2023-12-11T15:33:26 | pretrained | 32 | 0.564 | mit | null |
| fearlessdots/WizardLM-2-7B-abliterated | | main | false | bfloat16 | Original | FINISHED | 2024-05-25T13:37:35 | 🔶 : fine-tuned on domain-specific datasets | 1 | 7.242 | apache-2.0 | null |
| garage-bAInd/Platypus2-7B | | main | false | float32 | Original | FINISHED | 2023-12-11T15:55:47 | pretrained | 4 | 7 | cc-by-nc-sa-4.0 | null |
| google/flan-t5-base | | main | false | float32 | Original | RUNNING | 2024-03-12T18:59:10 | pretrained | 598 | 0.248 | apache-2.0 | null |
| google/flan-t5-large | | main | false | float32 | Original | RUNNING | 2024-03-12T18:59:17 | pretrained | 428 | 0.783 | apache-2.0 | null |
| google/flan-t5-xxl | | main | false | float32 | Original | RUNNING | 2024-03-12T18:59:01 | pretrained | 1,074 | 11.267 | apache-2.0 | null |
| google/gemma-1.1-7b-it | | main | false | bfloat16 | Original | RUNNING | 2024-05-16T16:23:41 | 💬 : chat models (RLHF, DPO, IFT, ...) | 230 | 8.538 | gemma | null |
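While the automatic viewer is unavailable, rows like the ones previewed above can still be loaded with the `datasets` library by pointing it at files that share one schema. The sketch below is a minimal example: the single file path comes from the error message above, and extending the selection to other request files (or a glob pattern) is an assumption about the repository layout rather than something verified here.

```python
from datasets import load_dataset

# Load one request file explicitly to avoid the schema mismatch across the whole repo.
requests = load_dataset(
    "hallucinations-leaderboard/requests",
    data_files=["EleutherAI/gpt-neo-1.3B_eval_request_False_False_False.json"],
    split="train",
)

for row in requests:
    print(row["model"], row["precision"], row["status"], row["job_id"])
```

Passing an explicit `data_files` selection sidesteps the `DatasetGenerationCastError`, because only files with identical columns are combined into a single table.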
End of preview.
README.md exists but content is empty.
- Downloads last month: 32,582