antqin/dpo-model-2
Tags: PEFT · Safetensors · arxiv:1910.09700
Files and versions (branch: main) — 1 contributor, 2 commits
Latest commit: "Upload DPO model" (239db0b, verified) by antqin, about 1 month ago
.gitattributes              Safe    1.52 kB            initial commit      about 1 month ago
README.md                   Safe    5.11 kB            Upload DPO model    about 1 month ago
adapter_config.json         Safe    656 Bytes          Upload DPO model    about 1 month ago
adapter_model.safetensors   Safe    3.42 MB (LFS)      Upload DPO model    about 1 month ago
special_tokens_map.json     Safe    325 Bytes          Upload DPO model    about 1 month ago
tokenizer.json              Safe    9.09 MB            Upload DPO model    about 1 month ago
tokenizer_config.json       Safe    54.6 kB            Upload DPO model    about 1 month ago
training_args.bin           pickle  5.11 kB (LFS)      Upload DPO model    about 1 month ago

Detected Pickle imports (9):
- transformers.training_args.TrainingArguments
- transformers.trainer_utils.IntervalStrategy
- transformers.training_args.OptimizerNames
- transformers.trainer_utils.HubStrategy
- transformers.trainer_utils.SchedulerType
- transformers.trainer_pt_utils.AcceleratorConfig
- accelerate.utils.dataclasses.DistributedType
- torch.device
- accelerate.state.PartialState
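The pickle-import list above comes from a static scan: because unpickling can execute arbitrary code, the Hub inspects a pickle's opcodes to report which globals it would import on load, without ever unpickling it. A minimal sketch of that idea using only the Python standard library (an illustration of the technique, not Hugging Face's actual scanner):

```python
import pickle
import pickletools


def pickle_imports(data: bytes) -> set[str]:
    """Statically collect the module.name globals a pickle would import,
    without unpickling (and thus without executing) anything."""
    imports = set()
    ops = list(pickletools.genops(data))
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name == "GLOBAL":
            # GLOBAL's argument is "module name" as one space-separated string
            module, _, name = arg.partition(" ")
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # STACK_GLOBAL takes module and name from the two most recent
            # string opcodes on the stack (simplified lookup for this sketch)
            strings = [a for o, a, _ in ops[:i]
                       if o.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE")]
            if len(strings) >= 2:
                imports.add(f"{strings[-2]}.{strings[-1]}")
    return imports


# demo: pickling a datetime object records a datetime.datetime global
import datetime
data = pickle.dumps(datetime.datetime(2024, 1, 1))
print(sorted(pickle_imports(data)))  # → ['datetime.datetime']
```

Running this over `training_args.bin` would surface entries like `torch.device` and `transformers.training_args.TrainingArguments`, matching the list above. The scan is conservative: it tells you what *would* be imported, which is why weight files themselves are better stored as safetensors (as `adapter_model.safetensors` is here) rather than pickle.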