Latest commit: Upload trans-sample.png · f5aa6a4 · verified
- · 1.52 kB · initial commit
- · 5.94 kB · Update README.md
llama3.1_8b_translate_w_bit_4_awq_amd.pt
Detected Pickle imports (23):
- modeling_llama_amd_3_1.LlamaAttention
- modeling_llama_amd_3_1.LlamaModel
- torch.CharStorage
- torch._utils._rebuild_tensor_v2
- torch.BFloat16Storage
- modeling_llama_amd_3_1.LlamaRMSNorm
- modeling_llama_amd_3_1.LlamaDecoderLayer
- modeling_llama_amd_3_1.LlamaMLP
- transformers.generation.configuration_utils.GenerationConfig
- modeling_llama_amd_3_1.LlamaForCausalLM
- collections.OrderedDict
- torch.nn.modules.container.ModuleList
- modeling_llama_amd_3_1.LlamaRotaryEmbedding
- torch.FloatStorage
- torch.nn.modules.activation.SiLU
- torch.Size
- qlinear.QLinearPerGrp
- torch.nn.modules.linear.Linear
- torch._utils._rebuild_parameter
- torch.nn.modules.sparse.Embedding
- torch.bfloat16
- __builtin__.set
- transformers.models.llama.configuration_llama.LlamaConfig
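A scan report like the one above can be approximated locally: `pickletools.genops` parses a pickle's opcode stream without executing it, so the `module.name` globals a checkpoint would import can be listed before anything is deserialized. A minimal sketch (the function name `pickle_imports` is illustrative, and the naive string tracking ignores memo reuse, so it is a rough approximation of what real scanners do):

```python
import pickle
import pickletools
from collections import OrderedDict

def pickle_imports(data: bytes) -> set[str]:
    """Collect module.name globals referenced by a pickle stream.

    Unlike pickle.load, pickletools.genops only parses opcodes, so no
    code in the file can run while scanning.
    """
    imports = set()
    strings = []  # recently pushed strings, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocols 0-3: arg arrives as "module name"
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol 4+: module and qualname were pushed as strings
            # just before this opcode (naive: ignores memo lookups)
            imports.add(f"{strings[-2]}.{strings[-1]}")
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
    return imports

blob = pickle.dumps(OrderedDict(a=1))
print(sorted(pickle_imports(blob)))  # → ['collections.OrderedDict']
```

Any global outside a trusted allowlist (here, the custom `modeling_llama_amd_3_1.*` and `qlinear.QLinearPerGrp` classes alongside the usual `torch` storage and rebuild helpers) is a reason not to deserialize the file with a plain `pickle.load` unless the repository is trusted.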
11.5 GB · Rename llama3.1_8b_translation_w_bit_4_awq_amd.pt to llama3.1_8b_translate_w_bit_4_awq_amd.pt
- · 312 Bytes · Upload 4 files
- · 9.5 MB · Upload 4 files
- · 52.9 kB · Upload 4 files
- · 9.73 kB · Upload trans-sample.png