PrunaAI/llama-moe-LLaMA-MoE-v1-3_5B-4_16-QUANTO-float8bit-smashed
Tags: Transformers · pruna-ai · Inference Endpoints
Files and versions
1 contributor · History: 2 commits
Latest commit by sharpenb: 209e754b7aa209318291119cb6241261e3aa4be83dde1aa78eae61edfb885e4e (e5d7769, verified, 5 months ago)
.gitattributes · 1.52 kB · initial commit · 5 months ago
model.pt · 13.6 GB · LFS · 5 months ago
SHA256: 209e754b7aa209318291119cb6241261e3aa4be83dde1aa78eae61edfb885e4e

pickle · Detected Pickle imports (33):
"quanto.tensor.qtype.qtype"
"torch.nn.modules.container.ParameterList"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LinearGLUMoELayer"
"torch.nn.modules.sparse.Embedding"
"torch._utils._rebuild_tensor_v2"
"quanto.nn.qlinear.QLinear"
"torch.nn.modules.activation.Softplus"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LinearGLUExperts"
"torch._utils._rebuild_parameter"
"torch.nn.modules.container.ModuleList"
"torch.nn.modules.container.Sequential"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaMoEModel"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaRMSNorm"
"torch.Size"
"torch.nn.modules.activation.Tanh"
"torch.nn.modules.activation.SiLU"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaMoEDecoderLayer"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.TopKBalancedNoisyGate"
"__builtin__.set"
"collections.OrderedDict"
"torch.BFloat16Storage"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaAttention"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.UniversalCalculator"
"torch.nn.modules.activation.Softmax"
"torch.bfloat16"
"torch.FloatStorage"
"torch.distributions.normal.Normal"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaRotaryEmbedding"
"torch.device"
"transformers.generation.configuration_utils.GenerationConfig"
"torch.float8_e4m3fn"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.configuration_llama_moe.LlamaMoEConfig"
"transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaMoEForCausalLM"
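Because model.pt is a single pickled module rather than a safetensors file, loading it executes the import list above, so it should only be done for checkpoints you trust. The sketch below is an assumption-laden illustration, not Pruna AI's documented usage: it presumes the quanto package is installed and that the custom llama-moe modeling code referenced by the transformers_modules entries is importable locally (for example, previously cached via trust_remote_code).

```python
# Minimal loading sketch (assumptions: quanto installed, custom llama-moe
# modeling code importable). Unpickling runs arbitrary code: trusted files only.
import torch
from huggingface_hub import hf_hub_download

# Fetch the 13.6 GB LFS file from this repo.
path = hf_hub_download(
    repo_id="PrunaAI/llama-moe-LLaMA-MoE-v1-3_5B-4_16-QUANTO-float8bit-smashed",
    filename="model.pt",
)

# weights_only=False is required because the file pickles whole nn.Module
# objects (QLinear, LlamaMoEForCausalLM, ...), not just tensors; newer torch
# releases default to weights_only=True and would reject these classes.
model = torch.load(path, map_location="cpu", weights_only=False)
model.eval()
```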