I uploaded qa_model_new.pkl (commit eb8275e, verified) to the Hub, and the file page flags it with this pickle scan:
Detected Pickle imports (42)
- "scipy.sparse._csr.csr_matrix",
- "sklearn.feature_extraction.text.TfidfTransformer",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaPooler",
- "torch.nn.modules.container.ModuleList",
- "torch._utils._rebuild_tensor_v2",
- "torch.float32",
- "numpy.dtype",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaAttention",
- "tokenizers.models.Model",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaOutput",
- "numpy.core.multiarray._reconstruct",
- "numpy.ndarray",
- "numpy.float64",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaModel",
- "torch._utils._rebuild_parameter",
- "torch._C._nn.gelu",
- "sklearn.feature_extraction.text.TfidfVectorizer",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaEmbeddings",
- "langchain.text_splitter.RecursiveCharacterTextSplitter",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaIntermediate",
- "tokenizers.AddedToken",
- "torch.storage._load_from_bytes",
- "sentence_transformers.SentenceTransformer.SentenceTransformer",
- "transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaLayer",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaEncoder",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaSelfAttention",
- "torch.nn.modules.linear.Linear",
- "torch.nn.modules.activation.Tanh",
- "transformers.models.xlm_roberta.tokenization_xlm_roberta_fast.XLMRobertaTokenizerFast",
- "sentence_transformers.models.Pooling.Pooling",
- "transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaSelfOutput",
- "transformers.activations.GELUActivation",
- "torch.nn.modules.normalization.LayerNorm",
- "collections.OrderedDict",
- "torch.nn.modules.dropout.Dropout",
- "torch.device",
- "tokenizers.Tokenizer",
- "question_answering.data.data_record.DataRecord",
- "sentence_transformers.models.Transformer.Transformer",
- "builtins.len"
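For what it's worth, a scan like the one above is essentially enumerating the GLOBAL / STACK_GLOBAL opcodes in the pickle stream. You can reproduce that kind of list locally with the standard-library pickletools module. This is a rough sketch of the idea, not the Hub's actual scanner; the STACK_GLOBAL handling is a heuristic:

```python
import pickle
import pickletools
from collections import OrderedDict

def pickle_imports(data: bytes) -> set:
    """Heuristically list the module.name pairs a pickle would import."""
    imports = set()
    recent_strings = []  # STACK_GLOBAL consumes module/qualname strings pushed before it
    for op, arg, pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            # arg is "module qualname" as a single space-separated string
            module, name = arg.split(" ", 1)
            imports.add(module + "." + name)
        elif op.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            imports.add(recent_strings[-2] + "." + recent_strings[-1])
        elif isinstance(arg, str) and "UNICODE" in op.name:
            recent_strings.append(arg)
    return imports

# Example: a pickled OrderedDict references collections.OrderedDict
data = pickle.dumps(OrderedDict(a=1))
print(sorted(pickle_imports(data)))
```

Running this over the raw bytes of a .pkl shows why the scanner lists entries like collections.OrderedDict or numpy.ndarray: each one is a class or function the unpickler will import and call, which is exactly why arbitrary pickles are treated as untrusted code.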
How can I fix this? The file itself is 1.18 GB.
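One direction I'm considering, in case it helps frame answers: the scan includes a project-local class (question_answering.data.data_record.DataRecord), which means anyone unpickling the file also needs that exact package importable. A sketch of sidestepping that by storing such records as plain JSON instead of pickling them; the DataRecord fields here are made up for illustration, since I can't see inside the real class:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical stand-in for question_answering.data.data_record.DataRecord;
# the real fields are unknown.
@dataclass
class DataRecord:
    question: str
    answer: str

records = [DataRecord("What is X?", "X is ..."),
           DataRecord("What is Y?", "Y is ...")]

# Dump plain data instead of pickling the class: loaders no longer need
# the question_answering package on their import path.
payload = json.dumps([asdict(r) for r in records])

# Round-trip: rebuild the records from plain dicts.
restored = [DataRecord(**d) for d in json.loads(payload)]
print(restored[0].question)
```

Presumably the torch / transformers components would likewise be saved through their own save_pretrained-style formats rather than one monolithic pickle, but I'm not sure what the recommended split is here.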