Is it possible to get encoder embeddings?
#44 opened by asimkin
When trying to get encoder output I get this error:
AttributeError: 'MPTForCausalLM' object has no attribute 'encoder'
I tried to run this code:

import torch
# tokenizer and model (an MPT checkpoint) were loaded earlier
text = "Here is some input text."
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    outputs = model.encoder(inputs['input_ids'])  # this line raises the AttributeError above
    embeddings = outputs.last_hidden_state
Is there an alternative way to get embeddings?
MPT is a decoder-only model, so it has no encoder submodule. You can print the model to see its structure:
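For instance, with the model object from the question's snippet:

print(model)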
MPTForCausalLM(
  (transformer): MPTModel(
    (wte): Embedding(50432, 4096)
    (emb_drop): Dropout(p=0, inplace=False)
    (blocks): ModuleList(
      (0-31): 32 x MPTBlock(
        (norm_1): LPLayerNorm((4096,), eps=1e-05, elementwise_affine=True)
        (attn): MultiheadAttention(
          (Wqkv): Linear(in_features=4096, out_features=12288, bias=False)
          (out_proj): Linear(in_features=4096, out_features=4096, bias=False)
        )
        (norm_2): LPLayerNorm((4096,), eps=1e-05, elementwise_affine=True)
        (ffn): MPTMLP(
          (up_proj): Linear(in_features=4096, out_features=16384, bias=False)
          (act): GELU(approximate='none')
          (down_proj): Linear(in_features=16384, out_features=4096, bias=False)
        )
        (resid_attn_dropout): Dropout(p=0, inplace=False)
        (resid_ffn_dropout): Dropout(p=0, inplace=False)
      )
    )
    (norm_f): LPLayerNorm((4096,), eps=1e-05, elementwise_affine=True)
  )
)
It has a transformer.wte.weight matrix if you want low-level access to the embeddings.
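A minimal sketch of both approaches, assuming the mosaicml/mpt-7b checkpoint (the exact checkpoint isn't stated in this thread) and that its remote modeling code accepts output_hidden_states, which the MPT code on the Hub does. The static lookup uses transformer.wte directly; the forward pass gives contextual embeddings, the decoder-only analogue of an encoder's last_hidden_state:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'mosaicml/mpt-7b'  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
model.eval()

text = "Here is some input text."
inputs = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    # Static (non-contextual) token embeddings: a lookup into
    # model.transformer.wte.weight, which has shape (50432, 4096).
    static_embeddings = model.transformer.wte(inputs['input_ids'])

    # Contextual embeddings: the final hidden state of the decoder stack.
    outputs = model(**inputs, output_hidden_states=True)
    contextual_embeddings = outputs.hidden_states[-1]

print(static_embeddings.shape)      # (1, seq_len, 4096)
print(contextual_embeddings.shape)  # (1, seq_len, 4096)

If you need a single sentence-level vector, one common choice is to mean-pool contextual_embeddings over the sequence dimension, masking out padding tokens with the attention mask.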
jacobfulano changed discussion status to closed