---
language:
  - en
pipeline_tag: fill-mask
---

How to run:

Install these requirements:

```bash
pip install flash_attn
pip install transformers==4.45.2  # newer/older versions may work, but this was tested with 4.45.2
```
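
Note that flash_attn compiles CUDA kernels at install time, so it needs an NVIDIA GPU toolchain available. If the build fails, the flag below (taken from the flash-attn project's own install instructions) is commonly needed:

```bash
pip install flash-attn --no-build-isolation
```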

Then you can load the model with:

```python
from transformers import AutoModel, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

model = AutoModel.from_pretrained("ModernBERT/bert24-base-v2-2ep-decay_100B-0.08-lr", trust_remote_code=True)
model = model.to(device)
model.eval()
tokenizer = AutoTokenizer.from_pretrained("ModernBERT/bert24-base-v2-2ep-decay_100B-0.08-lr", trust_remote_code=True)

# Test it out and encode some text
text = "This is a test sentence"
inputs = tokenizer(text, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}

with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
print(last_hidden_states.shape)  # (batch_size, sequence_length, hidden_size)
```
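
Since the pipeline tag is fill-mask, you can also run the model with a masked-LM head. A minimal sketch, assuming the checkpoint ships masked-LM head weights loadable via `AutoModelForMaskedLM` (not verified for this repo; if the head is missing, it will be randomly initialized and predictions will be meaningless):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch

repo = "ModernBERT/bert24-base-v2-2ep-decay_100B-0.08-lr"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
# Assumption: the checkpoint includes masked-LM head weights
mlm = AutoModelForMaskedLM.from_pretrained(repo, trust_remote_code=True)
mlm.eval()

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = mlm(**inputs).logits

# Find the masked position and take the highest-scoring token there
mask_idx = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_idx].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```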