Adding `safetensors` variant of this model (#31, opened about 1 month ago by SFconvertbot)
How to use a specific version/commit/fix of the model? (#30, opened 2 months ago by stavisav)
Example Notebook (#29, opened 2 months ago by GCabas)
Encoding for long contexts (#28, opened 3 months ago by yoavkt, 1 reply)
Fix AutoModel not loading model correctly due to config_class inconsistency (#26, opened 6 months ago by liamclarkza, 12 replies)
Update bert_layers.py (#25, opened 6 months ago by ruffy369)
A modified DNABERT2 that also returns the attention (#24, opened 6 months ago by jaandoui)
Triton version (#23, opened 7 months ago by JiayiJennie, 9 replies)
How do I output the attention scores from the last layer of the encoder? (#22, opened 7 months ago by jkb0722, 1 reply)
Tokenization of more than 2 sequences (#21, opened 8 months ago by jaandoui, 1 reply)
Impact of Padding on DNABERT Model Performance (#20, opened 8 months ago by poilkjhytg)
Setting DNABERT-2 revision (#19, opened 8 months ago by SeanDoyle)
TypeError: forward() got an unexpected keyword argument 'attention_mask' (#18, opened 8 months ago by jkb0722)
Adding `safetensors` variant of this model (#17, opened 8 months ago by SFconvertbot)
Assertion Error / Implementation Error (#16, opened 8 months ago by 8497prashant, 1 reply)
Extract attention from model (#15, opened 9 months ago by kaustabanv, 1 reply)
Error on inference using DNABERT2 - can you please share the environment you used for running it? (#12, opened 11 months ago by NettaB, 2 replies)
Adding `safetensors` variant of this model (#11, opened 12 months ago by SFconvertbot)
Are the datasets for foundational model pre-training publicly accessible? (#10, opened 12 months ago by JayceCeleste)
Expecting bi-modal distribution of probabilities (#9, opened about 1 year ago by christianclough)
Inference fails with output_all_encoded_layers=True (#8, opened about 1 year ago by pg20sanger, 1 reply)
Adding `safetensors` variant of this model (#7, opened about 1 year ago by SFconvertbot)
The model class you are passing has a `config_class` attribute that is not consistent with the config class you passed (#6, opened about 1 year ago by saikiran7, 4 replies)
About the output of the tokenizer and the model (#4, opened about 1 year ago by RandyWang504, 2 replies)
Despite multiple trials and examining the model configuration, it seems that the model hosted on Hugging Face (`huggingface.co`) cannot handle sequences that exceed a length of 512 tokens (#3, opened about 1 year ago by hengchuangyin, 2 replies)
Issue in Code, at 114:24: def _fwd_kernel(..) (#1, opened over 1 year ago by Hosna, 4 replies)