URL incorrectly handled in tokenization

#5
by FocacciaX - opened

Hi, I am using the tokenizer, but it behaves in a way that I don't understand.

In the example below, the last few words contain a URL, which should not be split into subwords. However, the tokenizer chopped the URL apart.
I wonder whether this is because I am not handling it correctly, whether SciBERT behaves incorrectly, or whether it is an inevitable problem with BERT tokenization.

Thanks!

Sentence:
Methods PSY expression correlation analysis An expression correlation analysis was performed for PSY using the freely available Arabidopsis co-expression tool (ACT) (__http://www.arabidopsis.leeds.ac.uk/__)[31].

Tokens:
['[CLS]', 'methods', 'ps', '##y', 'expression', 'correlation', 'analysis', 'an', 'expression', 'correlation', 'analysis', 'was', 'performed', 'for', 'ps', '##y', 'using', 'the', 'freely', 'available', 'ar', '##abi', '##do', '##ps', '##is', 'co', '-', 'expression', 'tool', '(', 'act', ')', '(', __'http', ':', '/', '/', 'www', '.', 'ar', '##abi', '##do', '##ps', '##is', '.', 'le', '##eds', '.', 'ac', '.', 'uk', '/'__, ')', '[', '31', ']', '.', '[SEP]']

word_ids:
[None, 0, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 12, 13, 14, 15, 16, 17, 17, 17, 17, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 32, 32, 32, 32, 33, 34, __34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44__, None]

Corresponding code:

from transformers import AutoTokenizer

# Load the tokenizer (checkpoint name assumed; the question only says "scibert")
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")

tokenized_inputs = tokenizer(
    sent,
    padding="max_length",
    truncation=True,
    max_length=data_args_max_seq_length,
    is_split_into_words=False,
    return_tensors="pt",  # Return PyTorch tensors
)
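
This appears to be inherent to BERT-style tokenization rather than a SciBERT-specific bug: the basic pre-tokenizer splits on every punctuation character (':', '/', '.') before WordPiece runs, so a URL is always broken into pieces no matter what the vocabulary contains. The minimal sketch below reproduces this on the URL alone and shows one way to keep the URL aligned to a single word via is_split_into_words=True. The checkpoint name is an assumption, and sent refers to the sentence variable from the snippet above.

from transformers import AutoTokenizer

# Assumed checkpoint; the question does not show which SciBERT checkpoint is loaded.
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")

url = "http://www.arabidopsis.leeds.ac.uk/"

# The basic pre-tokenization step splits on punctuation (':', '/', '.')
# before WordPiece is applied, so the URL is cut apart regardless of the vocabulary.
print(tokenizer.tokenize(url))

# If the goal is to keep the URL aligned to one word (e.g. for label alignment),
# one option is to pre-split on whitespace and pass is_split_into_words=True:
# WordPiece still splits the URL into subtokens, but word_ids() then maps
# all of them back to a single word index.
words = sent.split()
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
print(enc.word_ids())

With the pre-split input, every subtoken of the URL maps back to the same word index, so labels can be aligned per whitespace word even though the URL itself is still split into subwords.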
FocacciaX changed discussion status to closed