
Hierarchical Attention Transformer (HAT) / kiddothe2b/adhoc-hierarchical-transformer-base-4096

Model description

This is a Hierarchical Attention Transformer (HAT) model as presented in An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022).

The model has been warm-started by re-using the weights of RoBERTa (Liu et al., 2019), but it has NOT undergone continued pre-training. It supports sequences of up to 4,096 tokens.

HAT uses hierarchical attention, which is a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences.
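As a rough illustration of the segmentation step, the sketch below splits a 4,096-token input into fixed-length segments; the 128-token segment length is an assumption based on the paper's default configuration, not a value read from this checkpoint.

token_ids = list(range(4096))          # stand-in for a tokenized long document
segment_length = 128                   # assumed default segment length from the paper
segments = [token_ids[i:i + segment_length]
            for i in range(0, len(token_ids), segment_length)]
print(len(segments))                   # 32 segments: each is encoded with segment-wise attention,
                                       # and segment representations then interact via cross-segment attention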

Note: If you wish to use a fully pre-trained HAT model, you have to use kiddothe2b/hierarchical-transformer-base-4096.

Intended uses & limitations

The model is intended to be fine-tuned on a downstream task. See the model hub to look for other versions of HAT, or for versions fine-tuned on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.

How to use

You can fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# trust_remote_code=True is required because HAT ships its own modeling code.
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/adhoc-hierarchical-transformer-base-4096", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/adhoc-hierarchical-transformer-base-4096", trust_remote_code=True)
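A minimal usage sketch, continuing from the snippet above and assuming the checkpoint's remote modeling code exposes the standard sequence-classification interface; the example document is a placeholder.

import torch

document = "A very long document ..."  # placeholder text; real inputs can span up to 4,096 tokens
inputs = tokenizer(document, truncation=True, max_length=4096, padding="max_length", return_tensors="pt")

with torch.no_grad():                  # forward pass through the (not yet fine-tuned) classification head
    logits = doc_classifier(**inputs).logits
print(logits.shape)                    # (batch_size, num_labels)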

Note: If you wish to use a fully pre-trained HAT model, you have to use kiddothe2b/hierarchical-transformer-base-4096.

Limitations and bias

The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. As a result, the model can make biased predictions.

Training procedure

Training and evaluation data

The model has been warm-started from the roberta-base checkpoint.

Framework versions

  • Transformers 4.19.0.dev0
  • Pytorch 1.11.0+cu102
  • Datasets 2.0.0
  • Tokenizers 0.11.6

Citing

If you use HAT in your research, please cite:

An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification. Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).

@misc{chalkidis-etal-2022-hat,
  url = {https://arxiv.org/abs/2210.05529},
  author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
  title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
  publisher = {arXiv},
  year = {2022},
}