Column schema (types and value ranges from the dataset viewer): modelId: string (length 4–112); sha: string (length 40); lastModified: string (length 24); tags: sequence; pipeline_tag: string (29 classes); private: bool (1 class); author: string (length 2–38, nullable); config: null; id: string (length 4–112); downloads: float64 (0 to 36.8M, nullable); likes: float64 (0 to 712, nullable); library_name: string (17 classes); __index_level_0__: int64 (0 to 38.5k); readme: string (length 0–186k).

| modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| hfl/chinese-macbert-base | a986e004d2a7f2a1c2f5a3edef4e20604a974ed1 | 2021-05-19T19:09:45.000Z | ["pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0", "autotrain_compatible"] | fill-mask | false | hfl | null | hfl/chinese-macbert-base | 36,823,840 | 43 | transformers | 0 | ---<br>language: [zh]<br>tags: [bert]<br>license: "apache-2.0"<br>---<br><p align="center"><br><br><br><img src="https://github.com/ymcui/MacBERT/raw/master/pics/banner.png" width="500"/><br><br><br></p><br><p align="center"><br><a href="https://github.com/ymcui/MacBERT/blob/master/LICENSE"><br><img alt="GitHub" src="https://img.... |
| microsoft/deberta-base | 7d4c0126b06bd59dccd3e48e467ed11e37b77f3f | 2022-01-13T13:56:18.000Z | ["pytorch", "tf", "rust", "deberta", "en", "arxiv:2006.03654", "transformers", "deberta-v1", "license:mit"] | null | false | microsoft | null | microsoft/deberta-base | 23,662,412 | 15 | transformers | 1 | ---<br>language: en<br>tags: deberta-v1<br>thumbnail: https://huggingface.co/front/thumbnails/microsoft.png<br>license: mit<br>---<br>## DeBERTa: Decoding-enhanced BERT with Disentangled Attention<br>[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. It... |
| bert-base-uncased | 418430c3b5df7ace92f2aede75700d22c78a0f95 | 2022-06-06T11:41:24.000Z | ["pytorch", "tf", "jax", "rust", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible"] | fill-mask | false | null | null | bert-base-uncased | 22,268,934 | 204 | transformers | 2 | ---<br>language: en<br>tags: [exbert]<br>license: apache-2.0<br>datasets: [bookcorpus, wikipedia]<br>---<br># BERT base model (uncased)<br>Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in<br>[this paper](https://arxiv.org/abs/1810.04805) and first released in<br>[this repository](http... |
| gpt2 | 6c0e6080953db56375760c0471a8c5f2929baf11 | 2021-05-19T16:25:59.000Z | ["pytorch", "tf", "jax", "tflite", "rust", "gpt2", "text-generation", "en", "transformers", "exbert", "license:mit"] | text-generation | false | null | null | gpt2 | 11,350,803 | 164 | transformers | 3 | ---<br>language: en<br>tags: [exbert]<br>license: mit<br>---<br># GPT-2<br>Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large<br>Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in<br>[this paper](https://d4mucfpksywv.cloudfront.net/better... |
| distilbert-base-uncased | 043235d6088ecd3dd5fb5ca3592b6913fd516027 | 2022-05-31T19:08:36.000Z | ["pytorch", "tf", "jax", "rust", "distilbert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible"] | fill-mask | false | null | null | distilbert-base-uncased | 11,250,037 | 70 | transformers | 4 | ---<br>language: en<br>tags: [exbert]<br>license: apache-2.0<br>datasets: [bookcorpus, wikipedia]<br>---<br># DistilBERT base model (uncased)<br>This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was<br>introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the disti... |
| Jean-Baptiste/camembert-ner | dbec8489a1c44ecad9da8a9185115bccabd799fe | 2022-04-04T01:13:33.000Z | ["pytorch", "camembert", "token-classification", "fr", "dataset:Jean-Baptiste/wikiner_fr", "transformers", "autotrain_compatible"] | token-classification | false | Jean-Baptiste | null | Jean-Baptiste/camembert-ner | 9,833,060 | 11 | transformers | 5 | ---<br>language: fr<br>datasets: [Jean-Baptiste/wikiner_fr]<br>widget:<br>- text: "Je m'appelle jean-baptiste et je vis à montréal"<br>- text: "george washington est allé à washington"<br>---<br># camembert-ner: model fine-tuned from camemBERT for NER task.<br>## Introduction<br>[camembert-ner] is a NER model that was fine-tuned from camemBER... |
| bert-base-cased | a8d257ba9925ef39f3036bfc338acf5283c512d9 | 2021-09-06T08:07:18.000Z | ["pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible"] | fill-mask | false | null | null | bert-base-cased | 7,598,326 | 30 | transformers | 6 | ---<br>language: en<br>tags: [exbert]<br>license: apache-2.0<br>datasets: [bookcorpus, wikipedia]<br>---<br># BERT base model (cased)<br>Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in<br>[this paper](https://arxiv.org/abs/1810.04805) and first released in<br>[this repository](https:... |
| roberta-base | 251c3c36356d3ad6845eb0554fdb9703d632c6cc | 2021-07-06T10:34:50.000Z | ["pytorch", "tf", "jax", "rust", "roberta", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1907.11692", "arxiv:1806.02847", "transformers", "exbert", "license:mit", "autotrain_compatible"] | fill-mask | false | null | null | roberta-base | 7,254,067 | 45 | transformers | 7 | ---<br>language: en<br>tags: [exbert]<br>license: mit<br>datasets: [bookcorpus, wikipedia]<br>---<br># RoBERTa base model<br>Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in<br>[this paper](https://arxiv.org/abs/1907.11692) and first released in<br>[this repository](https://github.com... |
| SpanBERT/spanbert-large-cased | a49cba45de9565a5d3e7b089a94dbae679e64e79 | 2021-05-19T11:31:33.000Z | ["pytorch", "jax", "bert", "transformers"] | null | false | SpanBERT | null | SpanBERT/spanbert-large-cased | 7,120,559 | 3 | transformers | 8 | Entry not found |
| xlm-roberta-base | f6d161e8f5f6f2ed433fb4023d6cb34146506b3f | 2022-06-06T11:40:43.000Z | ["pytorch", "tf", "jax", "xlm-roberta", "fill-mask", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha"... | fill-mask | false | null | null | xlm-roberta-base | 6,960,013 | 42 | transformers | 9 | ---<br>tags: [exbert]<br>language: [multilingual, af, am, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fr, fy, ga, gd, gl, gu, ha, he, hi, hr, hu, hy, id, is, it, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lo, lt, lv, mg, mk, ml, mn, ... |
| distilbert-base-uncased-finetuned-sst-2-english | 00c3f1ef306e837efb641eaca05d24d161d9513c | 2022-07-22T08:00:55.000Z | ["pytorch", "tf", "rust", "distilbert", "text-classification", "en", "dataset:sst2", "dataset:glue", "transformers", "license:apache-2.0", "model-index"] | text-classification | false | null | null | distilbert-base-uncased-finetuned-sst-2-english | 5,401,984 | 77 | transformers | 10 | ---<br>language: en<br>license: apache-2.0<br>datasets: [sst2, glue]<br>model-index:<br>- name: distilbert-base-uncased-finetuned-sst-2-english<br>results:<br>- task:<br>type: text-classification<br>name: Text Classification<br>dataset:<br>name: glue<br>type: glue<br>config: sst2<br>split: validation<br>metrics:<br>... |
| distilroberta-base | c1149320821601524a8d373726ed95bbd2bc0dc2 | 2022-07-22T08:13:21.000Z | ["pytorch", "tf", "jax", "rust", "roberta", "fill-mask", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible"] | fill-mask | false | null | null | distilroberta-base | 5,192,102 | 21 | transformers | 11 | ---<br>language: en<br>tags: [exbert]<br>license: apache-2.0<br>datasets: [openwebtext]<br>---<br># Model Card for DistilRoBERTa base<br># Table of Contents<br>1. [Model Details](#model-details)<br>2. [Uses](#uses)<br>3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)<br>4. [Training Details](#training-details)<br>5. [Evaluation](#evaluat... |
| distilgpt2 | ca98be8f8f0994e707b944a9ef55e66fbcf9e586 | 2022-07-22T08:12:56.000Z | ["pytorch", "tf", "jax", "tflite", "rust", "gpt2", "text-generation", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:2201.08542", "arxiv:2203.12574", "arxiv:1910.09700", "arxiv:1503.02531", "transformers", "exbert", "license:apache-2.0", "model-index", "co2_eq_emissions"... | text-generation | false | null | null | distilgpt2 | 4,525,173 | 77 | transformers | 12 | ---<br>language: en<br>tags: [exbert]<br>license: apache-2.0<br>datasets: [openwebtext]<br>model-index:<br>- name: distilgpt2<br>results:<br>- task:<br>type: text-generation<br>name: Text Generation<br>dataset:<br>type: wikitext<br>name: WikiText-103<br>metrics:<br>- type: perplexity<br>name: Perplexity<br>... |
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 97f7dcbdd6ab58fe7f44368c795fc5200b48fcbe | 2021-08-05T08:39:01.000Z | ["pytorch", "jax", "bert", "text-classification", "transformers", "license:apache-2.0"] | text-classification | false | cross-encoder | null | cross-encoder/ms-marco-MiniLM-L-12-v2 | 3,951,063 | 10 | transformers | 13 | ---<br>license: apache-2.0<br>---<br># Cross-Encoder for MS Marco<br>This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.<br>The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch).... |
| albert-base-v2 | 51dbd9db43a0c6eba97f74b91ce26fface509e0b | 2021-08-30T12:04:48.000Z | ["pytorch", "tf", "jax", "rust", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible"] | fill-mask | false | null | null | albert-base-v2 | 3,862,051 | 15 | transformers | 14 | ---<br>language: en<br>license: apache-2.0<br>datasets: [bookcorpus, wikipedia]<br>---<br># ALBERT Base v2<br>Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in<br>[this paper](https://arxiv.org/abs/1909.11942) and first released in<br>[this repository](https://github.com/google-rese... |
| bert-base-chinese | 38fda776740d17609554e879e3ac7b9837bdb5ee | 2022-07-22T08:09:06.000Z | ["pytorch", "tf", "jax", "bert", "fill-mask", "zh", "transformers", "autotrain_compatible"] | fill-mask | false | null | null | bert-base-chinese | 3,660,463 | 107 | transformers | 15 | ---<br>language: zh<br>---<br># Bert-base-chinese<br>## Table of Contents<br>- [Model Details](#model-details)<br>- [Uses](#uses)<br>- [Risks, Limitations and Biases](#risks-limitations-and-biases)<br>- [Training](#training)<br>- [Evaluation](#evaluation)<br>- [How to Get Started With the Model](#how-to-get-started-with-the-model)<br># Model Detai... |
| bert-base-multilingual-cased | aff660c4522e466f4d0de19eaf94f91e4e2e7375 | 2021-05-18T16:18:16.000Z | ["pytorch", "tf", "jax", "bert", "fill-mask", "multilingual", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible"] | fill-mask | false | null | null | bert-base-multilingual-cased | 3,089,919 | 40 | transformers | 16 | ---<br>language: multilingual<br>license: apache-2.0<br>datasets: [wikipedia]<br>---<br># BERT multilingual base model (cased)<br>Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.<br>It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released ... |
| xlm-roberta-large-finetuned-conll03-english | 33a83d9855a119c0453ce450858c07835a0bdbed | 2022-07-22T08:04:08.000Z | ["pytorch", "rust", "xlm-roberta", "token-classification", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", ... | token-classification | false | null | null | xlm-roberta-large-finetuned-conll03-english | 2,851,282 | 23 | transformers | 17 | ---<br>language: [multilingual, af, am, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fr, fy, ga, gd, gl, gu, ha, he, hi, hr, hu, hy, id, is, it, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lo, lt, lv, mg, mk, ml, mn, mr, ms, my, ... |
| tals/albert-xlarge-vitaminc-mnli | 4c79eb5353f6104eb148d9221560c913f45677c7 | 2022-06-24T01:33:47.000Z | ["pytorch", "tf", "albert", "text-classification", "python", "dataset:fever", "dataset:glue", "dataset:multi_nli", "dataset:tals/vitaminc", "transformers"] | text-classification | false | tals | null | tals/albert-xlarge-vitaminc-mnli | 2,529,752 | null | transformers | 18 | ---<br>language: python<br>datasets: [fever, glue, multi_nli, tals/vitaminc]<br>---<br># Details<br>Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL '21).<br>For more details see: https://github.com/TalSchuster/VitaminC<br>When ... |
| bert-large-uncased | 3835a195d41f7ddc47d5ecab84b64f71d6f144e9 | 2021-05-18T16:40:29.000Z | ["pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible"] | fill-mask | false | null | null | bert-large-uncased | 2,362,221 | 9 | transformers | 19 | ---<br>language: en<br>license: apache-2.0<br>datasets: [bookcorpus, wikipedia]<br>---<br># BERT large model (uncased)<br>Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in<br>[this paper](https://arxiv.org/abs/1810.04805) and first released in<br>[this repository](https://github.com... |
| valhalla/t5-small-qa-qg-hl | a9d81e686f2169360fd59d8329235d3c4ba74f4f | 2021-06-23T14:42:41.000Z | ["pytorch", "jax", "t5", "text2text-generation", "dataset:squad", "arxiv:1910.10683", "transformers", "question-generation", "license:mit", "autotrain_compatible"] | text2text-generation | false | valhalla | null | valhalla/t5-small-qa-qg-hl | 2,171,047 | 5 | transformers | 20 | ---<br>datasets: [squad]<br>tags: [question-generation]<br>widget:<br>- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"<br>- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"<br>license: mit<br>---<br>## T5 for multi-task QA and QG<br>This is multi-... |
| google/t5-v1_1-xl | a9e51c46bd6f3893213c51edf9498be6f0426797 | 2020-11-19T19:55:34.000Z | ["pytorch", "tf", "t5", "text2text-generation", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "transformers", "license:apache-2.0", "autotrain_compatible"] | text2text-generation | false | google | null | google/t5-v1_1-xl | 1,980,571 | 3 | transformers | 21 | ---<br>language: en<br>datasets: [c4]<br>license: apache-2.0<br>---<br>[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1<br>## Version 1.1<br>[T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the f... |
| sentence-transformers/all-MiniLM-L6-v2 | 717413c64de70e37b55cf53c9cdff0e2d331fac3 | 2022-07-11T21:08:45.000Z | ["pytorch", "tf", "bert", "feature-extraction", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:MS Marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_... | sentence-similarity | false | sentence-transformers | null | sentence-transformers/all-MiniLM-L6-v2 | 1,933,749 | 60 | sentence-transformers | 22 | ---<br>pipeline_tag: sentence-similarity<br>tags: [sentence-transformers, feature-extraction, sentence-similarity]<br>language: en<br>license: apache-2.0<br>datasets: [s2orc, flax-sentence-embeddings/stackexchange_xml, MS Marco, gooaq, yahoo_answers_topics, code_search_net, search_qa, eli5, snli, multi_nli, wikihow, nat... |
| sentence-transformers/paraphrase-MiniLM-L6-v2 | 68b97aaedb0c72be3c88c1af64296b3bbb8001fa | 2022-06-15T18:39:43.000Z | ["pytorch", "tf", "bert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0"] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/paraphrase-MiniLM-L6-v2 | 1,710,481 | 16 | sentence-transformers | 23 | ---<br>pipeline_tag: sentence-similarity<br>license: apache-2.0<br>tags: [sentence-transformers, feature-extraction, sentence-similarity, transformers]<br>---<br># sentence-transformers/paraphrase-MiniLM-L6-v2<br>This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dens... |
| t5-small | d78aea13fa7ecd06c29e3e46195d6341255065d5 | 2022-07-22T08:11:14.000Z | ["pytorch", "tf", "jax", "rust", "t5", "text2text-generation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "transformers", ... | translation | false | null | null | t5-small | 1,707,833 | 20 | transformers | 24 | ---<br>language: [en, fr, ro, de]<br>datasets: [c4]<br>tags: [summarization, translation]<br>license: apache-2.0<br>---<br># Model Card for T5 Small<br> after being trained on the [MultiNLI (MNLI)](https://huggingface.co/da... |
| cardiffnlp/twitter-xlm-roberta-base-sentiment | f3e34b6c30bf27b6649f72eca85d0bbe79df1e55 | 2022-06-22T19:15:32.000Z | ["pytorch", "tf", "xlm-roberta", "text-classification", "multilingual", "arxiv:2104.12250", "transformers"] | text-classification | false | cardiffnlp | null | cardiffnlp/twitter-xlm-roberta-base-sentiment | 1,479,744 | 25 | transformers | 26 | ---<br>language: multilingual<br>widget:<br>- text: "🤗"<br>- text: "T'estimo! ❤️"<br>- text: "I love you!"<br>- text: "I hate you 🤮"<br>- text: "Mahal kita!"<br>- text: "사랑해!"<br>- text: "난 너가 싫어"<br>- text: "😍😍😍"<br>---<br># twitter-XLM-roBERTa-base for Sentiment Analysis<br>This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and ... |
| roberta-large | 619fd8c2ca2bc7ac3959b7f71b6c426c897ba407 | 2021-05-21T08:57:02.000Z | ["pytorch", "tf", "jax", "roberta", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1907.11692", "arxiv:1806.02847", "transformers", "exbert", "license:mit", "autotrain_compatible"] | fill-mask | false | null | null | roberta-large | 1,479,252 | 39 | transformers | 27 | ---<br>language: en<br>tags: [exbert]<br>license: mit<br>datasets: [bookcorpus, wikipedia]<br>---<br># RoBERTa large model<br>Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in<br>[this paper](https://arxiv.org/abs/1907.11692) and first released in<br>[this repository](htt... |
| DeepPavlov/rubert-base-cased-conversational | 645946ce91842a52eaacb2705c77e59194145ffa | 2021-11-08T13:06:54.000Z | ["pytorch", "jax", "bert", "feature-extraction", "ru", "transformers"] | feature-extraction | false | DeepPavlov | null | DeepPavlov/rubert-base-cased-conversational | 1,418,924 | 5 | transformers | 28 | ---<br>language: [ru]<br>---<br># rubert-base-cased-conversational<br>Conversational RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters) was trained on OpenSubtitles[1], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus[2]. We assembled a new vocabulary f... |
| microsoft/codebert-base | 3b0952feddeffad0063f274080e3c23d75e7eb39 | 2022-02-11T19:59:44.000Z | ["pytorch", "tf", "jax", "rust", "roberta", "feature-extraction", "arxiv:2002.08155", "transformers"] | feature-extraction | false | microsoft | null | microsoft/codebert-base | 1,347,269 | 30 | transformers | 29 | ## CodeBERT-base<br>Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155).<br>### Training Data<br>The model is trained on bi-modal data (documents & code) of [CodeSearchNet](https://github.com/github/CodeSearchNet)<br>### Training Objective<br>This model is i... |
| ProsusAI/finbert | 5ea63b3d0c737ad6f06e061d9af36b1f7bbd1a4b | 2022-06-03T06:34:37.000Z | ["pytorch", "tf", "jax", "bert", "text-classification", "en", "arxiv:1908.10063", "transformers", "financial-sentiment-analysis", "sentiment-analysis"] | text-classification | false | ProsusAI | null | ProsusAI/finbert | 1,254,493 | 81 | transformers | 30 | ---<br>language: "en"<br>tags: [financial-sentiment-analysis, sentiment-analysis]<br>widget:<br>- text: "Stocks rallied and the British pound gained."<br>---<br>FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financi... |
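
The rows above are a download-ranked preview of a much larger model-metadata table (`__index_level_0__` runs to 38.5k). Below is a minimal sketch of how such a table could be loaded and queried, assuming it is published on the Hugging Face Hub; the repo id `user/models-metadata` is a hypothetical placeholder, not this dataset's actual name.

```python
# Minimal sketch, assuming the metadata table is published on the Hub;
# "user/models-metadata" is a hypothetical placeholder repo id.
from datasets import load_dataset

ds = load_dataset("user/models-metadata", split="train")
df = ds.to_pandas()

# Reproduce the preview's ordering: most-downloaded models first.
top = (
    df[["modelId", "pipeline_tag", "downloads", "likes", "library_name"]]
    .sort_values("downloads", ascending=False)
    .head(31)
)
print(top.to_string(index=False))

# Example filter over the flattened tags sequence:
# fill-mask models carrying a "license:apache-2.0" tag.
fill_mask = df[
    (df["pipeline_tag"] == "fill-mask")
    & df["tags"].apply(lambda t: "license:apache-2.0" in set(t))
]
print(len(fill_mask), "fill-mask models tagged license:apache-2.0")
```

Since `downloads` and `likes` are nullable float64 columns, `sort_values` pushes missing values to the end by default, which matches how the download-ranked preview presents the rows.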