Embedding Papers
Improving Text Embeddings with Large Language Models (arXiv:2401.00368)
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (arXiv:1810.04805)
Metadata Might Make Language Models Better (arXiv:2211.10086)
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers (arXiv:2310.03686)
German FinBERT: A German Pre-trained Language Model (arXiv:2311.08793)
Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens (arXiv:2401.17377)
Sequence to Sequence Learning with Neural Networks (arXiv:1409.3215)
RoFormer: Enhanced Transformer with Rotary Position Embedding (arXiv:2104.09864)
Swivel: Improving Embeddings by Noticing What's Missing (arXiv:1602.02215)
Distributed Representations of Words and Phrases and their Compositionality (arXiv:1310.4546)
A Neural Conversational Model (arXiv:1506.05869)