Collections including paper arxiv:2312.12742

- Order Matters in the Presence of Dataset Imbalance for Multilingual Learning
  Paper • 2312.06134 • Published • 2
- Efficient Monotonic Multihead Attention
  Paper • 2312.04515 • Published • 6
- Contrastive Decoding Improves Reasoning in Large Language Models
  Paper • 2309.09117 • Published • 37
- Exploring Format Consistency for Instruction Tuning
  Paper • 2307.15504 • Published • 7

- aMUSEd: An Open MUSE Reproduction
  Paper • 2401.01808 • Published • 28
- From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations
  Paper • 2401.01885 • Published • 27
- SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity
  Paper • 2401.00604 • Published • 4
- LARP: Language-Agent Role Play for Open-World Games
  Paper • 2312.17653 • Published • 30

- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
  Paper • 2312.15166 • Published • 56
- PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
  Paper • 2312.12456 • Published • 41
- Cached Transformers: Improving Transformers with Differentiable Memory Cache
  Paper • 2312.12742 • Published • 12
- Mini-GPTs: Efficient Large Language Models through Contextual Pruning
  Paper • 2312.12682 • Published • 8

- Cached Transformers: Improving Transformers with Differentiable Memory Cache
  Paper • 2312.12742 • Published • 12
- ProTIP: Progressive Tool Retrieval Improves Planning
  Paper • 2312.10332 • Published • 7
- Paloma: A Benchmark for Evaluating Language Model Fit
  Paper • 2312.10523 • Published • 12
- The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale
  Paper • 2406.17557 • Published • 86

- togethercomputer/StripedHyena-Hessian-7B
  Text Generation • Updated • 100 • 62
- Zebra: Extending Context Window with Layerwise Grouped Local-Global Attention
  Paper • 2312.08618 • Published • 11
- SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention
  Paper • 2312.07987 • Published • 40
- LLM360: Towards Fully Transparent Open-Source LLMs
  Paper • 2312.06550 • Published • 56

- Trellis Networks for Sequence Modeling
  Paper • 1810.06682 • Published • 1
- ProSG: Using Prompt Synthetic Gradients to Alleviate Prompt Forgetting of RNN-like Language Models
  Paper • 2311.01981 • Published • 1
- Gated recurrent neural networks discover attention
  Paper • 2309.01775 • Published • 7
- Inverse Approximation Theory for Nonlinear Recurrent Neural Networks
  Paper • 2305.19190 • Published • 1

- Efficient LLM Inference on CPUs
  Paper • 2311.00502 • Published • 7
- Exponentially Faster Language Modelling
  Paper • 2311.10770 • Published • 118
- Cached Transformers: Improving Transformers with Differentiable Memory Cache
  Paper • 2312.12742 • Published • 12
- LLM in a flash: Efficient Large Language Model Inference with Limited Memory
  Paper • 2312.11514 • Published • 258

- The Impact of Depth and Width on Transformer Language Model Generalization
  Paper • 2310.19956 • Published • 9
- Retentive Network: A Successor to Transformer for Large Language Models
  Paper • 2307.08621 • Published • 170
- RWKV: Reinventing RNNs for the Transformer Era
  Paper • 2305.13048 • Published • 14
- Attention Is All You Need
  Paper • 1706.03762 • Published • 44

- Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering
  Paper • 2204.04581 • Published • 1
- Retrieval-Augmented Multimodal Language Modeling
  Paper • 2211.12561 • Published • 1
- When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories
  Paper • 2212.10511 • Published • 1
- Memorizing Transformers
  Paper • 2203.08913 • Published • 2