Configurable Foundation Models: Building LLMs from a Modular Perspective (arXiv:2409.02877, published Sep 4, 2024)
Dolphin: Long Context as a New Modality for Energy-Efficient On-Device Language Models (arXiv:2408.15518, published Aug 28, 2024)
Scaling Synthetic Data Creation with 1,000,000,000 Personas (arXiv:2406.20094, published Jun 28, 2024)
Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing (arXiv:2406.08464, published Jun 12, 2024)
Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated (arXiv:2407.10969, published Jul 15, 2024)
ReLU^2 Wins: Discovering Efficient Activation Functions for Sparse LLMs (arXiv:2402.03804, published Feb 6, 2024)
PowerInfer-2: Fast Large Language Model Inference on a Smartphone (arXiv:2406.06282, published Jun 10, 2024)
Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters (arXiv:2406.05955, published Jun 10, 2024)
Physics of Language Models: Part 3.2, Knowledge Manipulation (arXiv:2309.14402, published Sep 25, 2023)
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (arXiv:2402.17764, published Feb 27, 2024)
BrockportGPT v2 (collection): the improved version of BrockportGPT v1, with enhanced, more usable datasets. See https://github.com/msaad02/honors-thesis (7 items, updated Mar 18)
PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models (arXiv:2312.13964, published Dec 21, 2023)