Collections
Collections including paper arxiv:2404.06773, "Adapting LLaMA Decoder to Vision Transformer"

Collection 1
- LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
  Paper • 2404.05961 • Published • 64
- OmniFusion Technical Report
  Paper • 2404.06212 • Published • 74
- Adapting LLaMA Decoder to Vision Transformer
  Paper • 2404.06773 • Published • 17
- BRAVE: Broadening the visual encoding of vision-language models
  Paper • 2404.07204 • Published • 18

Collection 2
- Realism in Action: Anomaly-Aware Diagnosis of Brain Tumors from Medical Images Using YOLOv8 and DeiT
  Paper • 2401.03302 • Published • 1
- MLP Can Be A Good Transformer Learner
  Paper • 2404.05657 • Published • 1
- Detecting and recognizing characters in Greek papyri with YOLOv8, DeiT and SimCLR
  Paper • 2401.12513 • Published • 1
- DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets
  Paper • 2404.02900 • Published • 1

Collection 3
- Event Camera Demosaicing via Swin Transformer and Pixel-focus Loss
  Paper • 2404.02731 • Published • 1
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
  Paper • 2309.12284 • Published • 18
- RALL-E: Robust Codec Language Modeling with Chain-of-Thought Prompting for Text-to-Speech Synthesis
  Paper • 2404.03204 • Published • 7
- Adapting LLaMA Decoder to Vision Transformer
  Paper • 2404.06773 • Published • 17

Collection 4
- CameraCtrl: Enabling Camera Control for Text-to-Video Generation
  Paper • 2404.02101 • Published • 22
- Adapting LLaMA Decoder to Vision Transformer
  Paper • 2404.06773 • Published • 17
- Interactive3D: Create What You Want by Interactive 3D Generation
  Paper • 2404.16510 • Published • 18
- Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B
  Paper • 2406.07394 • Published • 22