- RefalMachine/RuadaptQwen2.5-32B-Pro-Beta
  Text Generation • 33B • Updated • 318 • 11
- RefalMachine/RuadaptQwen2.5-7B-Lite-Beta
  Text Generation • 8B • Updated • 37 • 10
- RefalMachine/RuadaptQwen2.5-14B-Instruct
  Text Generation • 15B • Updated • 466 • 5
- msu-rcc-lair/RuadaptQwen2.5-32B-Instruct
  Text Generation • 33B • Updated • 170 • 48
Collections
- Dissociating language and thought in large language models: a cognitive perspective
  Paper • 2301.06627 • Published • 1
- A Latent Space Theory for Emergent Abilities in Large Language Models
  Paper • 2304.09960 • Published • 3
- Are Emergent Abilities of Large Language Models a Mirage?
  Paper • 2304.15004 • Published • 8
- Do LLMs Really Adapt to Domains? An Ontology Learning Perspective
  Paper • 2407.19998 • Published • 1