---
license: apache-2.0
datasets:
  - allenai/SciRIFF-train-mix
  - allenai/tulu-v2-sft-mixture
language:
  - en
---

# Model Card for SciTulu 7B

SciTulu is a collection of instruction-following language models targeting scientific literature understanding use cases. Starting from the Tulu v2 7B model, SciTulu is trained on a mix of science-specific demonstrations from the SciRIFF dataset, together with general-domain instructions from the Tulu v2 SFT mix. SciTulu 7B achieves a 28.1% average improvement over Tulu v2 7B on nine held-out scientific literature understanding tasks. More information can be found in our preprint: SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature.
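The model can be used with the Hugging Face `transformers` library. The sketch below is a minimal example, assuming the Hub ID `allenai/scitulu-7b` and the Tulu v2-style prompt format (`<|user|>` / `<|assistant|>`); check the tokenizer's chat template in this repository to confirm both before relying on them.

```python
# Minimal inference sketch for SciTulu 7B.
# Assumptions: the model is hosted as "allenai/scitulu-7b" and follows the
# Tulu v2 prompt format with <|user|> and <|assistant|> turn markers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/scitulu-7b"  # assumed Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Tulu v2-style prompt: user turn followed by an assistant header.
prompt = (
    "<|user|>\n"
    "Summarize the main contribution of the following abstract: ...\n"
    "<|assistant|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```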

Training and evaluation code for SciTulu is available in our GitHub repository: https://github.com/allenai/SciRIFF.

See the Tulu model card for more information on potential risks, biases, and limitations.