
Quantization made by Richard Erkhov.


Leia-Swallow-7b - GGUF

Original model description:

license: apache-2.0
language:
- ja

Leia-Swallow-7B

LEIA is a training technique for autoregressive LLMs that effectively improves their performance in languages other than English by enhancing cross-lingual knowledge transfer from English to a target language. This model is constructed by applying LEIA to Swallow, a Japanese-English bilingual LLM based on LLaMA 2. The model achieves enhanced performance on six Japanese question-answering benchmarks, as reported below.

Please refer to our paper or blog post (in Japanese) for further technical details.

Model List

Empirical Results

The model is assessed on the following six question-answering benchmarks:

  • X-CODAH
  • X-CSQA
  • JCommonsenseQA
  • NIILC
  • JEMHopQA
  • JAQKET v2
| Model   | X-CODAH | X-CSQA | JCommonsenseQA | NIILC | JEMHopQA | JAQKET v2 |
|---------|---------|--------|----------------|-------|----------|-----------|
| Swallow | 42.0    | 41.0   | 80.3           | 59.5  | 50.8     | 86.2      |
| LEIA    | 42.7    | 42.4   | 80.6           | 60.3  | 54.7     | 86.5      |

For further details of this experiment, please refer to our paper.

Contributors

  • Ikuya Yamada (Studio Ousia, RIKEN)
  • Ryokan Ri (LY Corporation, SB Intuitions)
GGUF details

  • Model size: 6.83B params
  • Architecture: llama
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
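A minimal sketch of fetching one quantized file and running it locally with llama.cpp. The repository ID and `.gguf` filename below are placeholders, not confirmed names from this repo; substitute the actual values from the repository's file listing.

```shell
# Download a single quantized file (placeholder repo ID and filename --
# replace both with the real entries from this repository).
huggingface-cli download <repo-id> <Leia-Swallow-7b.Q4_K_M.gguf> --local-dir .

# Run it with the llama.cpp CLI. Lower-bit quantizations are smaller and
# faster but lose more accuracy; 4- and 5-bit files are a common middle
# ground for 7B models.
./llama-cli -m Leia-Swallow-7b.Q4_K_M.gguf -p "日本の首都はどこですか？" -n 64
```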
