---
language:
- it
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
- unsloth
- llama
- llama3.1
- trl
- word-game
- rebus
- italian
- word-puzzle
- crossword
datasets:
- gsarti/eureka-rebus
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
model-index:
- name: gsarti/llama-3.1-8b-rebus-solver-fp16
  results:
  - task:
      type: verbalized-rebus-solving
      name: Verbalized Rebus Solving
    dataset:
      type: gsarti/eureka-rebus
      name: EurekaRebus
      config: llm_sft
      split: test
      revision: 0f24ebc3b66cd2f8968077a5eb058be1d5af2f05
    metrics:
    - type: exact_match
      value: 0.59
      name: First Pass Exact Match
    - type: exact_match
      value: 0.56
      name: Solution Exact Match
---
# LLaMA-3.1 8B Verbalized Rebus Solver - PEFT Adapters 🇮🇹
This model is a parameter-efficient fine-tuned version of LLaMA-3.1 8B trained for verbalized rebus solving in Italian, as part of the [release](https://huggingface.co/collections/gsarti/verbalized-rebus-clic-it-2024-66ab8f11cb04e68bdf4fb028) for our paper [Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses](https://arxiv.org/abs/2408.00584). The task of verbalized rebus solving consists of converting an encrypted sequence of letters and crossword definitions into a solution phrase matching the word lengths specified in the solution key. An example is provided below.
The model was trained in 4-bit precision for 5070 steps on the verbalized subset of [EurekaRebus](https://huggingface.co/datasets/gsarti/eureka-rebus) using QLoRA via [Unsloth](https://github.com/unslothai/unsloth) and [TRL](https://github.com/huggingface/trl). This repository contains PEFT-compatible adapters saved throughout training. Use the `revision="<GIT_HASH>"` parameter in `from_pretrained` to load mid-training adapter checkpoints, as in the sketch below.
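For illustration, here is a minimal sketch (not an official snippet from this repository) of loading an intermediate checkpoint with standard [PEFT](https://github.com/huggingface/peft)/Transformers APIs; replace `<GIT_HASH>` with an actual commit hash from this repository's revision history:
```python
# Minimal sketch (assumes standard PEFT/Transformers APIs).
# Replace <GIT_HASH> with a real commit hash from this repository.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "gsarti/llama-3.1-8b-rebus-solver-adapters",
    revision="<GIT_HASH>",  # mid-training adapter checkpoint
    load_in_4bit=True,      # may be deprecated in favor of BitsAndBytesConfig in recent transformers
)
tokenizer = AutoTokenizer.from_pretrained("gsarti/llama-3.1-8b-rebus-solver-adapters")
```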
We also provide [FP16 merged](https://huggingface.co/gsarti/llama-3.1-8b-rebus-solver-fp16) and [8-bit GGUF](https://huggingface.co/gsarti/llama-3.1-8b-rebus-solver-Q8_0-GGUF) versions of this model for analysis and local execution.
## Using the Model
The following example shows how to perform inference using Unsloth:
```python
# With Unsloth (efficient, requires a GPU)
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "gsarti/llama-3.1-8b-rebus-solver-adapters",
    max_seq_length = 1248,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

# Inference
verbalized_rebus = "[Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]"
solution_key = "1 ' 5 6 5 3 3 1 14"
template = """<|begin_of_text|><|start_header_id|>user<|end_header_id|>

Risolvi gli indizi tra parentesi per ottenere una prima lettura, e usa la chiave di lettura per ottenere la soluzione del rebus.
Rebus: {rebus}
Chiave risolutiva: {key}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
prompt = template.format(rebus=verbalized_rebus, key=solution_key)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
outputs = model.generate(input_ids = input_ids, max_new_tokens = 500, use_cache = True)
model_generations = tokenizer.batch_decode(outputs)
print(model_generations[0])
# Procediamo alla risoluzione del rebus passo per passo:
# - [Materiale espulso dai vulcani] = lava
# - R O = R O
# - [Strumento del calzolaio] = lesina
# - [Si trovano ai lati del bacino] = anche
# - C I = C I
# - [Si ingrassano con la polenta] = oche
# - E I N = E I N
# - [Contiene scorte di cibi] = dispensa
# - B = B
# - [Isola in francese] = ile
#
# Prima lettura: lava R O lesina anche C I oche E I N dispensa B ile
#
# Ora componiamo la soluzione seguendo la chiave risolutiva:
# 1 = L
# ' = '
# 5 = avaro
# 6 = lesina
# 5 = anche
# 3 = ciò
# 3 = che
# 1 = è
# 14 = indispensabile
#
# Soluzione: L'avaro lesina anche ciò che è indispensabile
```
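To make the key's role concrete, the following illustrative helper (a hypothetical sketch, not part of the official code release) shows how a solution key segments the concatenated first-pass letters; diacritics (e.g. `ciò`, `è`) must still be restored by the solver:
```python
# Hypothetical helper (not from the official code release): split the
# concatenated first-pass letters according to the solution key.
def apply_key(first_pass: str, key: str) -> str:
    letters = first_pass.replace(" ", "").lower()
    pieces, i = [], 0
    for token in key.split():
        if token == "'":
            pieces[-1] += "'"  # apostrophe attaches to the previous segment
            continue
        n = int(token)
        segment = letters[i : i + n]
        i += n
        if pieces and pieces[-1].endswith("'"):
            pieces[-1] += segment  # "l" + "'" + "avaro" -> "l'avaro"
        else:
            pieces.append(segment)
    return " ".join(pieces)

print(apply_key("lava R O lesina anche C I oche E I N dispensa B ile", "1 ' 5 6 5 3 3 1 14"))
# l'avaro lesina anche cio che e indispensabile (diacritics not restored)
```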
See the official [code release](https://github.com/gsarti/verbalized-rebus) for more examples.
### Local usage with Ollama
A ready-to-use local version of this model is hosted on the [Ollama Hub](https://ollama.com/gsarti/llama3.1-8b-rebus-solver) and can be used as follows:
```shell
ollama run gsarti/llama3.1-8b-rebus-solver "Rebus: [Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]\nChiave risolutiva: 1 ' 5 6 5 3 3 1 14"
```
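If you prefer scripting over the CLI, the same query can be sent through the [Ollama Python client](https://github.com/ollama/ollama-python) (`pip install ollama`); this sketch assumes a local Ollama server is running:
```python
# Sketch using the ollama Python client (assumes `ollama serve` is running
# and the model is available as shown above).
import ollama

prompt = (
    "Rebus: [Materiale espulso dai vulcani] R O [Strumento del calzolaio] "
    "[Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N "
    "[Contiene scorte di cibi] B [Isola in francese]\n"
    "Chiave risolutiva: 1 ' 5 6 5 3 3 1 14"
)
response = ollama.generate(model="gsarti/llama3.1-8b-rebus-solver", prompt=prompt)
print(response["response"])
```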
## Limitations
**Lexical overfitting**: As noted in the related publication, the model overfits the set of definitions and answers for first-pass words seen during training. As a result, words that were [explicitly withheld](https://huggingface.co/datasets/gsarti/eureka-rebus/blob/main/ood_words.txt) from the training set cause a significant performance drop when they appear as answers to definitions in verbalized rebuses. You can compare model performance between [in-domain](https://huggingface.co/datasets/gsarti/eureka-rebus/blob/main/id_test.jsonl) and [out-of-domain](https://huggingface.co/datasets/gsarti/eureka-rebus/blob/main/ood_test.jsonl) test examples to verify this limitation; a starting sketch is shown below.
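The following sketch uses the standard `huggingface_hub` download API (the file names match the links above) to fetch both test splits for such a comparison:
```python
# Sketch: download the in-domain and out-of-domain test splits to compare
# model behavior on held-out solution words.
import json
from huggingface_hub import hf_hub_download

for filename in ("id_test.jsonl", "ood_test.jsonl"):
    path = hf_hub_download("gsarti/eureka-rebus", filename, repo_type="dataset")
    with open(path, encoding="utf-8") as f:
        examples = [json.loads(line) for line in f]
    print(f"{filename}: {len(examples)} examples")
```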
## Model curators
For problems or updates on this model, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
## Citation Information
If you use this model in your work, please cite our paper as follows:
```bibtex
@article{sarti-etal-2024-rebus,
    title = "Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses",
    author = "Sarti, Gabriele and Caselli, Tommaso and Nissim, Malvina and Bisazza, Arianna",
    journal = "ArXiv",
    month = jul,
    year = "2024",
    volume = "abs/2408.00584",
    url = "https://arxiv.org/abs/2408.00584",
}
```
## Acknowledgements
We are grateful to the [Associazione Culturale "Biblioteca Enigmistica Italiana - G. Panini"](http://www.enignet.it/home) for making its rebus collection freely accessible on the [Eureka5 platform](http://www.eureka5.it).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)