---
language:
- es
license: apache-2.0
library_name: transformers
tags:
- clickbait
- noticia
- spanish
- summary
- summarization
base_model: openchat/openchat-3.5-0106
datasets:
- somosnlp/NoticIA-it
metrics:
- rouge
pipeline_tag: text-generation
widget:
- example_title: Summary Example
  messages:
  - role: user
    content: 'Ahora eres una Inteligencia Artificial experta en desmontar titulares
      sensacionalistas o clickbait. Tu tarea consiste en analizar noticias con titulares
      sensacionalistas y generar un resumen de una sola frase que revele la verdad
      detrás del titular.\nEste es el titular de la noticia: Le compra un abrigo a
      su abuela de 97 años y la reacción de esta es una fantasía\nEl titular plantea
      una pregunta o proporciona información incompleta. Debes buscar en el cuerpo
      de la noticia una frase que responda lo que se sugiere en el título. Siempre
      que puedas cita el texto original, especialmente si se trata de una frase que
      alguien ha dicho. Si citas una frase que alguien ha dicho, usa comillas para
      indicar que es una cita. Usa siempre las mínimas palabras posibles. No es necesario
      que la respuesta sea una oración completa. Puede ser sólo el foco de la pregunta.
      Recuerda responder siempre en Español.\nEste es el cuerpo de la noticia:\nLa
      usuaria de X @Kokreta1 ha relatado la conversación que ha tenido con su abuela
      de 97 años cuando le ha dado el abrigo que le ha comprado para su cumpleaños.\nTeniendo
      en cuenta la avanzada edad de la señora, la tuitera le ha regalado una prenda
      acorde a sus años, algo con lo que su yaya no ha estado de acuerdo.\nEl abrigo
      es de vieja, ha opinado la mujer cuando lo ha visto. Os juro que soy muy fan.
      Mañana vamos las dos (a por otro). Eso sí, la voy a llevar al Bershka, ha asegurado
      entre risas la joven.\nSegún la propia cadena de ropa, la cual pertenece a Inditex,
      su público se caracteriza por ser jóvenes atrevidos, conocedores de las últimas
      tendencias e interesados en la música, las redes sociales y las nuevas tecnologías,
      por lo que la gente mayor no suele llevar este estilo.\nLa inusual personalidad
      de la señora ha encantado a los usuarios de la red. Es por eso que el relato
      ha acumulado más de 1.000 me gusta y cerca de 100 retuits, además de una multitud
      de comentarios.\n'
---

<p align="center">
    <img src="https://huggingface.co/datasets/Iker/NoticIA/resolve/main/assets/logo.png" style="width: 50%;">
</p>
<h1 align="center">NoticIA-7B: A Model for Clickbait Article Summarization in Spanish.</h1>


- 📖 Model Card in Spanish: https://huggingface.co/somosnlp/NoticIA-7B/blob/main/README_es.md


## Model Details

### Model Description

We define a clickbait article as one that seeks to attract the reader's attention through curiosity. To do this, the headline poses a question or an incomplete, sensationalist, exaggerated, or misleading statement. The answer to the question generated by the headline usually does not appear until the end of the article, which is preceded by a large amount of irrelevant content. The goal is for the user to enter the website through the headline and then scroll to the end of the article, viewing as much advertising as possible. Clickbait articles tend to be of low quality and do not provide value to the reader beyond the initial curiosity. This phenomenon undermines public trust in news sources and negatively affects the advertising revenues of legitimate content creators, who could see their web traffic reduced.

We present a 7B parameter model, trained with the dataset [NoticIA-it](https://huggingface.co/datasets/somosnlp/NoticIA-it). This model is capable of generating concise and high-quality summaries of articles with clickbait headlines.

- **Developed by:** [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139)
- **Funded by:** SomosNLP, HuggingFace, [HiTZ Zentroa](https://www.hitz.eus/)
- **Model type:** Language model, instruction tuned
- **Language(s):** es-ES
- **License:** apache-2.0 
- **Fine-tuned from model:** [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
- **Dataset used:** https://huggingface.co/datasets/somosnlp/NoticIA-it

### Model Sources

- **💻 Repository:** https://github.com/ikergarcia1996/NoticIA
- **📖 Paper:** [NoticIA: A Clickbait Article Summarization Dataset in Spanish](https://arxiv.org/abs/2404.07611)
- **🤖 Dataset and Pre-Trained Models:** [https://huggingface.co/collections/Iker/noticia-and-clickbaitfighter-65fdb2f80c34d7c063d3e48e](https://huggingface.co/collections/Iker/noticia-and-clickbaitfighter-65fdb2f80c34d7c063d3e48e)
- **🔌 Demo:**  https://huggingface.co/spaces/somosnlp/NoticIA-demo
- **▶️ Video presentation (Spanish):** https://youtu.be/xc60K_NzUgk?si=QMqk6OzQZfKP1EUS
- **🐱‍💻 Hackathon #Somos600M:** https://somosnlp.org/hackathon


## Uses


This model is tailored for scientific research, particularly for evaluating the performance of task-specific models in contrast to using instruction-tuned models in zero-shot settings. It can also be used by individuals to summarize clickbait articles for personal use.

### Direct Use

- 📖 Summarization of clickbait articles
- 📈 Evaluation of language models in Spanish
- 📚 Development of new academic resources (e.g., synthetic data generation)
- 🎓 Any other academic research purpose


### Out-of-Scope Use

We prohibit the use of this model for any action that may harm the legitimacy or economic viability of legitimate and professional media outlets.

## Bias, Risks, and Limitations

The model has been primarily trained with Spanish news from Spain, and the annotators of the data are also from Spain. Therefore, we expect this model to be proficient with Spanish from Spain. However, we cannot assure that it will perform well with news from Latin America or news in other languages.


## How to Get Started with the Model

### Summarizing a clickbait article from the web

The following code shows an example of how to use the model to generate a summary from the URL of a clickbait article.


```python
import torch # pip install torch
from newspaper import Article # pip3 install newspaper3k
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig # pip install transformers
from transformers import BitsAndBytesConfig # pip install bitsandbytes

article_url ="https://www.huffingtonpost.es/virales/le-compra-abrigo-abuela-97nos-reaccion-fantasia.html"
article = Article(article_url)
article.download()
article.parse()
headline = article.title
body = article.text

def prompt(
    headline: str,
    body: str,
) -> str:
    """
    Generate the prompt for the model.

    Args:
        headline (`str`):
            The headline of the article.
        body (`str`):
            The body of the article.
    Returns:
        `str`: The formatted prompt.
    """

    return (
        f"Ahora eres una Inteligencia Artificial experta en desmontar titulares sensacionalistas o clickbait. "
        f"Tu tarea consiste en analizar noticias con titulares sensacionalistas y "
        f"generar un resumen de una sola frase que revele la verdad detrás del titular.\n"
        f"Este es el titular de la noticia: {headline}\n"
        f"El titular plantea una pregunta o proporciona información incompleta. "
        f"Debes buscar en el cuerpo de la noticia una frase que responda lo que se sugiere en el título. "
        f"Siempre que puedas cita el texto original, especialmente si se trata de una frase que alguien ha dicho. "
        f"Si citas una frase que alguien ha dicho, usa comillas para indicar que es una cita. "
        f"Usa siempre las mínimas palabras posibles. No es necesario que la respuesta sea una oración completa. "
        f"Puede ser sólo el foco de la pregunta. "
        f"Recuerda responder siempre en Español.\n"
        f"Este es el cuerpo de la noticia:\n"
        f"{body}\n"
    )

user_prompt = prompt(headline=headline, body=body)

tokenizer = AutoTokenizer.from_pretrained("somosnlp/NoticIA-7B")


quantization_config = BitsAndBytesConfig(
   load_in_4bit=True,
   bnb_4bit_compute_dtype=torch.bfloat16,
   bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "somosnlp/NoticIA-7B", torch_dtype=torch.bfloat16, device_map="auto", quantization_config=quantization_config,
)

formatted_prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    tokenize=False,
    add_generation_prompt=True,
)

model_inputs = tokenizer(
    [formatted_prompt], return_tensors="pt", add_special_tokens=False
)

model_output = model.generate(**model_inputs.to(model.device), generation_config=GenerationConfig(
  max_new_tokens=64,
  min_new_tokens=1,
  do_sample=False,
  num_beams=1,
  use_cache=True
))

summary = tokenizer.batch_decode(model_output, skip_special_tokens=True)[0]

print(summary.strip().split("\n")[-1]) # Get only the summary, without the prompt. 
```

### Performing inference on the NoticIA dataset

The following code shows an example of how to perform inference on an example from our dataset.

```python
import torch # pip install torch
from datasets import load_dataset # pip install datasets
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig # pip install transformers
from transformers import BitsAndBytesConfig # pip install bitsandbytes


dataset = load_dataset("somosnlp/NoticIA-it", split="test")

tokenizer = AutoTokenizer.from_pretrained("somosnlp/NoticIA-7B")

quantization_config = BitsAndBytesConfig(
   load_in_4bit=True,
   bnb_4bit_compute_dtype=torch.bfloat16,
   bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "somosnlp/NoticIA-7B", torch_dtype=torch.bfloat16, device_map="auto", quantization_config=quantization_config,
)

formatted_prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": dataset[0]["pregunta"]}],
    tokenize=False,
    add_generation_prompt=True,
)

model_inputs = tokenizer(
    [formatted_prompt], return_tensors="pt", add_special_tokens=False
)

model_output = model.generate(**model_inputs.to(model.device), generation_config=GenerationConfig(
  max_new_tokens=64,
  min_new_tokens=1,
  do_sample=False,
  num_beams=1,
  use_cache=True
))

summary = tokenizer.batch_decode(model_output, skip_special_tokens=True)[0]

print(summary.strip().split("\n")[-1]) # Get only the summary, without the prompt. 
```

## Training Details

### Training Data

We train the model with [NoticIA](https://huggingface.co/datasets/somosnlp/NoticIA-it), a dataset consisting of 850 Spanish news articles with clickbait headlines, each paired with high-quality, single-sentence generative summaries written by humans. This task demands advanced skills in text comprehension and summarization, challenging the ability of models to infer and connect various pieces of information to satisfy the user's informational curiosity generated by the clickbait headline.
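
To get a feel for the data, the snippet below loads one test example and prints the `pregunta` field, which contains the full instruction prompt (headline plus article body) used in the inference example above. Field names beyond `pregunta` are not listed here; the dataset card is the authoritative reference.

```python
from datasets import load_dataset  # pip install datasets

dataset = load_dataset("somosnlp/NoticIA-it", split="test")
example = dataset[0]

print(example.keys())             # list the available fields
print(example["pregunta"][:300])  # start of the full instruction prompt
```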


### Training Procedure

To train the model, we have developed our own training and annotation library: [https://github.com/ikergarcia1996/NoticIA](https://github.com/ikergarcia1996/NoticIA). This library utilizes 🤗 Transformers, 🤗 PEFT, Bitsandbytes, and Deepspeed.

For the hackathon, we decided to train a 7-billion-parameter model, since with 4-bit quantization it can run on consumer hardware. After analyzing the performance of a large number of LLMs, we chose [openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) for its high performance out of the box, with no further pretraining required. To minimally disturb the prior knowledge behind this performance, we used the *Low-Rank Adaptation* (LoRA) training technique (see the sketch below).

#### Training Hyperparameters


- **Training regime:** bfloat16
- **Training method:** LoRA + DeepSpeed ZeRO-3
- **Batch size:** 64
- **Sequence length:** 8192
- **Epochs:** 3
- **Optimizer:** AdamW
- **Software:** Hugging Face Transformers, PEFT, PyTorch, DeepSpeed

The exact training configuration is available at: https://huggingface.co/somosnlp/NoticIA-7B/blob/main/openchat-3.5-0106_LoRA.yaml
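
For reference, the sketch below shows the general shape of a LoRA setup with 🤗 PEFT. The rank, scaling factor, dropout, and target modules here are illustrative assumptions; the YAML file linked above is the authoritative configuration.

```python
import torch
from peft import LoraConfig, get_peft_model  # pip install peft
from transformers import AutoModelForCausalLM  # pip install transformers

model = AutoModelForCausalLM.from_pretrained(
    "openchat/openchat-3.5-0106", torch_dtype=torch.bfloat16, device_map="auto"
)

lora_config = LoraConfig(
    r=16,              # assumed rank; see the YAML for the value actually used
    lora_alpha=32,     # assumed scaling factor
    lora_dropout=0.05, # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trained
```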

## Evaluation


### Testing Data, Factors & Metrics

#### Testing Data

We use the Test split of the NoticIA-it dataset: https://huggingface.co/datasets/somosnlp/NoticIA-it

#### Prompts

The prompt used for training is the same as defined and explained at [https://huggingface.co/datasets/somosnlp/NoticIA-it](https://huggingface.co/datasets/somosnlp/NoticIA-it). The prompt is converted into the chat template specific to each model.

#### Metrics

As is customary in summarization tasks, we use the ROUGE scoring metric to automatically evaluate the summaries produced by the models. Our main metric is ROUGE-1, which considers whole words as basic units. To calculate the ROUGE score, we lowercase both summaries and remove punctuation marks. In addition to the ROUGE score, we also consider the average length of the summaries. For our task, we aim for the summaries to be concise, an aspect that the ROUGE score does not evaluate. Therefore, when evaluating models, we consider both the ROUGE-1 score and the average length of the summaries. Our goal is to find a model that achieves the highest possible ROUGE score with the shortest possible summary length, balancing quality and brevity.
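
The snippet below is a minimal sketch of this evaluation with the 🤗 Evaluate library, using toy prediction/reference strings and a simple ASCII-punctuation strip; the exact normalization and evaluation code live in the repository linked under Results.

```python
import string

import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")

def normalize(text: str) -> str:
    """Lowercase and strip punctuation before scoring, as described above."""
    return text.lower().translate(str.maketrans("", "", string.punctuation))

predictions = ["la abuela quiere otro abrigo del Bershka"]  # toy model output
references = ["quiere ir al Bershka a por otro abrigo"]     # toy human summary

scores = rouge.compute(
    predictions=[normalize(p) for p in predictions],
    references=[normalize(r) for r in references],
)
print(scores["rouge1"])  # our main metric

# ROUGE does not reward brevity, so we also track average summary length:
print(sum(len(p.split()) for p in predictions) / len(predictions))
```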

### Results

We have evaluated the best currently available instruction-tuned language models, and we have also included the performance obtained by a human annotator. The code to reproduce the results is available at: [https://github.com/ikergarcia1996/NoticIA](https://github.com/ikergarcia1996/NoticIA)

<p align="center">
    <img src="https://huggingface.co/somosnlp/Resumen_Noticias_Clickbait/resolve/main/Results_finetune.png" style="width: 100%;">
</p>

After training, our model summarizes with near-human ability, significantly surpassing any model in a zero-shot setting, while producing shorter, more concise summaries.


## Environmental Impact

For the carbon footprint estimation, we assumed a consumption of 400W per GPU and a carbon intensity of 0.083 kg/kWh: https://app.electricitymaps.com/map

- **Hardware Type:** 4 × NVIDIA A100 80GB
- **Hours used:** 2 hours
- **Compute Region:** Donostia, Basque Country, Spain
- **Carbon Emitted:** 0.3984 kg CO2


### Model Architecture and Objective

Decoder-only language model, fine-tuned to follow instructions. We employ the standard Next Token Prediction (NTP) loss for training. To prevent the loss associated with the article body tokens from overshadowing the loss of the summary output tokens, we compute the loss exclusively over the summary tokens.
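
The sketch below illustrates this loss masking, assuming the standard 🤗 Transformers convention that label positions set to -100 are ignored by the cross-entropy loss; the token ids are toy placeholders, and the real training code is in our repository.

```python
import torch

# Toy ids standing in for the tokenized prompt (instruction + article body)
# and the tokenized human summary; real ids come from the tokenizer.
prompt_ids = [415, 2201, 349]
summary_ids = [28705, 5183, 2]

input_ids = torch.tensor([prompt_ids + summary_ids])
labels = input_ids.clone()
labels[0, : len(prompt_ids)] = -100  # ignore prompt positions in the loss

# loss = model(input_ids=input_ids, labels=labels).loss  # NTP loss over summary only
```
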
### Compute Infrastructure

We conducted all our experiments on a machine equipped with four NVIDIA A100 GPUs, each with 80GB of memory, interconnected via NVLink. The machine features two AMD EPYC 7513 32-Core Processors and 1TB (1024GB) of RAM.


#### Software

- Huggingface Transformers: https://github.com/huggingface/transformers
- PEFT: https://github.com/huggingface/peft
- Deepspeed: https://github.com/microsoft/DeepSpeed
- Pytorch: https://pytorch.org/

Our code is available at [https://github.com/ikergarcia1996/NoticIA](https://github.com/ikergarcia1996/NoticIA)


## License

We release our model under the Apache 2.0 license.

## Citation

If you use this model, please cite our paper: [NoticIA: A Clickbait Article Summarization Dataset in Spanish](https://arxiv.org/abs/2404.07611)

**BibTeX:**

```
@misc{garciaferrero2024noticia,
      title={NoticIA: A Clickbait Article Summarization Dataset in Spanish}, 
      author={Iker García-Ferrero and Begoña Altuna},
      year={2024},
      eprint={2404.07611},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```



## More Information


This project was developed during the [Hackathon #Somos600M](https://somosnlp.org/hackathon) organized by SomosNLP. Demo endpoints were sponsored by HuggingFace.

**Team:** 


- [Iker García-Ferrero](https://huggingface.co/Iker)
- [Begoña Altuna](https://huggingface.co/baltuna)

**Contact**: {iker.garciaf,begona.altuna}@ehu.eus


This model was created by [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/) and [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139). We are NLP researchers at the University of the Basque Country, members of the [IXA](https://www.ixa.eus/) research group, and part of [HiTZ, the Basque Language Technology Center](https://www.hitz.eus/es).

<div style="display: flex; justify-content: space-around; width: 100%;">
  <div style="width: 50%;" align="left">
    <a href="http://ixa.si.ehu.es/">
      <img src="https://raw.githubusercontent.com/ikergarcia1996/Iker-Garcia-Ferrero/master/icons/ixa.png" width="50" height="50"  alt="Ixa NLP Group">
    </a>
  </div>
  <div style="width: 50%;" align="right">
    <a href="http://www.hitz.eus/">
      <img src="https://raw.githubusercontent.com/ikergarcia1996/Iker-Garcia-Ferrero/master/icons/Hitz.png" width="300" height="50" alt="HiTZ Basque Center for Language Technologies">
    </a>
  </div>
</div>