---
language:
  - es
task_categories:
  - translation
dataset_info:
  features:
    - name: Lectura Compleja
      dtype: string
    - name: Lectura Fácil
      dtype: string
  splits:
    - name: train
      num_bytes: 6108303.2
      num_examples: 1012
    - name: test
      num_bytes: 1527075.8
      num_examples: 253
  download_size: 3935898
  dataset_size: 7635379
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

According to the World Health Organization, one third of the world's population has comprehension difficulties. This includes people with intellectual and developmental disabilities, older adults, migrants, people with low literacy levels, and anyone who experiences comprehension challenges at some point in their lives. Cognitive accessibility, that is, the condition that texts, signs, technology, and pictograms must meet to be easily understood by everyone, is therefore a fundamental right. One approach to making texts cognitively accessible is to adapt them for easy reading; however, this process requires following specific rules and having experience in making this type of adaptation.

This dataset contains texts provided by Plena Inclusión La Rioja and Plena Inclusión España, two organizations of the associative movement that fights for the rights of people with intellectual and developmental disabilities and their families. Each text comes in two versions: the original text and its easy-to-read adaptation. We downloaded them from the organizations' websites as follows.

In the case of Plena Inclusión La Rioja, we only use the news, which are listed at https://www.plenainclusionlarioja.org/actualidad/noticias/. That page links to every news item on the site, so we collect those links by searching the HTML with the BeautifulSoup library in Python, looking for the "btn btn-secondary" class, whose anchors contain the URLs of the news. Some texts exist only in their original version, so we discard them: we use BeautifulSoup again to search for the "lecturafacil_texto" class, which is the class used to mark the easy-to-read parts. We save that part of the text in a txt file. The original version is found in the "articleBody" div, and we save it in another txt file. These two sets of files are the ones uploaded to this dataset. The code is shown below.

```python
import os
from bs4 import BeautifulSoup
from urllib.request import urlopen

# Collect the links to every news item from the listing page.
urls = []
url = "https://www.plenainclusionlarioja.org/actualidad/noticias/"
html = urlopen(url).read().decode("utf-8")
soup = BeautifulSoup(html, "html.parser")
for a in soup.find_all("a", {"class": "btn btn-secondary"}):
    nombre = a["href"].split("/actualidad/noticias/")[1]
    urls.append(url + nombre)

# Keep only the news that have an easy-to-read version and save both versions.
for enlace in urls:
    html = urlopen(enlace).read().decode("utf-8")
    soup = BeautifulSoup(html, "html.parser")
    mydivs = soup.find_all("div", {"class": "lecturafacil_texto"})
    if mydivs:  # skip news that exist only in the original version
        mydivsComplejo = soup.find_all("div", {"itemprop": "articleBody"})
        nombre = enlace.split("/")[-1]
        with open("./lecturaFacil/lf-" + nombre + ".txt", "w") as f:
            f.write(str(mydivs))
        with open("./lecturaCompleja/lc-" + nombre + ".txt", "w") as f:
            f.write(str(mydivsComplejo))
```

For the Plena Inclusión España data we also use the news, in this case listed at https://www.plenainclusion.org/noticias. We again collect the links with BeautifulSoup, but here we have to scroll through the 194 listing pages and take the links from the "elementor-post__read-more" class. A news item has a complex version when it contains a paragraph with the "enlace-lectura-dificil" class; in that case the "post-lectura-dificil" section holds the original text, the "articleBody" section holds the easy-to-read text, and we save them in two txt files, as shown in the following code.

```python
# Collect the links to every news item across the 194 listing pages.
enlaces = []
for i in range(194):
    url = "https://www.plenainclusion.org/noticias/?sf_paged=" + str(i)
    html = urlopen(url).read().decode("utf-8")
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", {"class": "elementor-post__read-more"}):
        enlaces.append(a["href"])

def quitar_figuras(secciones):
    # Remove every <figure>...</figure> block from the articleBody markup,
    # keeping only the text around the figures.
    partes = str(secciones).split("<figure")
    texto = partes[0]
    for parte in partes[1:]:
        if "</figure>" in parte:
            trozos = parte.split("</figure>")
            # Nested closing tags leave an extra chunk; keep the last one.
            texto = texto + (trozos[2] if len(trozos) > 2 else trozos[1])
    return texto

for en in enlaces:
    html = urlopen(en).read().decode("utf-8")
    soup = BeautifulSoup(html, "html.parser")
    nombre = en.split("/")[-2]
    lf = soup.find_all("section", {"itemprop": "articleBody"})
    texto = quitar_figuras(lf)
    # A paragraph with this class marks news that also have a complex version.
    if soup.find_all("p", {"class": "enlace-lectura-dificil"}):
        lc = soup.find_all("section", {"class": "post-lectura-dificil"})
        with open("./plenaInclusionEspaña/lecturaFacil/lf-" + nombre + ".txt", "w") as f:
            f.write(str(lf[0]))
        with open("./plenaInclusionEspaña/lecturaCompleja/lc-" + nombre + ".txt", "w") as f:
            f.write(str(lc[0]))
    else:
        # Only the easy-to-read version exists.
        with open("./plenaInclusionEspaña/soloFacil/" + nombre + ".txt", "w") as f:
            f.write(texto)
```
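The figure stripping above, based on splitting at `<figure` and `</figure>`, can also be written as a single regular-expression substitution; a minimal sketch with the standard library (the helper name `quitar_figuras_re` is ours):

```python
import re

def quitar_figuras_re(html):
    # Drop every <figure ...>...</figure> block, non-greedily,
    # even when the block spans several lines (DOTALL).
    return re.sub(r"<figure.*?</figure>", "", html, flags=re.DOTALL)
```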

Once we have the texts, we need to clean them and remove all the HTML code. The first step is to recover the line breaks, so we replace the `<p>`, `</p>` and `<br/>` tags with `"\n"`. Then we keep the contents of each container, using the divs. In the next step we delete the videos and images in order to keep only the text, and we extract the text from the URL references. Once all of this is done, we delete all the remaining HTML code and write the result to another txt file.

Finally, the original versions obtained from Plena Inclusión España contain the sentence "Este contenido NO está adaptado a Lectura Fácil", so we keep only the text after it. The code is shown below:

```python
def limpiarHTML(directorio):
    for carpeta, directorios, ficheros in os.walk(directorio):
        for fichero in ficheros:
            if fichero.endswith('txt') and fichero[-14:] != 'checkpoint.txt':
                with open(directorio + fichero) as contenido:
                    texto = contenido.read()
                # Paragraphs and line breaks.
                texto = texto.replace('<p>', '\n').replace('</p>', '\n').replace('<br/>', '\n')
                soup = BeautifulSoup(texto, 'html.parser')
                for div in soup.find_all('div', dir=True):  # container contents
                    texto = texto.replace(str(div), str(div.get_text()))
                soup = BeautifulSoup(texto, 'html.parser')
                for frame in soup.find_all('iframe'):  # videos and images
                    texto = texto.replace(str(frame), str(frame['src']))
                soup = BeautifulSoup(texto, 'html.parser')
                for a in soup.find_all('a', href=True):  # links
                    texto = texto.replace(str(a), str(a.get_text()))
                soup = BeautifulSoup(texto, 'html.parser')
                texto = soup.get_text()
                with open('./limpios/' + directorio[2:] + fichero, 'w') as f:
                    # Slice off the square brackets left over from the div list.
                    f.write(texto[1:-1])

                # The complex versions from Plena Inclusión España all carry the
                # same sentence on their second line; once the text is written,
                # look for it there and, if present, drop everything before it.
                with open('./limpios/' + directorio[2:] + fichero) as contenido2:
                    lines = contenido2.readlines()
                if len(lines) > 1 and lines[1] == " Este contenido NO está adaptado a Lectura Fácil\n":
                    with open('./limpios/' + directorio[2:] + fichero, 'w') as f:
                        for line in lines[2:]:
                            f.write(line)
```
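The sentinel-removal step at the end can be isolated into a small helper for testing; a minimal sketch of the same check (the function name `quitar_aviso` is ours):

```python
AVISO = " Este contenido NO está adaptado a Lectura Fácil\n"

def quitar_aviso(lines):
    # If the sentinel sentence sits on the second line, drop it and
    # everything before it, as limpiarHTML does when rewriting the file.
    if len(lines) > 1 and lines[1] == AVISO:
        return lines[2:]
    return lines
```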