
# Llama-2-7b-ocr

This model was released as part of the paper [Leveraging LLMs for Post-OCR Correction of Historical Newspapers](https://aclanthology.org/2024.lt4hala-1.14). It is a Llama 2 7B model instruction-tuned for post-OCR correction of historical English, trained on BLN600, a parallel corpus of 19th-century newspaper articles with machine and human transcriptions. The released weights are a PEFT adapter on top of the Llama 2 7B base model, which is downloaded automatically when the adapter is loaded.

## Usage

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, BitsAndBytesConfig
import torch

# Load the adapter (and base model) with 4-bit NF4 quantization to reduce GPU memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoPeftModelForCausalLM.from_pretrained(
    'pykale/llama-2-7b-ocr',
    quantization_config=bnb_config,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained('pykale/llama-2-7b-ocr')

# Example OCR output containing recognition errors (garbled words and characters).
ocr = "The defendant wits'fined �5 and costs."

# The prompt template the model was instruction-tuned with.
prompt = f"""### Instruction:
Fix the OCR errors in the provided text.

### Input:
{ocr}

### Response:
"""

input_ids = tokenizer(prompt, max_length=1024, return_tensors='pt', truncation=True).input_ids.cuda()
with torch.inference_mode():
    outputs = model.generate(input_ids=input_ids, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.1, top_k=40)

# generate() returns the prompt followed by the continuation;
# drop the echoed prompt to keep only the corrected text.
pred = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):].strip()

print(pred)
```
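
When correcting many passages, the generation steps above can be wrapped in a small helper. The sketch below is illustrative and not part of the released code: the function name `correct_ocr` is hypothetical, and it assumes the `model` and `tokenizer` objects loaded above.

```python
def correct_ocr(ocr_text: str, max_new_tokens: int = 1024) -> str:
    """Correct one OCR passage using the prompt template from the model card."""
    prompt = f"""### Instruction:
Fix the OCR errors in the provided text.

### Input:
{ocr_text}

### Response:
"""
    input_ids = tokenizer(prompt, max_length=1024, return_tensors='pt', truncation=True).input_ids.cuda()
    with torch.inference_mode():
        outputs = model.generate(
            input_ids=input_ids,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_p=0.1,
            top_k=40,
        )
    # Strip the echoed prompt, keeping only the model's correction.
    decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    return decoded[len(prompt):].strip()

print(correct_ocr("The defendant wits'fined �5 and costs."))
```

Note that the low `top_p` (0.1) keeps sampling close to greedy decoding, which suits a correction task where faithfulness to the input matters more than output diversity.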

## Citation

```bibtex
@inproceedings{thomas-etal-2024-leveraging,
    title = "Leveraging {LLM}s for Post-{OCR} Correction of Historical Newspapers",
    author = "Thomas, Alan and Gaizauskas, Robert and Lu, Haiping",
    editor = "Sprugnoli, Rachele and Passarotti, Marco",
    booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024",
    month = "may",
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lt4hala-1.14",
    pages = "116--121",
}
```