{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "99b677ff-3399-4f27-ac0f-782bfe25f151", "metadata": {}, "source": [ "# Grammatical Error Correction with OpenVINO\n", "\n", "AI-based auto-correction products are becoming increasingly popular due to their ease of use, editing speed, and affordability. These products improve the quality of written text in emails, blogs, and chats.\n", "\n", "Grammatical Error Correction (GEC) is the task of correcting different types of errors in text such as spelling, punctuation, grammatical and word choice errors.\n", "GEC is typically formulated as a sentence correction task. A GEC system takes a potentially erroneous sentence as input and is expected to transform it into a more correct version. See the example given below:\n", "\n", "| Input (Erroneous) | Output (Corrected) |\n", "| --------------------------------------------------------- | -------------------------------------------------------- |\n", "| I like to rides my bicycle. | I like to ride my bicycle. |\n", "\n", " As shown in the image below, different types of errors in written language can be corrected.\n", "\n", "![error_types](https://cdn-images-1.medium.com/max/540/1*Voez5hEn5MU8Knde3fIZfw.png)\n", "\n", "This tutorial shows how to perform grammatical error correction using OpenVINO. We will use pre-trained models from the [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) library. To simplify the user experience, the [Hugging Face Optimum](https://huggingface.co/docs/optimum) library is used to convert the models to OpenVINOβ’ IR format.\n", "\n", "It consists of the following steps:\n", "\n", "- Install prerequisites\n", "- Download and convert models from a public source using the [OpenVINO integration with Hugging Face Optimum](https://huggingface.co/blog/openvino).\n", "- Create an inference pipeline for grammatical error checking\n", "- Optimize grammar correction pipeline with [NNCF](https://github.com/openvinotoolkit/nncf/) quantization\n", "- Compare original and optimized pipelines from performance and accuracy standpoints\n", "\n", "\n", "#### Table of contents:\n", "\n", "- [How does it work?](#How-does-it-work?)\n", "- [Prerequisites](#Prerequisites)\n", "- [Download and Convert Models](#Download-and-Convert-Models)\n", " - [Select inference device](#Select-inference-device)\n", " - [Grammar Checker](#Grammar-Checker)\n", " - [Grammar Corrector](#Grammar-Corrector)\n", "- [Prepare Demo Pipeline](#Prepare-Demo-Pipeline)\n", "- [Quantization](#Quantization)\n", " - [Run Quantization](#Run-Quantization)\n", " - [Compare model size, performance and accuracy](#Compare-model-size,-performance-and-accuracy)\n", "- [Interactive demo](#Interactive-demo)\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "efafd7fb-95ea-47c0-9441-7b2bbb8e6b89", "metadata": {}, "source": [ "## How does it work?\n", "[back to top β¬οΈ](#Table-of-contents:)\n", "\n", "A Grammatical Error Correction task can be thought of as a sequence-to-sequence task where a model is trained to take a grammatically incorrect sentence as input and return a grammatically correct sentence as output. 
We will use the [FLAN-T5](https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis) model finetuned on an expanded version of the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset.\n", "\n", "The version of FLAN-T5 released with the [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) paper is an enhanced version of [T5](https://huggingface.co/t5-large) that has been finetuned on a combination of tasks. The paper explores instruction finetuning with a particular focus on scaling the number of tasks, scaling the model size, and finetuning on chain-of-thought data. The paper finds that, overall, instruction finetuning is a general method that improves the performance and usability of pre-trained language models.\n", "\n", "![flan-t5_training](https://production-media.paperswithcode.com/methods/a04cb14e-e6b8-449e-9487-bc4262911d74.png)\n", "\n", "For more details about the model, please check out the [paper](https://arxiv.org/abs/2210.11416), the original [repository](https://github.com/google-research/t5x), and the Hugging Face [model card](https://huggingface.co/google/flan-t5-large).\n", "\n", "Additionally, to reduce the number of sentences that need to be processed by the corrector, you can first perform grammatical correctness checking. This task can be treated as simple binary text classification, where the model gets input text and predicts label 1 if the text is grammatically acceptable and label 0 if it contains errors. You will use the [roberta-base-CoLA](https://huggingface.co/textattack/roberta-base-CoLA) model, the RoBERTa Base model finetuned on the CoLA dataset. The RoBERTa model was proposed in the [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) paper. It builds on BERT and modifies key hyperparameters, removing the next-sentence pre-training objective and training with much larger mini-batches and learning rates. Additional details about the model can be found in a [blog post](https://ai.facebook.com/blog/roberta-an-optimized-method-for-pretraining-self-supervised-nlp-systems/) by Meta AI and in the [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/roberta).\n", "\n", "Now that we know more about FLAN-T5 and RoBERTa, let us get started. 🚀" ] }, { "attachments": {}, "cell_type": "markdown", "id": "ed9a760a-aaf7-41f6-ab0d-da993e486336", "metadata": {}, "source": [ "## Prerequisites\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "First, we need to install the [Hugging Face Optimum](https://huggingface.co/docs/optimum) library accelerated by OpenVINO integration.\n", "The Hugging Face Optimum API is a high-level API that enables us to convert and quantize models from the Hugging Face Transformers library to the OpenVINO™ IR format. For more details, refer to the [Hugging Face Optimum documentation](https://huggingface.co/docs/optimum/intel/inference)."
] }, { "cell_type": "code", "execution_count": 1, "id": "2974cad4-bd3f-4552-82ac-ebd21bf75d9d", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:41.250268200Z", "start_time": "2023-09-27T12:36:41.126825900Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n", "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install -q \"torch>=2.1.0\" \"git+https://github.com/huggingface/optimum-intel.git\" \"openvino>=2024.0.0\" onnx tqdm \"gradio>=4.19\" \"transformers>=4.33.0\" --extra-index-url https://download.pytorch.org/whl/cpu\n", "%pip install -q \"nncf>=2.9.0\" datasets jiwer" ] }, { "attachments": {}, "cell_type": "markdown", "id": "c13b157a-2bbb-49db-9046-47c2b6ba2953", "metadata": {}, "source": [ "## Download and Convert Models\n", "[back to top β¬οΈ](#Table-of-contents:)\n", "\n", "Optimum Intel can be used to load optimized models from the [Hugging Face Hub](https://huggingface.co/docs/optimum/intel/hf.co/models) and create pipelines to run an inference with OpenVINO Runtime using Hugging Face APIs. The Optimum Inference models are API compatible with Hugging Face Transformers models. This means we just need to replace `AutoModelForXxx` class with the corresponding `OVModelForXxx` class.\n", "\n", "Below is an example of the RoBERTa text classification model\n", "\n", "```diff\n", "-from transformers import AutoModelForSequenceClassification\n", "+from optimum.intel.openvino import OVModelForSequenceClassification\n", "from transformers import AutoTokenizer, pipeline\n", "\n", "model_id = \"textattack/roberta-base-CoLA\"\n", "-model = AutoModelForSequenceClassification.from_pretrained(model_id)\n", "+model = OVModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)\n", "```\n", "\n", "Model class initialization starts with calling `from_pretrained` method. When downloading and converting Transformers model, the parameter `from_transformers=True` should be added. We can save the converted model for the next usage with the `save_pretrained` method.\n", "Tokenizer class and pipelines API are compatible with Optimum models." ] }, { "cell_type": "code", "execution_count": 2, "id": "b99c7c6c-256d-43ae-9b8b-fc1d4f501e06", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:45.293606200Z", "start_time": "2023-09-27T12:36:41.140617200Z" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2024-03-25 11:56:04.043628: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. 
To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n", "2024-03-25 11:56:04.045940: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n", "2024-03-25 11:56:04.079112: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n", "2024-03-25 11:56:04.079147: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n", "2024-03-25 11:56:04.079167: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n", "2024-03-25 11:56:04.085243: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n", "2024-03-25 11:56:04.085971: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n", "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2024-03-25 11:56:05.314633: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino\n" ] } ], "source": [ "from pathlib import Path\n", "from transformers import pipeline, AutoTokenizer\n", "from optimum.intel.openvino import OVModelForSeq2SeqLM, OVModelForSequenceClassification" ] }, { "attachments": {}, "cell_type": "markdown", "id": "833e0871-c828-4104-a986-230a27c913a5", "metadata": {}, "source": [ "### Select inference device\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Select the device from the dropdown list for running inference using OpenVINO." ] }, { "cell_type": "code", "execution_count": 3, "id": "053b4f68-a329-43ac-920c-9d86949edc05", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:45.365875Z", "start_time": "2023-09-27T12:36:45.358347Z" } }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "42061109f15641afbc97b6ec04d77682", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Dropdown(description='Device:', index=3, options=('CPU', 'GPU.0', 'GPU.1', 'AUTO'), value='AUTO')" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import ipywidgets as widgets\n", "import openvino as ov\n", "\n", "core = ov.Core()\n", "\n", "device = widgets.Dropdown(\n", "    options=core.available_devices + [\"AUTO\"],\n", "    value=\"AUTO\",\n", "    description=\"Device:\",\n", "    disabled=False,\n", ")\n", "\n", "device" ] }, { "attachments": {}, "cell_type": "markdown", "id": "6131a0ec-654e-435e-a668-55ad33cff74b", "metadata": {}, "source": [ "### Grammar Checker\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "47af0ecf-99ff-4852-bfaa-6692caeaca21", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:46.565522500Z", "start_time": "2023-09-27T12:36:45.374663900Z" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Framework not specified.
Using pt to export the model.\n", "Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']\n", "- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", "Using the export variant default. Available variants are:\n", " - default: The default ONNX variant.\n", "Using framework PyTorch: 2.2.1+cpu\n", "Overriding 1 configuration item(s)\n", "\t- use_cache -> False\n", "/home/ea/miniconda3/lib/python3.11/site-packages/transformers/modeling_utils.py:4225: FutureWarning: `_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead\n", " warnings.warn(\n", "Compiling the model to AUTO ...\n" ] } ], "source": [ "grammar_checker_model_id = \"textattack/roberta-base-CoLA\"\n", "grammar_checker_dir = Path(\"roberta-base-cola\")\n", "grammar_checker_tokenizer = AutoTokenizer.from_pretrained(grammar_checker_model_id)\n", "\n", "if grammar_checker_dir.exists():\n", "    grammar_checker_model = OVModelForSequenceClassification.from_pretrained(grammar_checker_dir, device=device.value)\n", "else:\n", "    grammar_checker_model = OVModelForSequenceClassification.from_pretrained(grammar_checker_model_id, export=True, device=device.value, load_in_8bit=False)\n", "    grammar_checker_model.save_pretrained(grammar_checker_dir)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "482a5d75-916a-4363-bf24-5b642a6bf437", "metadata": {}, "source": [ "Let us check how the model works, using the inference pipeline for the `text-classification` task. You can find more information about using Hugging Face inference pipelines in this [tutorial](https://huggingface.co/docs/transformers/pipeline_tutorial)." ] }, { "cell_type": "code", "execution_count": 5, "id": "90e48d59-9eea-4962-ac9a-fc9a6330b406", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:46.609135600Z", "start_time": "2023-09-27T12:36:46.570867800Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "input text: They are moved by salar energy\n", "predicted label: no errors\n", "predicted score: 0.88\n" ] } ], "source": [ "input_text = \"They are moved by salar energy\"\n", "grammar_checker_pipe = pipeline(\n", "    \"text-classification\",\n", "    model=grammar_checker_model,\n", "    tokenizer=grammar_checker_tokenizer,\n", ")\n", "result = grammar_checker_pipe(input_text)[0]\n", "print(f\"input text: {input_text}\")\n", "print(f'predicted label: {\"no errors\" if result[\"label\"] == \"LABEL_1\" else \"contains_errors\"}')\n", "print(f'predicted score: {result[\"score\"]:.2}')" ] }, { "attachments": {}, "cell_type": "markdown", "id": "5c4e358c-bbf8-4ea8-9b19-d8616c41562d", "metadata": {}, "source": [ "Great! The checker labels the sample as grammatically acceptable (its only flaw is a misspelling), but with a confidence score below the 0.9 threshold used later, so the demo pipeline would still route it to the corrector."
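, "\n", "Under the hood, the pipeline simply runs the tokenizer and a forward pass of the OpenVINO model. A rough equivalent is sketched below (assuming the Optimum model returns logits as `torch` tensors, which is the default behavior):\n", "\n", "```python\n", "inputs = grammar_checker_tokenizer(input_text, return_tensors=\"pt\")\n", "logits = grammar_checker_model(**inputs).logits\n", "# Probability of LABEL_1 (\"acceptable\"); index 1 corresponds to the CoLA positive class\n", "print(logits.softmax(dim=-1)[0, 1].item())\n", "```"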
] }, { "attachments": {}, "cell_type": "markdown", "id": "cdba3c17-9f94-4d1c-afae-39c857caf5af", "metadata": {}, "source": [ "### Grammar Corrector\n", "[back to top β¬οΈ](#Table-of-contents:)\n", "\n", "The steps for loading the Grammar Corrector model are very similar, except for the model class that is used. Because FLAN-T5 is a sequence-to-sequence text generation model, we should use the `OVModelForSeq2SeqLM` class and the `text2text-generation` pipeline to run it." ] }, { "cell_type": "code", "execution_count": 6, "id": "a4771627-a3d1-4023-a016-c668ec079f34", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:54.537211100Z", "start_time": "2023-09-27T12:36:46.613175900Z" }, "test_replace": { "flan-t5-large-grammar-synthesis": "grammar-synthesis-small" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Framework not specified. Using pt to export the model.\n", "Using the export variant default. Available variants are:\n", " - default: The default ONNX variant.\n", "Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.\n", "Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}\n", "Using framework PyTorch: 2.2.1+cpu\n", "Overriding 1 configuration item(s)\n", "\t- use_cache -> False\n", "/home/ea/miniconda3/lib/python3.11/site-packages/transformers/modeling_utils.py:4225: FutureWarning: `_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead\n", " warnings.warn(\n", "Using framework PyTorch: 2.2.1+cpu\n", "Overriding 1 configuration item(s)\n", "\t- use_cache -> True\n", "/home/ea/miniconda3/lib/python3.11/site-packages/transformers/modeling_utils.py:943: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\n", " if causal_mask.shape[1] < attention_mask.shape[1]:\n", "Using framework PyTorch: 2.2.1+cpu\n", "Overriding 1 configuration item(s)\n", "\t- use_cache -> True\n", "/home/ea/miniconda3/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py:509: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\n", " elif past_key_value.shape[2] != key_value_states.shape[1]:\n", "Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.\n", "Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}\n", "Compiling the encoder to AUTO ...\n", "Compiling the decoder to AUTO ...\n", "Compiling the decoder to AUTO ...\n", "Some non-default generation parameters are set in the model config. 
These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.\n", "Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}\n" ] } ], "source": [ "grammar_corrector_model_id = \"pszemraj/flan-t5-large-grammar-synthesis\"\n", "grammar_corrector_dir = Path(\"flan-t5-large-grammar-synthesis\")\n", "grammar_corrector_tokenizer = AutoTokenizer.from_pretrained(grammar_corrector_model_id)\n", "\n", "if grammar_corrector_dir.exists():\n", "    grammar_corrector_model = OVModelForSeq2SeqLM.from_pretrained(grammar_corrector_dir, device=device.value)\n", "else:\n", "    grammar_corrector_model = OVModelForSeq2SeqLM.from_pretrained(grammar_corrector_model_id, export=True, device=device.value)\n", "    grammar_corrector_model.save_pretrained(grammar_corrector_dir)" ] }, { "cell_type": "code", "execution_count": 7, "id": "cf3d0d24-c94a-42c7-b603-499bd9d251d6", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:54.543943100Z", "start_time": "2023-09-27T12:36:54.543943100Z" } }, "outputs": [], "source": [ "grammar_corrector_pipe = pipeline(\n", "    \"text2text-generation\",\n", "    model=grammar_corrector_model,\n", "    tokenizer=grammar_corrector_tokenizer,\n", ")" ] }, { "cell_type": "code", "execution_count": 8, "id": "4bdf3a9d-1b4d-415f-8e7a-6be89f700898", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:55.348843Z", "start_time": "2023-09-27T12:36:54.544960300Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "input text: They are moved by salar energy\n", "generated text: They are powered by solar energy.\n" ] } ], "source": [ "result = grammar_corrector_pipe(input_text)[0]\n", "print(f\"input text: {input_text}\")\n", "print(f'generated text: {result[\"generated_text\"]}')" ] }, { "attachments": {}, "cell_type": "markdown", "id": "992cb162-efd3-49da-99c5-0c44af34afaf", "metadata": {}, "source": [ "Nice! The result looks pretty good!" ] }, { "attachments": {}, "cell_type": "markdown", "id": "69faa673-45fd-481d-9573-4f54ea17fb77", "metadata": {}, "source": [ "## Prepare Demo Pipeline\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Now let us put everything together and create the pipeline for grammar correction.\n", "The pipeline accepts input text, verifies its correctness, and generates the corrected version if required. It consists of several steps:\n", "\n", "1. Split the text into sentences.\n", "2. Check the grammatical correctness of each sentence using the Grammar Checker.\n", "3. Generate an improved version of the sentence if required." ] }, { "cell_type": "code", "execution_count": 9, "id": "15edc678-6bf7-4241-a230-5de5dd251d5b", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:55.353403100Z", "start_time": "2023-09-27T12:36:55.350607600Z" } }, "outputs": [], "source": [ "import re\n", "import transformers\n", "from tqdm.notebook import tqdm\n", "\n", "\n", "def split_text(text: str) -> list:\n", "    \"\"\"\n", "    Split a string of text into a list of sentence batches.\n", "\n", "    Parameters:\n", "    text (str): The text to be split into sentence batches.\n", "\n", "    Returns:\n", "    list: A list of sentence batches.
Each sentence batch is a list of sentences.\n", "    \"\"\"\n", "    # Split the text into sentences using regex\n", "    sentences = re.split(r\"(?<=[^A-Z].[.?]) +(?=[A-Z])\", text)\n", "\n", "    # Initialize a list to store the sentence batches\n", "    sentence_batches = []\n", "\n", "    # Initialize a temporary list to store the current batch of sentences\n", "    temp_batch = []\n", "\n", "    # Iterate through the sentences\n", "    for sentence in sentences:\n", "        # Add the sentence to the temporary batch\n", "        temp_batch.append(sentence)\n", "\n", "        # Once the batch contains at least two sentences, or this is the last sentence, add the batch to the list of sentence batches\n", "        if len(temp_batch) >= 2 and len(temp_batch) <= 3 or sentence == sentences[-1]:\n", "            sentence_batches.append(temp_batch)\n", "            temp_batch = []\n", "\n", "    return sentence_batches\n", "\n", "\n", "def correct_text(\n", "    text: str,\n", "    checker: transformers.pipelines.Pipeline,\n", "    corrector: transformers.pipelines.Pipeline,\n", "    separator: str = \" \",\n", ") -> str:\n", "    \"\"\"\n", "    Correct the grammar in a string of text using a text-classification and text-generation pipeline.\n", "\n", "    Parameters:\n", "    text (str): The input text to be corrected.\n", "    checker (transformers.pipelines.Pipeline): The text-classification pipeline to use for checking the grammar quality of the text.\n", "    corrector (transformers.pipelines.Pipeline): The text-generation pipeline to use for correcting the text.\n", "    separator (str, optional): The separator to use when joining the corrected text into a single string. Default is a space character.\n", "\n", "    Returns:\n", "    str: The corrected text.\n", "    \"\"\"\n", "    # Split the text into sentence batches\n", "    sentence_batches = split_text(text)\n", "\n", "    # Initialize a list to store the corrected text\n", "    corrected_text = []\n", "\n", "    # Iterate through the sentence batches\n", "    for batch in tqdm(sentence_batches, total=len(sentence_batches), desc=\"correcting text..\"):\n", "        # Join the sentences in the batch into a single string\n", "        raw_text = \" \".join(batch)\n", "\n", "        # Check the grammar quality of the text using the text-classification pipeline\n", "        results = checker(raw_text)\n", "\n", "        # Correct the text unless the checker confidently (score >= 0.9) labels it LABEL_1, that is, grammatically acceptable\n", "        if results[0][\"label\"] != \"LABEL_1\" or (results[0][\"label\"] == \"LABEL_1\" and results[0][\"score\"] < 0.9):\n", "            # Correct the text using the text-generation pipeline\n", "            corrected_batch = corrector(raw_text)\n", "            corrected_text.append(corrected_batch[0][\"generated_text\"])\n", "        else:\n", "            corrected_text.append(raw_text)\n", "\n", "    # Join the corrected text into a single string\n", "    corrected_text = separator.join(corrected_text)\n", "\n", "    return corrected_text" ] }, { "attachments": {}, "cell_type": "markdown", "id": "26d3d759-3cb2-418d-82f8-3be2e445916a", "metadata": {}, "source": [ "Let us see it in action."
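, "\n", "As a quick illustration of the batching logic, `split_text` groups consecutive sentences into batches of two, with any leftover final sentence forming its own batch (the expected result is shown as a comment):\n", "\n", "```python\n", "split_text(\"First sentence. Second sentence. Third sentence.\")\n", "# -> [['First sentence.', 'Second sentence.'], ['Third sentence.']]\n", "```"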
] }, { "cell_type": "code", "execution_count": 10, "id": "aee397f5-12cb-460b-8824-327f19af8e5f", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:59.264642800Z", "start_time": "2023-09-27T12:36:55.360645Z" } }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "fad1db66c31644c0a9e9ed1db7a749fb", "version_major": 2, "version_minor": 0 }, "text/plain": [ "correcting text..: 0%| | 0/1 [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "default_text = (\n", " \"Most of the course is about semantic or content of language but there are also interesting\"\n", " \" topics to be learned from the servicefeatures except statistics in characters in documents.At\"\n", " \" this point, He introduces herself as his native English speaker and goes on to say that if\"\n", " \" you contine to work on social scnce\"\n", ")\n", "\n", "corrected_text = correct_text(default_text, grammar_checker_pipe, grammar_corrector_pipe)" ] }, { "cell_type": "code", "execution_count": 11, "id": "5862ec36-8d77-418f-9295-5dc644b50068", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:59.316574700Z", "start_time": "2023-09-27T12:36:59.263138800Z" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "input text: Most of the course is about semantic or content of language but there are also interesting topics to be learned from the servicefeatures except statistics in characters in documents.At this point, He introduces herself as his native English speaker and goes on to say that if you contine to work on social scnce\n", "\n", "generated text: Most of the course is about the semantic content of language but there are also interesting topics to be learned from the service features except statistics in characters in documents. At this point, she introduces herself as a native English speaker and goes on to say that if you continue to work on social science, you will continue to be successful.\n" ] } ], "source": [ "print(f\"input text: {default_text}\\n\")\n", "print(f\"generated text: {corrected_text}\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "21c60879", "metadata": { "jupyter": { "outputs_hidden": false } }, "source": [ "## Quantization\n", "[back to top β¬οΈ](#Table-of-contents:)\n", "\n", "[NNCF](https://github.com/openvinotoolkit/nncf/) enables post-training quantization by adding quantization layers into model graph and then using a subset of the training dataset to initialize the parameters of these additional quantization layers. Quantized operations are executed in `INT8` instead of `FP32`/`FP16` making model inference faster.\n", "\n", "Grammar checker model takes up a tiny portion of the whole text correction pipeline so we optimize only the grammar corrector model. Grammar corrector itself consists of three models: encoder, first call decoder and decoder with past. The last model's share of inference dominates the other ones. Because of this we quantize only it.\n", "\n", "The optimization process contains the following steps:\n", "\n", "1. Create a calibration dataset for quantization.\n", "2. Run `nncf.quantize()` to obtain quantized models.\n", "3. Serialize the `INT8` model using `openvino.save_model()` function." ] }, { "attachments": {}, "cell_type": "markdown", "id": "f87e8395", "metadata": { "jupyter": { "outputs_hidden": false } }, "source": [ "Please select below whether you would like to run quantization to improve model inference speed." 
] }, { "cell_type": "code", "execution_count": 12, "id": "cbedc1a5", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:36:59.316574700Z", "start_time": "2023-09-27T12:36:59.306224100Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "be9c720d620744c88255ffd47c0f7663", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Checkbox(value=True, description='Quantization')" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "to_quantize = widgets.Checkbox(\n", " value=True,\n", " description=\"Quantization\",\n", " disabled=False,\n", ")\n", "\n", "to_quantize" ] }, { "attachments": {}, "cell_type": "markdown", "id": "b2b35b38", "metadata": { "jupyter": { "outputs_hidden": false } }, "source": [ "### Run Quantization\n", "[back to top β¬οΈ](#Table-of-contents:)\n", "\n", "Below we retrieve the quantized model. Please see `utils.py` for source code. Quantization is relatively time-consuming and will take some time to complete." ] }, { "cell_type": "code", "execution_count": 13, "id": "b1e36c1e", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:37:08.158695900Z", "start_time": "2023-09-27T12:36:59.307312900Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false }, "test_replace": { "calibration_dataset_size=CALIBRATION_DATASET_SIZE,": "calibration_dataset_size=1," } }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3f8e1722e24043cd9aec9a2e214aeac6", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading readme: 0%| | 0.00/5.94k [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "Downloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 148k/148k [00:01<00:00, 79.1kB/s]\n", "Downloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 141k/141k [00:01<00:00, 131kB/s]\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "f243e883be084cfbad2b46403f25d9e4", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Generating validation split: 0%| | 0/755 [00:00, ? examples/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "f79e153f02c747b2a5559d872acdf098", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Generating test split: 0%| | 0/748 [00:00, ? examples/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a39ea77517a34a54878f18b053cb053f", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Collecting calibration data: 0%| | 0/10 [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "c04a10727b0943f1b2e24d51948a7a1f", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "14ab11fb72da401cbe101525c1d3b258", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:72 ignored nodes were found by name in the NNCFGraph\n", "INFO:nncf:145 ignored nodes were found by name in the NNCFGraph\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "8e2cd1c1802044c0ba402a1ef6e6862a", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "Compiling the encoder to AUTO ...\n", "Compiling the decoder to AUTO ...\n", "Compiling the decoder to AUTO ...\n", "Compiling the decoder to AUTO ...\n" ] } ], "source": [ "from utils import get_quantized_pipeline, CALIBRATION_DATASET_SIZE\n", "\n", "grammar_corrector_pipe_fp32 = grammar_corrector_pipe\n", "grammar_corrector_pipe_int8 = None\n", "if to_quantize.value:\n", " quantized_model_path = Path(\"quantized_decoder_with_past\") / \"openvino_model.xml\"\n", " grammar_corrector_pipe_int8 = get_quantized_pipeline(\n", " grammar_corrector_pipe_fp32,\n", " grammar_corrector_tokenizer,\n", " core,\n", " grammar_corrector_dir,\n", " quantized_model_path,\n", " device.value,\n", " calibration_dataset_size=CALIBRATION_DATASET_SIZE,\n", " )" ] }, { "attachments": {}, "cell_type": "markdown", "id": "50123853-f621-4cab-b836-b8f210d03c04", "metadata": {}, "source": [ "Let's see correction results. The generated texts for quantized INT8 model and original FP32 model should be almost the same." ] }, { "cell_type": "code", "execution_count": 14, "id": "86d39904-21a8-4125-bb1d-1785aaadd85a", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:37:11.035199500Z", "start_time": "2023-09-27T12:37:08.172901100Z" } }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "216c8924e3b34353a042836cf0b58545", "version_major": 2, "version_minor": 0 }, "text/plain": [ "correcting text..: 0%| | 0/1 [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Input text: Most of the course is about semantic or content of language but there are also interesting topics to be learned from the servicefeatures except statistics in characters in documents.At this point, He introduces herself as his native English speaker and goes on to say that if you contine to work on social scnce\n", "\n", "Generated text by INT8 model: Most of the course is about semantics or content of language but there are also interesting topics to be learned from the service features except statistics in characters in documents. At this point, she introduces himself as a native English speaker and goes on to say that if you continue to work on social science, you will continue to do so.\n" ] } ], "source": [ "if to_quantize.value:\n", " corrected_text_int8 = correct_text(default_text, grammar_checker_pipe, grammar_corrector_pipe_int8)\n", " print(f\"Input text: {default_text}\\n\")\n", " print(f\"Generated text by INT8 model: {corrected_text_int8}\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "176da8b1", "metadata": { "jupyter": { "outputs_hidden": false } }, "source": [ "### Compare model size, performance and accuracy\n", "[back to top β¬οΈ](#Table-of-contents:)\n", "\n", "First, we compare file size of `FP32` and `INT8` models." 
] }, { "cell_type": "code", "execution_count": 15, "id": "e8277b8b", "metadata": { "ExecuteTime": { "end_time": "2023-09-27T12:37:11.039089700Z", "start_time": "2023-09-27T12:37:11.038799100Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model footprint comparison:\n", " * FP32 IR model size: 1658150.25 KB\n", " * INT8 IR model size: 415711.39 KB\n" ] } ], "source": [ "from utils import calculate_compression_rate\n", "\n", "if to_quantize.value:\n", " model_size_fp32, model_size_int8 = calculate_compression_rate(\n", " grammar_corrector_dir / \"openvino_decoder_with_past_model.xml\",\n", " quantized_model_path,\n", " )" ] }, { "attachments": {}, "cell_type": "markdown", "id": "82d69626", "metadata": { "jupyter": { "outputs_hidden": false } }, "source": [ "Second, we compare two grammar correction pipelines from performance and accuracy stand points.\n", "\n", "Test split of