katuni4ka committed
Commit a06713a
1 Parent(s): 04a637e

Update README.md

Files changed (1): README.md (+51 -10)
README.md CHANGED
@@ -3,7 +3,7 @@ license: apache-2.0
license_link: https://choosealicense.com/licenses/apache-2.0/
---
# whisper-large-v3-int4-ov
- * Model creator: [Openai](https://huggingface.co/openai)
+ * Model creator: [OpenAI](https://huggingface.co/openai)
* Original model: [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3)

## Description
@@ -27,7 +27,7 @@ The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2024.4.0 and higher
* Optimum Intel 1.20.0 and higher

- ## Running Model Inference
+ ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

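The compatibility floors above can be checked against the active environment; a small sketch using only the standard library (the PyPI package names `openvino` and `optimum-intel` are assumptions):

```
import importlib.metadata as md

# Print installed versions next to the floors from the compatibility list above.
# Raises PackageNotFoundError if a package is not installed.
for pkg, floor in [("openvino", "2024.4.0"), ("optimum-intel", "1.20.0")]:
    print(f"{pkg}: installed {md.version(pkg)}, requires >= {floor}")
```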
@@ -38,21 +38,62 @@ pip install optimum[openvino]
2. Run model inference:

```
- from transformers import AutoTokenizer
- from optimum.intel.openvino import OVModelForCausalLM
+ from datasets import load_dataset
+ from transformers import AutoProcessor
+ from optimum.intel.openvino import OVModelForSpeechSeq2Seq

model_id = "OpenVINO/whisper-large-v3-int4-ov"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = OVModelForCausalLM.from_pretrained(model_id)
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = OVModelForSpeechSeq2Seq.from_pretrained(model_id)

- inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
- outputs = model.generate(**inputs, max_length=200)
- text = tokenizer.batch_decode(outputs)[0]
+ dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
+ sample = dataset[0]
+
+ input_features = processor(
+     sample["audio"]["array"],
+     sampling_rate=sample["audio"]["sampling_rate"],
+     return_tensors="pt",
+ ).input_features
+
+ outputs = model.generate(input_features)
+ text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
```

- For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
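The updated example above decodes a sample from a test dataset. As a complementary sketch (not part of the commit), the same OpenVINO model can be wrapped in a `transformers` ASR pipeline, which also chunks recordings longer than Whisper's 30-second window; the `audio.wav` path and `chunk_length_s` value are illustrative placeholders:

```
# Sketch: OpenVINO Whisper behind a transformers ASR pipeline.
# Assumptions: "audio.wav" is a placeholder path; chunk_length_s=30 mirrors
# Whisper's 30 s context window for long-form audio.
from optimum.intel.openvino import OVModelForSpeechSeq2Seq
from transformers import AutoProcessor, pipeline

model_id = "OpenVINO/whisper-large-v3-int4-ov"
processor = AutoProcessor.from_pretrained(model_id)
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id)

asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    chunk_length_s=30,  # split long recordings into 30 s windows
)

print(asr("audio.wav")["text"])
```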
+ ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
+
+ 1. Install packages required for using OpenVINO GenAI:
+ ```
+ pip install huggingface_hub
+ pip install -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino openvino-tokenizers openvino-genai
+ ```
+
+ 2. Download model from Hugging Face Hub:
+
+ ```
+ import huggingface_hub as hf_hub
+
+ model_id = "OpenVINO/whisper-large-v3-int4-ov"
+ model_path = "whisper-large-v3-int4-ov"
+
+ hf_hub.snapshot_download(model_id, local_dir=model_path)
+ ```
+
+ 3. Run model inference:
+
+ ```
+ import openvino_genai as ov_genai
+ import datasets
+
+ device = "CPU"
+ pipe = ov_genai.WhisperPipeline(model_path, device)
+
+ dataset = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
+ sample = dataset[0]["audio"]["array"]
+ print(pipe.generate(sample))
+ ```
+
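Step 3 above pulls a sample from a test dataset. A minimal sketch for transcribing a local recording instead, assuming `librosa` is available to resample to the 16 kHz mono input Whisper expects (`speech.wav` is a placeholder; the generation-config fields follow the OpenVINO GenAI whisper sample):

```
import librosa
import openvino_genai as ov_genai

# Whisper consumes 16 kHz mono float samples; librosa resamples on load.
raw_speech, _ = librosa.load("speech.wav", sr=16000)

pipe = ov_genai.WhisperPipeline("whisper-large-v3-int4-ov", "CPU")  # or "GPU"

config = pipe.get_generation_config()
config.task = "transcribe"
config.return_timestamps = True

result = pipe.generate(raw_speech.tolist(), config)
print(result)  # full transcription
for chunk in result.chunks:  # per-segment timestamps
    print(f"[{chunk.start_ts:.2f}s -> {chunk.end_ts:.2f}s] {chunk.text}")
```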
+ More GenAI usage examples can be found in OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
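For reproducible setups, the download in step 2 can also pin a snapshot; a hedged variant (the `revision` value here is a placeholder):

```
import huggingface_hub as hf_hub

# snapshot_download returns the local directory holding the OpenVINO IR files;
# pinning revision guards against upstream updates ("main" is a placeholder).
model_path = hf_hub.snapshot_download(
    "OpenVINO/whisper-large-v3-int4-ov",
    local_dir="whisper-large-v3-int4-ov",
    revision="main",
)
print(model_path)
```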

## Limitations