katuni4ka committed on
Commit
e00c831
1 Parent(s): 042c32b

add genai usage

Files changed (1)
  1. README.md +36 -10
README.md CHANGED
@@ -28,7 +28,7 @@ The provided OpenVINO™ IR model is compatible with:
 * OpenVINO version 2024.1.0 and higher
 * Optimum Intel 1.16.0 and higher

- ## Running Model Inference

 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

@@ -46,20 +46,46 @@ model_id = "OpenVINO/mistral-7b-instruct-v0.1-int8-ov"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = OVModelForCausalLM.from_pretrained(model_id)

- messages = [
- {"role": "user", "content": "What is your favourite condiment?"},
- {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
- {"role": "user", "content": "Do you have mayonnaise recipes?"}
- ]
-
- inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
-
- outputs = model.generate(inputs, max_new_tokens=20)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```

- For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).

 ## Limitations
 
 
 * OpenVINO version 2024.1.0 and higher
 * Optimum Intel 1.16.0 and higher

+ ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = OVModelForCausalLM.from_pretrained(model_id)

+ inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
+
+ outputs = model.generate(**inputs, max_length=200)
+ text = tokenizer.batch_decode(outputs)[0]
+ print(text)
+ ```
+
+ For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).

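Since this change drops the chat-template snippet shown in the removed lines above, a runnable restatement of that removed flow is kept here for reference; a sketch assuming the corrected `model_id` from the hunk header:

```
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "OpenVINO/mistral-7b-instruct-v0.1-int8-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Render a multi-turn conversation with the tokenizer's built-in chat template
messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```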
+ ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)

+ 1. Install packages required for using OpenVINO GenAI:
+ ```
+ pip install openvino-genai huggingface_hub
 ```

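A quick way to confirm both packages are present before moving on (standard pip tooling, nothing specific to OpenVINO):

```
pip show openvino-genai huggingface_hub
```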
+ 2. Download the model from the Hugging Face Hub:
+
+ ```
+ import huggingface_hub as hf_hub
+
+ model_id = "OpenVINO/<model_name>"
+ model_path = "<model_name>"
+
+ hf_hub.snapshot_download(model_id, local_dir=model_path)
+ ```
+
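The same snapshot can also be fetched from a shell; a sketch assuming a recent huggingface_hub that provides the huggingface-cli entry point, with `<model_name>` substituted as in the step above:

```
huggingface-cli download OpenVINO/<model_name> --local-dir <model_name>
```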
+ 3. Run model inference:
+
+ ```
+ import openvino_genai as ov_genai
+
+ device = "CPU"
+ pipe = ov_genai.LLMPipeline(model_path, device)
+ print(pipe.generate("What is OpenVINO?"))
+ ```
+
+ More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
+
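One pattern from those samples worth showing inline: generation can be tuned per call. A minimal sketch, assuming the `<model_name>` directory from step 2 and openvino-genai's keyword-argument form of `generate` (the exact set of accepted options is version dependent):

```
import openvino_genai as ov_genai

# Assumes the model was downloaded to <model_name> in step 2
pipe = ov_genai.LLMPipeline("<model_name>", "CPU")

# Keyword arguments are mapped onto the pipeline's generation config:
# bound the reply length and enable sampling
print(pipe.generate("What is OpenVINO?", max_new_tokens=100, do_sample=True, temperature=0.7))
```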

 ## Limitations