This model is a quantized version of TinyLlama/TinyLlama-1.1B-Chat-v1.0
and was exported to the OpenVINO format using optimum-intel via the nncf-quantization space.
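For reference, optimum-intel ships an export CLI that can produce a comparable INT4 OpenVINO export locally. The following is a sketch, assuming a recent optimum-intel release; the exact options used by the nncf-quantization space may differ, and the output directory name is illustrative:

optimum-cli export openvino --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --weight-format int4 tinyllama-1.1b-chat-ov-int4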
First, make sure you have optimum-intel installed:
pip install optimum[openvino]
To load the model, run the following:
from optimum.intel import OVModelForCausalLM

model_id = "NikolayL/TinyLlama-1.1B-Chat-v1.0-openvino-int4"
# from_pretrained downloads the OpenVINO IR weights and compiles the model
# for the default device (CPU).
model = OVModelForCausalLM.from_pretrained(model_id)
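Once loaded, the model works with the standard transformers generation API. A minimal inference sketch, assuming the usual AutoTokenizer and chat-template flow; the prompt and generation settings are illustrative:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
# TinyLlama-Chat ships a chat template, so format the prompt with it.
messages = [{"role": "user", "content": "What is OpenVINO?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))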