{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# OpenVINO™ model conversion API\n", "\n", "This notebook shows how to convert a model from original framework format to OpenVINO Intermediate Representation (IR).\n", "\n", "\n", "#### Table of contents:\n", "\n", "- [OpenVINO IR format](#OpenVINO-IR-format)\n", "- [IR preparation with Python conversion API and Model Optimizer command-line tool](#IR-preparation-with-Python-conversion-API-and-Model-Optimizer-command-line-tool)\n", "- [Fetching example models](#Fetching-example-models)\n", "- [Basic conversion](#Basic-conversion)\n", "- [Model conversion parameters](#Model-conversion-parameters)\n", " - [Setting Input Shapes](#Setting-Input-Shapes)\n", " - [Cutting Off Parts of a Model](#Cutting-Off-Parts-of-a-Model)\n", " - [Embedding Preprocessing Computation](#Embedding-Preprocessing-Computation)\n", " - [Specifying Layout](#Specifying-Layout)\n", " - [Changing Model Layout](#Changing-Model-Layout)\n", " - [Specifying Mean and Scale Values](#Specifying-Mean-and-Scale-Values)\n", " - [Reversing Input Channels](#Reversing-Input-Channels)\n", " - [Compressing a Model to FP16](#Compressing-a-Model-to-FP16)\n", "- [Convert Models Represented as Python Objects](#Convert-Models-Represented-as-Python-Objects)\n", "\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# Required imports. Please execute this cell first.\n", "%pip install -q --extra-index-url https://download.pytorch.org/whl/cpu \\\n", "\"openvino-dev>=2024.0.0\" \"requests\" \"tqdm\" \"transformers[onnx]>=4.21.1\" \"torch>=2.1\" \"torchvision\"" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## OpenVINO IR format\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "OpenVINO [Intermediate Representation (IR)](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) is the proprietary model format of OpenVINO. It is produced after converting a model with model conversion API. Model conversion API translates the frequently used deep learning operations to their respective similar representation in OpenVINO and tunes them with the associated weights and biases from the trained model. The resulting IR contains two files: an `.xml` file, containing information about network topology, and a `.bin` file, containing the weights and biases binary data." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## IR preparation with Python conversion API and Model Optimizer command-line tool\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "There are two ways to convert a model from the original framework format to OpenVINO IR: Python conversion API and Model Optimizer command-line tool. You can choose one of them based on whichever is most convenient for you. There should not be any differences in the results of model conversion if the same set of parameters is used. For more details, refer to [Model Preparation](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html) documentation." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "usage: main.py [options]\n", "\n", "optional arguments:\n", " -h, --help show this help message and exit\n", " --framework FRAMEWORK\n", " Name of the framework used to train the input model.\n", "\n", "Framework-agnostic parameters:\n", " --model_name MODEL_NAME, -n MODEL_NAME\n", " Model_name parameter passed to the final create_ir\n", " transform. This parameter is used to name a network in\n", " a generated IR and output .xml/.bin files.\n", " --output_dir OUTPUT_DIR, -o OUTPUT_DIR\n", " Directory that stores the generated IR. By default, it\n", " is the directory from where the Model Conversion is\n", " launched.\n", " --freeze_placeholder_with_value FREEZE_PLACEHOLDER_WITH_VALUE\n", " Replaces input layer with constant node with provided\n", " value, for example: \"node_name->True\". It will be\n", " DEPRECATED in future releases. Use \"input\" option to\n", " specify a value for freezing.\n", " --static_shape Enables IR generation for fixed input shape (folding\n", " `ShapeOf` operations and shape-calculating sub-graphs\n", " to `Constant`). Changing model input shape using the\n", " OpenVINO Runtime API in runtime may fail for such an\n", " IR.\n", " --use_new_frontend Force the usage of new Frontend for model conversion\n", " into IR. The new Frontend is C++ based and is\n", " available for ONNX* and PaddlePaddle* models. Model\n", " Conversion API uses new Frontend for ONNX* and\n", " PaddlePaddle* by default that means `use_new_frontend`\n", " and `use_legacy_frontend` options are not specified.\n", " --use_legacy_frontend\n", " Force the usage of legacy Frontend for model\n", " conversion into IR. The legacy Frontend is Python\n", " based and is available for TensorFlow*, ONNX*, MXNet*,\n", " Caffe*, and Kaldi* models.\n", " --input_model INPUT_MODEL, -w INPUT_MODEL, -m INPUT_MODEL\n", " Tensorflow*: a file with a pre-trained model (binary\n", " or text .pb file after freezing). Caffe*: a model\n", " proto file with model weights.\n", " --input INPUT Quoted list of comma-separated input nodes names with\n", " shapes, data types, and values for freezing. The order\n", " of inputs in converted model is the same as order of\n", " specified operation names. The shape and value are\n", " specified as comma-separated lists. The data type of\n", " input node is specified in braces and can have one of\n", " the values: f64 (float64), f32 (float32), f16\n", " (float16), i64 (int64), i32 (int32), u8 (uint8),\n", " boolean (bool). Data type is optional. If it's not\n", " specified explicitly then there are two options: if\n", " input node is a parameter, data type is taken from the\n", " original node dtype, if input node is not a parameter,\n", " data type is set to f32. Example, to set `input_1`\n", " with shape [1,100], and Parameter node `sequence_len`\n", " with scalar input with value `150`, and boolean input\n", " `is_training` with `False` value use the following\n", " format:\n", " \"input_1[1,100],sequence_len->150,is_training->False\".\n", " Another example, use the following format to set input\n", " port 0 of the node `node_name1` with the shape [3,4]\n", " as an input node and freeze output port 1 of the node\n", " \"node_name2\" with the value [20,15] of the int32 type\n", " and shape [2]:\n", " \"0:node_name1[3,4],node_name2:1[2]{i32}->[20,15]\".\n", " --output OUTPUT The name of the output operation of the model or list\n", " of names. 
For TensorFlow*, do not add :0 to this\n", " name.The order of outputs in converted model is the\n", " same as order of specified operation names.\n", " --input_shape INPUT_SHAPE\n", " Input shape(s) that should be fed to an input node(s)\n", " of the model. Shape is defined as a comma-separated\n", " list of integer numbers enclosed in parentheses or\n", " square brackets, for example [1,3,227,227] or\n", " (1,227,227,3), where the order of dimensions depends\n", " on the framework input layout of the model. For\n", " example, [N,C,H,W] is used for ONNX* models and\n", " [N,H,W,C] for TensorFlow* models. The shape can\n", " contain undefined dimensions (? or -1) and should fit\n", " the dimensions defined in the input operation of the\n", " graph. Boundaries of undefined dimension can be\n", " specified with ellipsis, for example\n", " [1,1..10,128,128]. One boundary can be undefined, for\n", " example [1,..100] or [1,3,1..,1..]. If there are\n", " multiple inputs in the model, --input_shape should\n", " contain definition of shape for each input separated\n", " by a comma, for example: [1,3,227,227],[2,4] for a\n", " model with two inputs with 4D and 2D shapes.\n", " Alternatively, specify shapes with the --input option.\n", " --example_input EXAMPLE_INPUT\n", " Sample of model input in original framework. For\n", " PyTorch it can be torch.Tensor. For Tensorflow it can\n", " be tf.Tensor or numpy.ndarray. For PaddlePaddle it can\n", " be Paddle Variable.\n", " --batch BATCH, -b BATCH\n", " Set batch size. It applies to 1D or higher dimension\n", " inputs. The default dimension index for the batch is\n", " zero. Use a label 'n' in --layout or --source_layout\n", " option to set the batch dimension. For example,\n", " \"x(hwnc)\" defines the third dimension to be the batch.\n", " --mean_values MEAN_VALUES\n", " Mean values to be used for the input image per\n", " channel. Values to be provided in the (R,G,B) or\n", " [R,G,B] format. Can be defined for desired input of\n", " the model, for example: \"--mean_values\n", " data[255,255,255],info[255,255,255]\". The exact\n", " meaning and order of channels depend on how the\n", " original model was trained.\n", " --scale_values SCALE_VALUES\n", " Scale values to be used for the input image per\n", " channel. Values are provided in the (R,G,B) or [R,G,B]\n", " format. Can be defined for desired input of the model,\n", " for example: \"--scale_values\n", " data[255,255,255],info[255,255,255]\". The exact\n", " meaning and order of channels depend on how the\n", " original model was trained. If both --mean_values and\n", " --scale_values are specified, the mean is subtracted\n", " first and then scale is applied regardless of the\n", " order of options in command line.\n", " --scale SCALE, -s SCALE\n", " All input values coming from original network inputs\n", " will be divided by this value. When a list of inputs\n", " is overridden by the --input parameter, this scale is\n", " not applied for any input that does not match with the\n", " original input of the model. If both --mean_values and\n", " --scale are specified, the mean is subtracted first\n", " and then scale is applied regardless of the order of\n", " options in command line.\n", " --reverse_input_channels [REVERSE_INPUT_CHANNELS]\n", " Switch the input channels order from RGB to BGR (or\n", " vice versa). Applied to original inputs of the model\n", " if and only if a number of channels equals 3. 
When\n", " --mean_values/--scale_values are also specified,\n", " reversing of channels will be applied to user's input\n", " data first, so that numbers in --mean_values and\n", " --scale_values go in the order of channels used in the\n", " original model. In other words, if both options are\n", " specified, then the data flow in the model looks as\n", " following: Parameter -> ReverseInputChannels -> Mean\n", " apply-> Scale apply -> the original body of the model.\n", " --source_layout SOURCE_LAYOUT\n", " Layout of the input or output of the model in the\n", " framework. Layout can be specified in the short form,\n", " e.g. nhwc, or in complex form, e.g. \"[n,h,w,c]\".\n", " Example for many names: \"in_name1([n,h,w,c]),in_name2(\n", " nc),out_name1(n),out_name2(nc)\". Layout can be\n", " partially defined, \"?\" can be used to specify\n", " undefined layout for one dimension, \"...\" can be used\n", " to specify undefined layout for multiple dimensions,\n", " for example \"?c??\", \"nc...\", \"n...c\", etc.\n", " --target_layout TARGET_LAYOUT\n", " Same as --source_layout, but specifies target layout\n", " that will be in the model after processing by\n", " ModelOptimizer.\n", " --layout LAYOUT Combination of --source_layout and --target_layout.\n", " Can't be used with either of them. If model has one\n", " input it is sufficient to specify layout of this\n", " input, for example --layout nhwc. To specify layouts\n", " of many tensors, names must be provided, for example:\n", " --layout \"name1(nchw),name2(nc)\". It is possible to\n", " instruct ModelOptimizer to change layout, for example:\n", " --layout \"name1(nhwc->nchw),name2(cn->nc)\". Also \"*\"\n", " in long layout form can be used to fuse dimensions,\n", " for example \"[n,c,...]->[n*c,...]\".\n", " --compress_to_fp16 [COMPRESS_TO_FP16]\n", " If the original model has FP32 weights or biases, they\n", " are compressed to FP16. All intermediate data is kept\n", " in original precision. Option can be specified alone\n", " as \"--compress_to_fp16\", or explicit True/False values\n", " can be set, for example: \"--compress_to_fp16=False\",\n", " or \"--compress_to_fp16=True\"\n", " --extensions EXTENSIONS\n", " Paths or a comma-separated list of paths to libraries\n", " (.so or .dll) with extensions. For the legacy MO path\n", " (if `--use_legacy_frontend` is used), a directory or a\n", " comma-separated list of directories with extensions\n", " are supported. To disable all extensions including\n", " those that are placed at the default location, pass an\n", " empty string.\n", " --transform TRANSFORM\n", " Apply additional transformations. Usage: \"--transform\n", " transformation_name1[args],transformation_name2...\"\n", " where [args] is key=value pairs separated by\n", " semicolon. Examples: \"--transform LowLatency2\" or \"--\n", " transform Pruning\" or \"--transform\n", " LowLatency2[use_const_initializer=False]\" or \"--\n", " transform \"MakeStateful[param_res_names= {'input_name_\n", " 1':'output_name_1','input_name_2':'output_name_2'}]\"\n", " Available transformations: \"LowLatency2\",\n", " \"MakeStateful\", \"Pruning\"\n", " --transformations_config TRANSFORMATIONS_CONFIG\n", " Use the configuration file with transformations\n", " description. 
Transformations file can be specified as\n", " relative path from the current directory, as absolute\n", " path or as arelative path from the mo root directory.\n", " --silent [SILENT] Prevent any output messages except those that\n", " correspond to log level equals ERROR, that can be set\n", " with the following option: --log_level. By default,\n", " log level is already ERROR.\n", " --log_level {CRITICAL,ERROR,WARN,WARNING,INFO,DEBUG,NOTSET}\n", " Logger level of logging massages from MO. Expected one\n", " of ['CRITICAL', 'ERROR', 'WARN', 'WARNING', 'INFO',\n", " 'DEBUG', 'NOTSET'].\n", " --version Version of Model Optimizer\n", " --progress [PROGRESS]\n", " Enable model conversion progress display.\n", " --stream_output [STREAM_OUTPUT]\n", " Switch model conversion progress display to a\n", " multiline mode.\n", " --share_weights [SHARE_WEIGHTS]\n", " Map memory of weights instead reading files or share\n", " memory from input model. Currently, mapping feature is\n", " provided only for ONNX models that do not require\n", " fallback to the legacy ONNX frontend for the\n", " conversion.\n", "\n", "TensorFlow*-specific parameters:\n", " --input_model_is_text [INPUT_MODEL_IS_TEXT]\n", " TensorFlow*: treat the input model file as a text\n", " protobuf format. If not specified, the Model Optimizer\n", " treats it as a binary file by default.\n", " --input_checkpoint INPUT_CHECKPOINT\n", " TensorFlow*: variables file to load.\n", " --input_meta_graph INPUT_META_GRAPH\n", " Tensorflow*: a file with a meta-graph of the model\n", " before freezing\n", " --saved_model_dir SAVED_MODEL_DIR\n", " TensorFlow*: directory with a model in SavedModel\n", " format of TensorFlow 1.x or 2.x version.\n", " --saved_model_tags SAVED_MODEL_TAGS\n", " Group of tag(s) of the MetaGraphDef to load, in string\n", " format, separated by ','. For tag-set contains\n", " multiple tags, all tags must be passed in.\n", " --tensorflow_custom_operations_config_update TENSORFLOW_CUSTOM_OPERATIONS_CONFIG_UPDATE\n", " TensorFlow*: update the configuration file with node\n", " name patterns with input/output nodes information.\n", " --tensorflow_object_detection_api_pipeline_config TENSORFLOW_OBJECT_DETECTION_API_PIPELINE_CONFIG\n", " TensorFlow*: path to the pipeline configuration file\n", " used to generate model created with help of Object\n", " Detection API.\n", " --tensorboard_logdir TENSORBOARD_LOGDIR\n", " TensorFlow*: dump the input graph to a given directory\n", " that should be used with TensorBoard.\n", " --tensorflow_custom_layer_libraries TENSORFLOW_CUSTOM_LAYER_LIBRARIES\n", " TensorFlow*: comma separated list of shared libraries\n", " with TensorFlow* custom operations implementation.\n", "\n", "Caffe*-specific parameters:\n", " --input_proto INPUT_PROTO, -d INPUT_PROTO\n", " Deploy-ready prototxt file that contains a topology\n", " structure and layer attributes\n", " --caffe_parser_path CAFFE_PARSER_PATH\n", " Path to Python Caffe* parser generated from\n", " caffe.proto\n", " --k K Path to CustomLayersMapping.xml to register custom\n", " layers\n", " --disable_omitting_optional [DISABLE_OMITTING_OPTIONAL]\n", " Disable omitting optional attributes to be used for\n", " custom layers. Use this option if you want to transfer\n", " all attributes of a custom layer to IR. 
Default\n", " behavior is to transfer the attributes with default\n", " values and the attributes defined by the user to IR.\n", " --enable_flattening_nested_params [ENABLE_FLATTENING_NESTED_PARAMS]\n", " Enable flattening optional params to be used for\n", " custom layers. Use this option if you want to transfer\n", " attributes of a custom layer to IR with flattened\n", " nested parameters. Default behavior is to transfer the\n", " attributes without flattening nested parameters.\n", "\n", "MXNet-specific parameters:\n", " --input_symbol INPUT_SYMBOL\n", " Symbol file (for example, model-symbol.json) that\n", " contains a topology structure and layer attributes\n", " --nd_prefix_name ND_PREFIX_NAME\n", " Prefix name for args.nd and argx.nd files.\n", " --pretrained_model_name PRETRAINED_MODEL_NAME\n", " Name of a pretrained MXNet model without extension and\n", " epoch number. This model will be merged with args.nd\n", " and argx.nd files\n", " --save_params_from_nd [SAVE_PARAMS_FROM_ND]\n", " Enable saving built parameters file from .nd files\n", " --legacy_mxnet_model [LEGACY_MXNET_MODEL]\n", " Enable MXNet loader to make a model compatible with\n", " the latest MXNet version. Use only if your model was\n", " trained with MXNet version lower than 1.0.0\n", " --enable_ssd_gluoncv [ENABLE_SSD_GLUONCV]\n", " Enable pattern matchers replacers for converting\n", " gluoncv ssd topologies.\n", "\n", "Kaldi-specific parameters:\n", " --counts COUNTS Path to the counts file\n", " --remove_output_softmax [REMOVE_OUTPUT_SOFTMAX]\n", " Removes the SoftMax layer that is the output layer\n", " --remove_memory [REMOVE_MEMORY]\n", " Removes the Memory layer and use additional inputs\n", " outputs instead\n" ] } ], "source": [ "# Model Optimizer CLI tool parameters description\n", "\n", "! mo --help" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Optional parameters:\n", " --help \n", "\t\t\tPrint available parameters.\n", " --framework \n", "\t\t\tName of the framework used to train the input model.\n", "\n", "Framework-agnostic parameters:\n", " --input_model \n", "\t\t\tModel object in original framework (PyTorch, Tensorflow) or path to\n", "\t\t\tmodel file.\n", "\t\t\tTensorflow*: a file with a pre-trained model (binary or text .pb file\n", "\t\t\tafter freezing).\n", "\t\t\tCaffe*: a model proto file with model weights\n", "\t\t\t\n", "\t\t\tSupported formats of input model:\n", "\t\t\t\n", "\t\t\tPaddlePaddle\n", "\t\t\tpaddle.hapi.model.Model\n", "\t\t\tpaddle.fluid.dygraph.layers.Layer\n", "\t\t\tpaddle.fluid.executor.Executor\n", "\t\t\t\n", "\t\t\tPyTorch\n", "\t\t\ttorch.nn.Module\n", "\t\t\ttorch.jit.ScriptModule\n", "\t\t\ttorch.jit.ScriptFunction\n", "\t\t\t\n", "\t\t\tTF\n", "\t\t\ttf.compat.v1.Graph\n", "\t\t\ttf.compat.v1.GraphDef\n", "\t\t\ttf.compat.v1.wrap_function\n", "\t\t\ttf.compat.v1.session\n", "\t\t\t\n", "\t\t\tTF2 / Keras\n", "\t\t\ttf.keras.Model\n", "\t\t\ttf.keras.layers.Layer\n", "\t\t\ttf.function\n", "\t\t\ttf.Module\n", "\t\t\ttf.train.checkpoint\n", " --input \n", "\t\t\tInput can be set by passing a list of InputCutInfo objects or by a list\n", "\t\t\tof tuples. Each tuple can contain optionally input name, input\n", "\t\t\ttype or input shape. Example: input=(\"op_name\", PartialShape([-1,\n", "\t\t\t3, 100, 100]), Type(np.float32)). Alternatively input can be set by\n", "\t\t\ta string or list of strings of the following format. 
Quoted list of comma-separated\n", "\t\t\tinput nodes names with shapes, data types, and values for freezing.\n", "\t\t\tIf operation names are specified, the order of inputs in converted\n", "\t\t\tmodel will be the same as order of specified operation names (applicable\n", "\t\t\tfor TF2, ONNX, MxNet).\n", "\t\t\tThe shape and value are specified as comma-separated lists. The data\n", "\t\t\ttype of input node is specified\n", "\t\t\tin braces and can have one of the values: f64 (float64), f32 (float32),\n", "\t\t\tf16 (float16), i64\n", "\t\t\t(int64), i32 (int32), u8 (uint8), boolean (bool). Data type is optional.\n", "\t\t\tIf it's not specified explicitly then there are two options: if input\n", "\t\t\tnode is a parameter, data type is taken from the original node dtype,\n", "\t\t\tif input node is not a parameter, data type is set to f32. Example, to set\n", "\t\t\t`input_1` with shape [1,100], and Parameter node `sequence_len` with\n", "\t\t\tscalar input with value `150`, and boolean input `is_training` with\n", "\t\t\t`False` value use the following format: \"input_1[1,100],sequence_len->150,is_training->False\".\n", "\t\t\tAnother example, use the following format to set input port 0 of the node\n", "\t\t\t`node_name1` with the shape [3,4] as an input node and freeze output\n", "\t\t\tport 1 of the node `node_name2` with the value [20,15] of the int32 type\n", "\t\t\tand shape [2]: \"0:node_name1[3,4],node_name2:1[2]{i32}->[20,15]\".\n", "\t\t\t\n", " --output \n", "\t\t\tThe name of the output operation of the model or list of names. For TensorFlow*,\n", "\t\t\tdo not add :0 to this name.The order of outputs in converted model is the\n", "\t\t\tsame as order of specified operation names.\n", " --input_shape \n", "\t\t\tInput shape(s) that should be fed to an input node(s) of the model. Input\n", "\t\t\tshapes can be defined by passing a list of objects of type PartialShape,\n", "\t\t\tShape, [Dimension, ...] or [int, ...] or by a string of the following\n", "\t\t\tformat. Shape is defined as a comma-separated list of integer numbers\n", "\t\t\tenclosed in parentheses or square brackets, for example [1,3,227,227]\n", "\t\t\tor (1,227,227,3), where the order of dimensions depends on the framework\n", "\t\t\tinput layout of the model. For example, [N,C,H,W] is used for ONNX* models\n", "\t\t\tand [N,H,W,C] for TensorFlow* models. The shape can contain undefined\n", "\t\t\tdimensions (? or -1) and should fit the dimensions defined in the input\n", "\t\t\toperation of the graph. Boundaries of undefined dimension can be specified\n", "\t\t\twith ellipsis, for example [1,1..10,128,128]. One boundary can be\n", "\t\t\tundefined, for example [1,..100] or [1,3,1..,1..]. If there are multiple\n", "\t\t\tinputs in the model, --input_shape should contain definition of shape\n", "\t\t\tfor each input separated by a comma, for example: [1,3,227,227],[2,4]\n", "\t\t\tfor a model with two inputs with 4D and 2D shapes. Alternatively, specify\n", "\t\t\tshapes with the --input option.\n", " --example_input \n", "\t\t\tSample of model input in original framework.\n", "\t\t\tFor PyTorch it can be torch.Tensor.\n", "\t\t\tFor Tensorflow it can be tf.Tensor or numpy.ndarray.\n", "\t\t\tFor PaddlePaddle it can be Paddle Variable.\n", " --batch \n", "\t\t\tSet batch size. 
It applies to 1D or higher dimension inputs.\n", "\t\t\tThe default dimension index for the batch is zero.\n", "\t\t\tUse a label 'n' in --layout or --source_layout option to set the batch\n", "\t\t\tdimension.\n", "\t\t\tFor example, \"x(hwnc)\" defines the third dimension to be the batch.\n", "\t\t\t\n", " --mean_values \n", "\t\t\tMean values to be used for the input image per channel. Mean values can\n", "\t\t\tbe set by passing a dictionary, where key is input name and value is mean\n", "\t\t\tvalue. For example mean_values={'data':[255,255,255],'info':[255,255,255]}.\n", "\t\t\tOr mean values can be set by a string of the following format. Values to\n", "\t\t\tbe provided in the (R,G,B) or [R,G,B] format. Can be defined for desired\n", "\t\t\tinput of the model, for example: \"--mean_values data[255,255,255],info[255,255,255]\".\n", "\t\t\tThe exact meaning and order of channels depend on how the original model\n", "\t\t\twas trained.\n", " --scale_values \n", "\t\t\tScale values to be used for the input image per channel. Scale values\n", "\t\t\tcan be set by passing a dictionary, where key is input name and value is\n", "\t\t\tscale value. For example scale_values={'data':[255,255,255],'info':[255,255,255]}.\n", "\t\t\tOr scale values can be set by a string of the following format. Values\n", "\t\t\tare provided in the (R,G,B) or [R,G,B] format. Can be defined for desired\n", "\t\t\tinput of the model, for example: \"--scale_values data[255,255,255],info[255,255,255]\".\n", "\t\t\tThe exact meaning and order of channels depend on how the original model\n", "\t\t\twas trained. If both --mean_values and --scale_values are specified,\n", "\t\t\tthe mean is subtracted first and then scale is applied regardless of\n", "\t\t\tthe order of options in command line.\n", " --scale \n", "\t\t\tAll input values coming from original network inputs will be divided\n", "\t\t\tby this value. When a list of inputs is overridden by the --input parameter,\n", "\t\t\tthis scale is not applied for any input that does not match with the original\n", "\t\t\tinput of the model. If both --mean_values and --scale are specified,\n", "\t\t\tthe mean is subtracted first and then scale is applied regardless of\n", "\t\t\tthe order of options in command line.\n", " --reverse_input_channels \n", "\t\t\tSwitch the input channels order from RGB to BGR (or vice versa). Applied\n", "\t\t\tto original inputs of the model if and only if a number of channels equals\n", "\t\t\t3. When --mean_values/--scale_values are also specified, reversing\n", "\t\t\tof channels will be applied to user's input data first, so that numbers\n", "\t\t\tin --mean_values and --scale_values go in the order of channels used\n", "\t\t\tin the original model. In other words, if both options are specified,\n", "\t\t\tthen the data flow in the model looks as following: Parameter -> ReverseInputChannels\n", "\t\t\t-> Mean apply-> Scale apply -> the original body of the model.\n", " --source_layout \n", "\t\t\tLayout of the input or output of the model in the framework. Layout can\n", "\t\t\tbe set by passing a dictionary, where key is input name and value is LayoutMap\n", "\t\t\tobject. Or layout can be set by string of the following format. Layout\n", "\t\t\tcan be specified in the short form, e.g. nhwc, or in complex form, e.g.\n", "\t\t\t\"[n,h,w,c]\". 
Example for many names: \"in_name1([n,h,w,c]),in_name2(nc),out_name1(n),out_name2(nc)\".\n", "\t\t\tLayout can be partially defined, \"?\" can be used to specify undefined\n", "\t\t\tlayout for one dimension, \"...\" can be used to specify undefined layout\n", "\t\t\tfor multiple dimensions, for example \"?c??\", \"nc...\", \"n...c\", etc.\n", "\t\t\t\n", " --target_layout \n", "\t\t\tSame as --source_layout, but specifies target layout that will be in\n", "\t\t\tthe model after processing by ModelOptimizer.\n", " --layout \n", "\t\t\tCombination of --source_layout and --target_layout. Can't be used\n", "\t\t\twith either of them. If model has one input it is sufficient to specify\n", "\t\t\tlayout of this input, for example --layout nhwc. To specify layouts\n", "\t\t\tof many tensors, names must be provided, for example: --layout \"name1(nchw),name2(nc)\".\n", "\t\t\tIt is possible to instruct ModelOptimizer to change layout, for example:\n", "\t\t\t--layout \"name1(nhwc->nchw),name2(cn->nc)\".\n", "\t\t\tAlso \"*\" in long layout form can be used to fuse dimensions, for example\n", "\t\t\t\"[n,c,...]->[n*c,...]\".\n", " --compress_to_fp16 \n", "\t\t\tIf the original model has FP32 weights or biases, they are compressed\n", "\t\t\tto FP16. All intermediate data is kept in original precision. Option\n", "\t\t\tcan be specified alone as \"--compress_to_fp16\", or explicit True/False\n", "\t\t\tvalues can be set, for example: \"--compress_to_fp16=False\", or \"--compress_to_fp16=True\"\n", "\t\t\t\n", " --extensions \n", "\t\t\tPaths to libraries (.so or .dll) with extensions, comma-separated\n", "\t\t\tlist of paths, objects derived from BaseExtension class or lists of\n", "\t\t\tobjects. For the legacy MO path (if `--use_legacy_frontend` is used),\n", "\t\t\ta directory or a comma-separated list of directories with extensions\n", "\t\t\tare supported. To disable all extensions including those that are placed\n", "\t\t\tat the default location, pass an empty string.\n", " --transform \n", "\t\t\tApply additional transformations. 'transform' can be set by a list\n", "\t\t\tof tuples, where the first element is transform name and the second element\n", "\t\t\tis transform parameters. For example: [('LowLatency2', {{'use_const_initializer':\n", "\t\t\tFalse}}), ...]\"--transform transformation_name1[args],transformation_name2...\"\n", "\t\t\twhere [args] is key=value pairs separated by semicolon. Examples:\n", "\t\t\t \"--transform LowLatency2\" or\n", "\t\t\t \"--transform Pruning\" or\n", "\t\t\t \"--transform LowLatency2[use_const_initializer=False]\" or\n", "\t\t\t \"--transform \"MakeStateful[param_res_names=\n", "\t\t\t{'input_name_1':'output_name_1','input_name_2':'output_name_2'}]\"\"\n", "\t\t\tAvailable transformations: \"LowLatency2\", \"MakeStateful\", \"Pruning\"\n", "\t\t\t\n", " --transformations_config \n", "\t\t\tUse the configuration file with transformations description or pass\n", "\t\t\tobject derived from BaseExtension class. 
Transformations file can\n", "\t\t\tbe specified as relative path from the current directory, as absolute\n", "\t\t\tpath or as relative path from the mo root directory.\n", " --silent \n", "\t\t\tPrevent any output messages except those that correspond to log level\n", "\t\t\tequals ERROR, that can be set with the following option: --log_level.\n", "\t\t\tBy default, log level is already ERROR.\n", " --log_level \n", "\t\t\tLogger level of logging massages from MO.\n", "\t\t\tExpected one of ['CRITICAL', 'ERROR', 'WARN', 'WARNING', 'INFO',\n", "\t\t\t'DEBUG', 'NOTSET'].\n", " --version \n", "\t\t\tVersion of Model Optimizer\n", " --progress \n", "\t\t\tEnable model conversion progress display.\n", " --stream_output \n", "\t\t\tSwitch model conversion progress display to a multiline mode.\n", " --share_weights \n", "\t\t\tMap memory of weights instead reading files or share memory from input\n", "\t\t\tmodel.\n", "\t\t\tCurrently, mapping feature is provided only for ONNX models\n", "\t\t\tthat do not require fallback to the legacy ONNX frontend for the conversion.\n", "\t\t\t\n", "\n", "PaddlePaddle-specific parameters:\n", " --example_output \n", "\t\t\tSample of model output in original framework. For PaddlePaddle it can\n", "\t\t\tbe Paddle Variable.\n", "\n", "TensorFlow*-specific parameters:\n", " --input_model_is_text \n", "\t\t\tTensorFlow*: treat the input model file as a text protobuf format. If\n", "\t\t\tnot specified, the Model Optimizer treats it as a binary file by default.\n", "\t\t\t\n", " --input_checkpoint \n", "\t\t\tTensorFlow*: variables file to load.\n", " --input_meta_graph \n", "\t\t\tTensorflow*: a file with a meta-graph of the model before freezing\n", " --saved_model_dir \n", "\t\t\tTensorFlow*: directory with a model in SavedModel format of TensorFlow\n", "\t\t\t1.x or 2.x version.\n", " --saved_model_tags \n", "\t\t\tGroup of tag(s) of the MetaGraphDef to load, in string format, separated\n", "\t\t\tby ','. For tag-set contains multiple tags, all tags must be passed in.\n", "\t\t\t\n", " --tensorflow_custom_operations_config_update \n", "\t\t\tTensorFlow*: update the configuration file with node name patterns\n", "\t\t\twith input/output nodes information.\n", " --tensorflow_object_detection_api_pipeline_config \n", "\t\t\tTensorFlow*: path to the pipeline configuration file used to generate\n", "\t\t\tmodel created with help of Object Detection API.\n", " --tensorboard_logdir \n", "\t\t\tTensorFlow*: dump the input graph to a given directory that should be\n", "\t\t\tused with TensorBoard.\n", " --tensorflow_custom_layer_libraries \n", "\t\t\tTensorFlow*: comma separated list of shared libraries with TensorFlow*\n", "\t\t\tcustom operations implementation.\n", "\n", "MXNet-specific parameters:\n", " --input_symbol \n", "\t\t\tSymbol file (for example, model-symbol.json) that contains a topology\n", "\t\t\tstructure and layer attributes\n", " --nd_prefix_name \n", "\t\t\tPrefix name for args.nd and argx.nd files.\n", " --pretrained_model_name \n", "\t\t\tName of a pretrained MXNet model without extension and epoch number.\n", "\t\t\tThis model will be merged with args.nd and argx.nd files\n", " --save_params_from_nd \n", "\t\t\tEnable saving built parameters file from .nd files\n", " --legacy_mxnet_model \n", "\t\t\tEnable MXNet loader to make a model compatible with the latest MXNet\n", "\t\t\tversion. 
Use only if your model was trained with MXNet version lower\n", "\t\t\tthan 1.0.0\n", " --enable_ssd_gluoncv \n", "\t\t\tEnable pattern matchers replacers for converting gluoncv ssd topologies.\n", "\t\t\t\n", "\n", "Caffe*-specific parameters:\n", " --input_proto \n", "\t\t\tDeploy-ready prototxt file that contains a topology structure and\n", "\t\t\tlayer attributes\n", " --caffe_parser_path \n", "\t\t\tPath to Python Caffe* parser generated from caffe.proto\n", " --k \n", "\t\t\tPath to CustomLayersMapping.xml to register custom layers\n", " --disable_omitting_optional \n", "\t\t\tDisable omitting optional attributes to be used for custom layers.\n", "\t\t\tUse this option if you want to transfer all attributes of a custom layer\n", "\t\t\tto IR. Default behavior is to transfer the attributes with default values\n", "\t\t\tand the attributes defined by the user to IR.\n", " --enable_flattening_nested_params \n", "\t\t\tEnable flattening optional params to be used for custom layers. Use\n", "\t\t\tthis option if you want to transfer attributes of a custom layer to IR\n", "\t\t\twith flattened nested parameters. Default behavior is to transfer\n", "\t\t\tthe attributes without flattening nested parameters.\n", "\n", "Kaldi-specific parameters:\n", " --counts \n", "\t\t\tPath to the counts file\n", " --remove_output_softmax \n", "\t\t\tRemoves the SoftMax layer that is the output layer\n", " --remove_memory \n", "\t\t\tRemoves the Memory layer and use additional inputs outputs instead\n", "\t\t\t\n", "\n" ] } ], "source": [ "# Python conversion API parameters description\n", "from openvino.tools import mo\n", "\n", "\n", "mo.convert_model(help=True)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Fetching example models\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "This notebook uses two models for conversion examples:\n", "\n", "* [Distilbert](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) NLP model from Hugging Face\n", "* [Resnet50](https://pytorch.org/vision/stable/models/generated/torchvision.models.resnet50.html#torchvision.models.ResNet50_Weights) CV classification model from torchvision" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from pathlib import Path\n", "\n", "# create a directory for models files\n", "MODEL_DIRECTORY_PATH = Path(\"model\")\n", "MODEL_DIRECTORY_PATH.mkdir(exist_ok=True)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Fetch [distilbert](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) NLP model from Hugging Face and export it in ONNX format:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2023-10-30 09:15:39.568630: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. 
To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n", "2023-10-30 09:15:39.665054: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n", "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2023-10-30 09:15:41.296271: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n", "/home/ea/work/ov_venv/lib/python3.8/site-packages/torch/cuda/__init__.py:138: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11080). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)\n", " return torch._C._cuda_getDeviceCount() > 0\n", "/home/ea/work/ov_venv/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py:223: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\n", " mask, torch.tensor(torch.finfo(scores.dtype).min)\n" ] }, { "data": { "text/plain": [ "(['input_ids', 'attention_mask'], ['logits'])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from transformers import AutoModelForSequenceClassification, AutoTokenizer\n", "from transformers.onnx import export, FeaturesManager\n", "\n", "\n", "ONNX_NLP_MODEL_PATH = MODEL_DIRECTORY_PATH / \"distilbert.onnx\"\n", "\n", "# download model\n", "hf_model = AutoModelForSequenceClassification.from_pretrained(\"distilbert-base-uncased-finetuned-sst-2-english\")\n", "# initialize tokenizer\n", "tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased-finetuned-sst-2-english\")\n", "\n", "# get model onnx config function for output feature format sequence-classification\n", "model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(hf_model, feature=\"sequence-classification\")\n", "# fill onnx config based on pytorch model config\n", "onnx_config = model_onnx_config(hf_model.config)\n", "\n", "# export to onnx format\n", "export(\n", " preprocessor=tokenizer,\n", " model=hf_model,\n", " config=onnx_config,\n", " opset=onnx_config.default_onnx_opset,\n", " output=ONNX_NLP_MODEL_PATH,\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Fetch [Resnet50](https://pytorch.org/vision/stable/models/generated/torchvision.models.resnet50.html#torchvision.models.ResNet50_Weights) CV classification model from torchvision:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Downloading: \"https://download.pytorch.org/models/resnet50-11ad3fa6.pth\" to /home/ea/.cache/torch/hub/checkpoints/resnet50-11ad3fa6.pth\n", "100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 97.8M/97.8M [00:06<00:00, 14.9MB/s]\n" ] }, { "data": { 
"text/plain": [ "ResNet(\n", " (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n", " (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)\n", " (layer1): Sequential(\n", " (0): Bottleneck(\n", " (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " (downsample): Sequential(\n", " (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " )\n", " )\n", " (1): Bottleneck(\n", " (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " (2): Bottleneck(\n", " (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " )\n", " (layer2): Sequential(\n", " (0): Bottleneck(\n", " (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " (downsample): Sequential(\n", " (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)\n", " (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " )\n", " )\n", " (1): Bottleneck(\n", " (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True)\n", " (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " (2): Bottleneck(\n", " (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " (3): Bottleneck(\n", " (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " )\n", " (layer3): Sequential(\n", " (0): Bottleneck(\n", " (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " (downsample): Sequential(\n", " (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)\n", " (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " )\n", " )\n", " (1): Bottleneck(\n", " (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " (2): Bottleneck(\n", " (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " (3): Bottleneck(\n", " (conv1): Conv2d(1024, 256, 
kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " (4): Bottleneck(\n", " (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " (5): Bottleneck(\n", " (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " )\n", " (layer4): Sequential(\n", " (0): Bottleneck(\n", " (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " (downsample): Sequential(\n", " (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)\n", " (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " )\n", " )\n", " (1): Bottleneck(\n", " (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " (2): Bottleneck(\n", " (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n", " (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, 
affine=True, track_running_stats=True)\n", " (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", " (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", " (relu): ReLU(inplace=True)\n", " )\n", " )\n", " (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))\n", " (fc): Linear(in_features=2048, out_features=1000, bias=True)\n", ")" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from torchvision.models import resnet50, ResNet50_Weights\n", "\n", "\n", "# create model object\n", "pytorch_model = resnet50(weights=ResNet50_Weights.DEFAULT)\n", "# switch model from training to inference mode\n", "pytorch_model.eval()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Convert PyTorch model to ONNX format:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ONNX model exported to model/resnet.onnx\n" ] } ], "source": [ "import torch\n", "import warnings\n", "\n", "\n", "ONNX_CV_MODEL_PATH = MODEL_DIRECTORY_PATH / \"resnet.onnx\"\n", "\n", "if ONNX_CV_MODEL_PATH.exists():\n", " print(f\"ONNX model {ONNX_CV_MODEL_PATH} already exists.\")\n", "else:\n", " with warnings.catch_warnings():\n", " warnings.filterwarnings(\"ignore\")\n", " torch.onnx.export(model=pytorch_model, args=torch.randn(1, 3, 780, 520), f=ONNX_CV_MODEL_PATH)\n", " print(f\"ONNX model exported to {ONNX_CV_MODEL_PATH}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Basic conversion\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "To convert a model to OpenVINO IR, use the following command:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "! 
mo --input_model model/distilbert.onnx --output_dir model" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] } ], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "# mo.convert_model returns an openvino.runtime.Model object\n", "ov_model = mo.convert_model(ONNX_NLP_MODEL_PATH)\n", "\n", "# then the model can be serialized to *.xml & *.bin files\n", "from openvino.runtime import serialize\n", "\n", "serialize(ov_model, xml_path=MODEL_DIRECTORY_PATH / \"distilbert.xml\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Model conversion parameters\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Both the Python conversion API and the Model Optimizer command-line tool provide the following capabilities:\n", "* overriding original input shapes for model conversion with the `input` and `input_shape` parameters. [Setting Input Shapes guide](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/setting-input-shapes.html).\n", "* cutting off unwanted parts of a model (such as unsupported operations and training sub-graphs) using the `input` and `output` parameters to define new inputs and outputs of the converted model. [Cutting Off Parts of a Model guide](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model.html).\n", "* inserting additional input pre-processing sub-graphs into the converted model by using the `mean_values`, `scale_values`, `layout`, and other parameters. [Embedding Preprocessing Computation article](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_Additional_Optimization_Use_Cases.html).\n", "* compressing the model weights (for example, weights for convolutions and matrix multiplications) to FP16 data type using the `compress_to_fp16` compression parameter. [Compression of a Model to FP16 guide](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html).\n", "\n", "If the out-of-the-box conversion (only the `input_model` parameter is specified) is not successful, you may need to use the parameters mentioned above to override input shapes and cut the model." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Setting Input Shapes\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Model conversion is supported for models with dynamic input shapes that contain undefined dimensions. However, if the shape of data is not going to change from one inference request to another, it is recommended to set up static shapes (when all dimensions are fully defined) for the inputs. Doing it at this stage, instead of at runtime during inference, can be beneficial in terms of performance and memory consumption. To set up static shapes, model conversion API provides the `input` and `input_shape` parameters.\n", "\n", "For more information, refer to [Setting Input Shapes guide](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/setting-input-shapes.html)."
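, "\n", "\nAfter conversion (see the cells below), you can verify that the requested shapes were applied; for example, assuming `ov_model` is the model returned by `mo.convert_model`:\n", "\n", "```python\n", "# print each input's name and its (now static) shape as a quick sanity check\n", "for model_input in ov_model.inputs:\n", "    print(model_input.any_name, model_input.partial_shape)\n", "```"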
] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.bin\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "! mo --input_model model/distilbert.onnx --input input_ids,attention_mask --input_shape [1,128],[1,128] --output_dir model\n", "\n", "# alternatively\n", "! 
mo --input_model model/distilbert.onnx --input input_ids[1,128],attention_mask[1,128] --output_dir model" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(\n", " ONNX_NLP_MODEL_PATH,\n", " input=[\"input_ids\", \"attention_mask\"],\n", " input_shape=[[1, 128], [1, 128]],\n", ")\n", "\n", "# alternatively, specify the input shapes directly in the input parameter\n", "ov_model = mo.convert_model(ONNX_NLP_MODEL_PATH, input=[(\"input_ids\", [1, 128]), (\"attention_mask\", [1, 128])])" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The `input_shape` parameter allows overriding the original input shapes with ones compatible with a given model. Dynamic shapes, that is, shapes with dynamic dimensions, in the original model can be replaced with static shapes for the converted model, and vice versa. A dynamic dimension can be marked in the model conversion API parameters as `-1` or `?`. For example, launch model conversion for the ONNX DistilBERT model and specify a dynamic sequence length dimension for the inputs:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "! mo --input_model model/distilbert.onnx --input input_ids,attention_mask --input_shape [1,-1],[1,-1] --output_dir model" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(\n", " ONNX_NLP_MODEL_PATH,\n", " input=[\"input_ids\", \"attention_mask\"],\n", " input_shape=[[1, -1], [1, -1]],\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "To optimize memory consumption for models with undefined dimensions at runtime, model conversion API provides the capability to define boundaries of dimensions. The boundaries of an undefined dimension can be specified with an ellipsis (`..`) between the lower and upper bound. 
For example, launch model conversion for the ONNX DistilBERT model and specify a boundary for the sequence length dimension:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "! mo --input_model model/distilbert.onnx --input input_ids,attention_mask --input_shape [1,10..128],[1,10..128] --output_dir model" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(\n", " ONNX_NLP_MODEL_PATH,\n", " input=[\"input_ids\", \"attention_mask\"],\n", " input_shape=[[1, \"10..128\"], [1, \"10..128\"]],\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Cutting Off Parts of a Model\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "The following examples show when model cutting is useful or even required:\n", "\n", "* A model has pre- or post-processing parts that cannot be translated to existing OpenVINO operations.\n", "* A model has a training part that is convenient to keep in the model but is not used during inference.\n", "* A model is too complex to be converted at once because it contains many unsupported operations that cannot be easily implemented as custom layers.\n", "* A problem occurs with model conversion or inference in OpenVINO Runtime. To identify the issue, limit the conversion scope by iteratively searching for problematic areas in the model.\n", "* A single custom layer or a combination of custom layers is isolated for debugging purposes.\n", "\n", "For a more detailed description, refer to the [Cutting Off Parts of a Model guide](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model.html)." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. 
Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.bin\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/distilbert.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "# cut at the end\n", "! mo --input_model model/distilbert.onnx --output /classifier/Gemm --output_dir model\n", "\n", "\n", "# cut from the beginning\n", "! 
mo --input_model model/distilbert.onnx --input /distilbert/embeddings/LayerNorm/Add_1,attention_mask --output_dir model" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "# cut at the end\n", "ov_model = mo.convert_model(ONNX_NLP_MODEL_PATH, output=\"/classifier/Gemm\")\n", "\n", "# cut from the beginning\n", "ov_model = mo.convert_model(\n", " ONNX_NLP_MODEL_PATH,\n", " input=[\"/distilbert/embeddings/LayerNorm/Add_1\", \"attention_mask\"],\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Embedding Preprocessing Computation\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Input data for inference can differ from the training dataset and may require additional preprocessing before inference. To accelerate the whole pipeline, including preprocessing and inference, model conversion API provides special parameters such as `mean_values`, `scale_values`, `reverse_input_channels`, and `layout`. Based on these parameters, model conversion API generates OpenVINO IR with additionally inserted sub-graphs that perform the defined preprocessing. This preprocessing block can perform mean-scale normalization of the input data, reverse the data along the channel dimension, and change the data layout. For more information on preprocessing, refer to the [Embedding Preprocessing Computation article](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_Additional_Optimization_Use_Cases.html)." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### Specifying Layout\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Layout defines the meaning of dimensions in a shape and can be specified for both inputs and outputs. Some preprocessing requires setting input layouts, for example, setting a batch dimension, applying mean or scale values, and reversing input channels (BGR<->RGB). For the layout syntax, check the [Layout API overview](https://docs.openvino.ai/2024/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview.html). To specify the layout, you can use the `layout` option followed by the layout value.\n", "\n", "The following command specifies the `NCHW` layout for a PyTorch ResNet50 model that was exported to the ONNX format:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. 
While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "! mo --input_model model/resnet.onnx --layout nchw --output_dir model" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, layout=\"nchw\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### Changing Model Layout\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Changing the model layout may be necessary if it differs from the one presented by input data. Use either `layout` or `source_layout` with `target_layout` to change the layout." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.bin\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. 
If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "! mo --input_model model/resnet.onnx --layout \"nchw->nhwc\" --output_dir model\n", "\n", "# alternatively use source_layout and target_layout parameters\n", "! mo --input_model model/resnet.onnx --source_layout nchw --target_layout nhwc --output_dir model" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, layout=\"nchw->nhwc\")\n", "\n", "# alternatively use source_layout and target_layout parameters\n", "ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, source_layout=\"nchw\", target_layout=\"nhwc\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### Specifying Mean and Scale Values\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Model conversion API has the following parameters to specify the values: `mean_values`, `scale_values`, `scale`. Using these parameters, model conversion API embeds the corresponding preprocessing block for mean-value normalization of the input data and optimizes this block so that the preprocessing takes negligible time for inference." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. 
While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.bin\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "! mo --input_model model/resnet.onnx --mean_values [123,117,104] --scale 255 --output_dir model\n", "\n", "! mo --input_model model/resnet.onnx --mean_values [123,117,104] --scale_values [255,255,255] --output_dir model" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, mean_values=[123, 117, 104], scale=255)\n", "\n", "ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, mean_values=[123, 117, 104], scale_values=[255, 255, 255])" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### Reversing Input Channels\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Sometimes, the input images for your application are in the `RGB` (or `BGR`) format, while the model was trained on images in the `BGR` (or `RGB`) format, that is, with the opposite order of color channels. In this case, it is important to preprocess the input images by reversing the color channels before inference." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. 
Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "! mo --input_model model/resnet.onnx --reverse_input_channels --output_dir model" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, reverse_input_channels=True)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Compressing a Model to FP16\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Optionally all relevant floating-point weights can be compressed to FP16 data type during the model conversion, creating a compressed FP16 model. This smaller model occupies about half of the original space in the file system. While the compression may introduce a drop in accuracy, for most models, this decrease is negligible." ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.\n", "Find more information about compression to FP16 at https://docs.openvino.ai/2024/openvino-workflow/model-preparation/conversion-parameters.html\n", "[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. 
While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n", "Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n", "[ SUCCESS ] Generated IR version 11 model.\n", "[ SUCCESS ] XML file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.xml\n", "[ SUCCESS ] BIN file: /home/ea/work/openvino_notebooks/notebooks/convert-to-openvino/model/resnet.bin\n" ] } ], "source": [ "# Model Optimizer CLI\n", "\n", "! mo --input_model model/resnet.onnx --compress_to_fp16=True --output_dir model" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, compress_to_fp16=True)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Convert Models Represented as Python Objects\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "The Python conversion API accepts Python model objects, such as a PyTorch model or a TensorFlow Keras model, directly, without saving them to files and without leaving the training environment (a Jupyter Notebook or training scripts)." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11.\n" ] } ], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(pytorch_model)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "`convert_model()` accepts all parameters available in the MO command-line tool. Parameters can be specified with Python classes or their string analogs, similar to the command-line tool." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "# Python conversion API\n", "from openvino.tools import mo\n", "\n", "\n", "ov_model = mo.convert_model(\n", " pytorch_model,\n", " input_shape=[1, 3, 100, 100],\n", " mean_values=[127, 127, 127],\n", " layout=\"nchw\",\n", ")\n", "\n", "ov_model = mo.convert_model(pytorch_model, source_layout=\"nchw\", target_layout=\"nhwc\")\n", "\n", "ov_model = mo.convert_model(pytorch_model, compress_to_fp16=True, reverse_input_channels=True)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "openvino_notebooks": { "imageUrl": "", "tags": { "categories": [ "Convert", "API Overview" ], "libraries": [], "other": [], "tasks": [ "Image Classification", "Text Classification" ] } }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 4 }