---
license: cc
---
## Install the following Python libraries
```
pip3 install tensorflow
pip3 install tensorflowjs
pip3 install tf2onnx
pip3 install onnxruntime
pip3 install pillow
pip3 install optimum[exporters]
```
Switch to a NumPy version compatible with TensorFlow:
```
pip3 uninstall numpy
pip3 install numpy==1.23.5
```
## Node Install
Download and install the project dependencies.
```
npm install
```
### Summary of Commands:
- Run the Node training script to save the Layers Model.
- Convert tfjs_layers_model → tfjs_graph_model
- Convert the graph model to ONNX
- Validate the ONNX structure
- Test the model
# 1. Create TensorFlow Model in Node
This loops through the training images, using each image's base folder name as the label the image is associated with.
Once complete, `saved-model/model.json` is created.
```
node generate.js
```
# 2. Convert Model
Convert the layers model to a graph model; this step is required before tf2onnx can generate an ONNX model.
```
tensorflowjs_converter \
  --input_format=tfjs_layers_model \
  --output_format=tfjs_graph_model \
  ./saved-model/layers-model/model.json \
  ./saved-model/graph-model
```
# 3. Convert to ONNX Model
This converts the graph model to an ONNX model that can be used with transformers.js on the web or in Node.js.
```
python3 -m tf2onnx.convert --tfjs ./saved-model/graph-model/model.json --output ./saved-model/model.onnx
```
Note: I have not yet found a way to use Optimum with TensorFlow.js models.
# 4. Validate ONNX
Make sure the conversion worked and the model has no issues.
```
python3 validate_onnx.py
```
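The contents of `validate_onnx.py` are not shown here, but a structural check only needs the `onnx` package's built-in checker. A minimal sketch, assuming the output path from step 3 (not necessarily the repo's exact script):
```
import onnx

# Load the exported model (path assumed from step 3) and run the structural checker.
model = onnx.load("./saved-model/model.onnx")
onnx.checker.check_model(model)

# List graph inputs/outputs so the tensor names and shapes can be inspected.
print("Inputs: ", [i.name for i in model.graph.input])
print("Outputs:", [o.name for o in model.graph.output])
print("ONNX model is structurally valid.")
```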
# 5. Test ONNX Model (Python)
Update the image path in the code to point to an image and confirm it works as expected.
- I tested against one of the training images, which should give 1.
```
python3 test_image.py
```
Expected output:
```
Inference outputs: [array([[0., 1.]], dtype=float32)]
```
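For reference, the test is essentially an `onnxruntime` session fed a preprocessed image. A minimal sketch, not the repo's exact `test_image.py`; the image path is a placeholder, and the 224x224 size and 0–1 scaling are assumptions that must match how the model was trained:
```
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("./saved-model/model.onnx")
input_meta = session.get_inputs()[0]  # inspect this for the real input name/shape

# Preprocess the image the same way the training script did
# (224x224 RGB scaled to 0-1 is an assumption, not confirmed by the repo).
image = Image.open("./path/to/test-image.jpg").convert("RGB")
image = image.resize((224, 224))
pixels = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

outputs = session.run(None, {input_meta.name: pixels})
print("Inference outputs:", outputs)
```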
# 6. Test ONNX Model (Node.js, onnxruntime-node)
Update the image path in the code to point to an image and confirm it works as expected.
```
node onnxruntime-node
```
Expected output:
```
Inference outputs: Tensor {
  cpuData: Float32Array(2) [ 0, 1 ],
  dataLocation: 'cpu',
  type: 'float32',
  dims: [ 1, 2 ],
  size: 2
}
```