The Open Neural Network Exchange (ONNX) format is an open standard for exchanging deep learning models. It makes models portable and helps prevent vendor lock-in.
The idea is that you can train a model with one tool stack and then deploy it with another for inference and prediction.
ONNX offers several benefits:
- You can train your model in one framework and use another framework for inference and deployment.
- No single framework has to cover everything from training to deployment (network protocols, efficient memory allocation, parameter sharing). Each framework can focus on one part of the pipeline, and ONNX glues them all together.
- Depending on the runtime environment, you can choose whichever framework works to your advantage.
- Any tool that exports ONNX models can benefit from ONNX-compatible runtimes and libraries designed to maximize performance on some of the best hardware in the industry.
ONNX is being developed by three industry giants and is supported by a number of other companies as well.
Read the full article for further insights, a real-life use case, and the basic challenges in building ONNX.
This is a companion discussion topic for the original entry at http://iq.opengenus.org/onnx-format-for-interchangeable-ai-model/