Microsoft and Facebook today announced the Open Neural Network Exchange, otherwise known as ONNX. The two companies have worked together on the open source project to provide “a shared model representation for interoperability and innovation in the AI framework ecosystem.” This, in turn, should help developers avoid the delays that normally come with converting machine learning models between frameworks.
According to Microsoft, ONNX will “empower AI developers to choose the framework that is best for the current stage of their project and easily switch between frameworks as their project evolves.” Microsoft and Facebook have already made the code and documentation for ONNX available on GitHub, and are calling on the community to get involved in its development.
For more technical details on ONNX, you can check out the project's website. Microsoft, though, is sharing the following technical summary of the project.
ONNX provides a definition of an extensible computation graph model, as well as definitions of built-in operators and standard data types. Initially we focus on the capabilities needed for inferencing (evaluation). Each computation dataflow graph is structured as a list of nodes that form an acyclic graph. Nodes have one or more inputs and one or more outputs. Each node is a call to an operator. The graph also has metadata to help document its purpose, author, etc. Operators are implemented externally to the graph, but the set of built-in operators is portable across frameworks. Every framework supporting ONNX will provide implementations of these operators on the applicable data types.
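To make that summary concrete, here is a minimal sketch of how such a graph might be assembled with the onnx Python package's helper module, assuming it is installed; the operator choice (a single Relu node) and names here are illustrative, not taken from the announcement.

```python
# Minimal sketch: build and check a one-node ONNX graph.
# Assumes the `onnx` Python package; API details may vary across releases.
import onnx
from onnx import helper, TensorProto

# Declare the graph's input and output as typed tensors (standard data types).
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 3])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 3])

# Each node is a call to a built-in operator; here, a single Relu.
relu_node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])

# Nodes are listed to form the acyclic dataflow graph, with a name as metadata.
graph = helper.make_graph([relu_node], "simple_relu_graph", [X], [Y])

# Wrap the graph in a model, verify it is well formed, and print it.
model = helper.make_model(graph, producer_name="onnx-example")
onnx.checker.check_model(model)
print(helper.printable_graph(model.graph))
```

Any framework that supports ONNX would then supply its own implementation of the Relu operator when running a graph like this.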
Microsoft is planning to include ONNX in an upcoming release of the Cognitive Toolkit, its open source framework for building deep neural networks. The company, along with Facebook, also plans to release reference implementations, examples, and other supporting resources in the near future.