 

Is there a common format for neural networks?

Different teams use different libraries to train and run neural networks (Caffe, Torch, Theano, ...). This makes sharing difficult: each library has its own format to store networks, and you have to install a new library each time you want to test other teams' work.

I am looking for solutions to make this less tedious:

  • Is there a preferred (shared?) format to store neural networks?
  • Is there a service or library that can help handle different types of networks, or transform one type into another?

Thank you!

asked Nov 20 '15 by David

2 Answers

Is there a preferred (shared?) format to store neural networks?

Each library / framework has its own serialization, e.g. Caffe uses Protocol Buffers, Torch has a built-in serialization scheme and Theano objects can be serialized with pickle.
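
As a minimal illustration of the pickle route (the file name and the parameter dict below are stand-ins for real learned weights, not any framework's actual objects):

# Sketch: serialize/restore a dict of learned parameters with pickle,
# the way Theano users often persist model state.
import pickle

params = {"W1": [[0.1, 0.2], [0.3, 0.4]], "b1": [0.0, 0.0]}  # hypothetical weights

with open("model_params.pkl", "wb") as f:
    pickle.dump(params, f)

with open("model_params.pkl", "rb") as f:
    restored = pickle.load(f)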

In some cases, like OverFeat or darknet, the weights and biases are stored on disk in binary format via plain fwrite-s of the corresponding contiguous float (or double) arrays (see this answer for more details). Note that this does not cover the architecture of the network / model, which has to be known or represented separately (e.g. declared explicitly at load time).
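
A rough sketch of that raw-binary approach (not OverFeat's or darknet's actual code; the file name and layer shape are illustrative, and the loader must get the shapes from the separately-known architecture):

# Sketch: dump a layer's weights as a flat float32 array, raw bytes only.
import numpy as np

weights = np.random.rand(64, 3, 3, 3).astype(np.float32)  # hypothetical conv layer
weights.tofile("conv1.weights")  # no shape or dtype metadata is written

# Loading requires the architecture to be known separately:
restored = np.fromfile("conv1.weights", dtype=np.float32).reshape(64, 3, 3, 3)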

Also: a library like libccv stores the structure and the weights in a SQLite database.
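
A toy sketch of that idea (the table schema below is invented for illustration and is not libccv's actual schema):

# Sketch: store layer weights as BLOBs in a SQLite database.
import sqlite3
import numpy as np

conn = sqlite3.connect("model.db")
conn.execute("CREATE TABLE IF NOT EXISTS layers (name TEXT PRIMARY KEY, weights BLOB)")

w = np.random.rand(128, 64).astype(np.float32)  # hypothetical fully-connected layer
conn.execute("INSERT OR REPLACE INTO layers VALUES (?, ?)", ("fc1", w.tobytes()))
conn.commit()

blob = conn.execute("SELECT weights FROM layers WHERE name = ?", ("fc1",)).fetchone()[0]
restored = np.frombuffer(blob, dtype=np.float32).reshape(128, 64)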

Is there a service or library that can help handle different types of networks / or transform one type into another?

I don't think there is a single (meta) library that claims to do so, but there are distinct projects that provide convenient converters.

Some examples (non-exhaustive):

  • Caffe -> Torch: https://github.com/szagoruyko/loadcaffe
  • Torch -> Caffe: https://github.com/facebook/fb-caffe-exts
  • Caffe -> TensorFlow: https://github.com/ethereon/caffe-tensorflow

--

UPDATE (2017-09): two notable initiatives are:

(1) the ONNX format (a.k.a. Open Neural Network Exchange):

[...] a standard for representing deep learning models that enables models to be transferred between frameworks

See these blog posts.

(2) the CoreML format introduced by Apple:

[...] a public file format (.mlmodel) for a broad set of ML methods [...] Models in this format can be directly integrated into apps through Xcode.
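
A hedged sketch of that workflow using Apple's coremltools package (the source model and file names are placeholders, and the exact API varies between coremltools versions):

# Sketch: convert a tf.keras model to the CoreML .mlmodel format.
import coremltools as ct
from tensorflow import keras  # assumes a Keras model as the source

keras_model = keras.models.load_model("keras_model.h5")  # hypothetical model file

# Unified conversion API available in coremltools 4+.
mlmodel = ct.convert(keras_model)
mlmodel.save("MyModel.mlmodel")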

answered by deltheil


ONNX (Open Neural Network Exchange)

ONNX is an open-source AI ecosystem that provides a common format for neural networks.

  • It helps convert a deep learning model from one framework to another.

    Without ONNX, writing such a conversion by hand generally takes weeks or months; ONNX offers a simpler and faster conversion process.

    For all supported conversions, see here.

  • It makes deployment easier, since models are stored in a common, much more convenient way: ONNX acts as a data ingestion layer that transforms each input model into the same format. Otherwise, the models would be like a bunch of puzzle pieces that do not fit each other.


How to Use ONNX - Keras Conversion Example

Let's say you have your Keras model and you want to transform it to ONNX:

# Convert a Keras model to ONNX with the keras2onnx package
import onnx
import keras2onnx
from keras.models import load_model

model = load_model("keras_model.hdf5")  # .h5 is also OK!

onnx_model = keras2onnx.convert_keras(model, model.name)

onnx_model_file = 'output_model.onnx'
onnx.save_model(onnx_model, onnx_model_file)

Then load and run the saved model:

# Run inference on the saved ONNX model with onnxruntime
import onnx
import onnxruntime

onnx_model = onnx.load_model('output_model.onnx')

content = onnx_model.SerializeToString()
sess = onnxruntime.InferenceSession(content)

# x holds the input data (a numpy array, or a list of arrays for multi-input models)
x = x if isinstance(x, list) else [x]
feed = dict([(input.name, x[n]) for n, input in enumerate(sess.get_inputs())])

# Do inference
pred_onnx = sess.run(None, feed)

This example uses keras2onnx to convert the Keras model and onnxruntime to do inference.

Note: There are also lots of pre-trained models in the ONNX format. Check this out!


References:
1. https://towardsdatascience.com/onnx-made-easy-957e60d16e94
2. https://blog.codecentric.de/en/2019/08/portability-deep-learning-frameworks-onnx/
3. http://on-demand.gputechconf.com/gtc/2018/presentation/s8818-onnx-interoperable-deep-learning-presented-by-facebook.pdf
4. https://devblogs.nvidia.com/tensorrt-3-faster-tensorflow-inference/

answered by isydmr