 

Continuously train CoreML model after shipping

In looking over the new Core ML API, I don't see any way to continue training a model after generating the .mlmodel and bundling it in your app. This makes me think I won't be able to perform machine learning on my users' content or actions, because the model must be entirely trained beforehand.

Is there any way to add training data to my trained model after shipping?

EDIT: I just noticed you can initialize a generated model class from a URL, so perhaps I could post new training data to my server, regenerate the trained model, and download it into the app? That seems like it would work, but it completely defeats the privacy benefit of doing ML without the user's data ever leaving the device.

asked Jun 09 '17 by Patrick Goley

4 Answers

The .mlmodel file is compiled by Xcode into a .mlmodelc structure (which is actually a folder inside your app bundle).

Your app might be able to download a new .mlmodel from a server, but I don't think you can run the Core ML compiler from inside your app.

Maybe it is possible for your app to download the compiled .mlmodelc data from a server, copy it into the app's Documents directory, and instantiate the model from that. Try it out. ;-)

(This assumes the App Store does not do any additional processing on the .mlmodelc data before it packages up your app and ships it to the user.)
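A minimal sketch of the idea above, assuming your app has already downloaded a compiled .mlmodelc folder and saved it under a hypothetical name ("MyModel.mlmodelc") in the Documents directory:

```swift
import CoreML

// Sketch: instantiate a model from a previously-downloaded, already-compiled
// .mlmodelc directory in the app's Documents folder. The file name is
// hypothetical; substitute whatever your download code used.
let docs = FileManager.default.urls(for: .documentDirectory,
                                    in: .userDomainMask)[0]
let modelURL = docs.appendingPathComponent("MyModel.mlmodelc")

do {
    let model = try MLModel(contentsOf: modelURL)
    print(model.modelDescription)
} catch {
    print("Failed to load model: \(error)")
}
```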

answered Nov 11 '22 by Matthijs Hollemans


Apple has since added an API for on-device model compilation (`MLModel.compileModel(at:)`, iOS 11+). Now you can download a raw .mlmodel and compile it on the device.
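A sketch of that flow, assuming the raw .mlmodel has already been downloaded to `downloadedURL` (a placeholder here):

```swift
import CoreML

// Sketch: compile a downloaded .mlmodel on-device and keep the result.
// `downloadedURL` is hypothetical; point it at wherever your download landed.
let downloadedURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("Model.mlmodel")

do {
    // compileModel(at:) writes the compiled .mlmodelc to a temporary
    // location, so move it somewhere permanent before using it.
    let compiledURL = try MLModel.compileModel(at: downloadedURL)

    let permanentURL = try FileManager.default
        .url(for: .applicationSupportDirectory, in: .userDomainMask,
             appropriateFor: nil, create: true)
        .appendingPathComponent(compiledURL.lastPathComponent)

    if FileManager.default.fileExists(atPath: permanentURL.path) {
        try FileManager.default.removeItem(at: permanentURL)
    }
    try FileManager.default.moveItem(at: compiledURL, to: permanentURL)

    let model = try MLModel(contentsOf: permanentURL)
    print(model.modelDescription)
} catch {
    print("Failed to compile or load model: \(error)")
}
```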

answered Nov 11 '22 by Shehroz


Core ML 3 now supports on-device model personalization: you can improve the model for each user while keeping their data private.

https://developer.apple.com/machine-learning/core-ml/
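A sketch of how that personalization looks with `MLUpdateTask` (iOS 13+), assuming you already have a compiled, updatable model at `modelURL` and your labeled examples wrapped in an `MLBatchProvider` (both supplied by the caller here):

```swift
import CoreML

// Sketch: retrain an updatable Core ML model on-device with user data.
// The caller supplies the compiled model's URL and a batch of labeled
// training examples; nothing leaves the device.
func personalize(modelURL: URL, trainingData: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: trainingData,
        configuration: nil,
        completionHandler: { context in
            // Overwrite the old model with the personalized one.
            try? context.model.write(to: modelURL)
        }
    )
    task.resume()
}
```

Note the model must be exported as *updatable* (e.g. via coremltools with marked updatable layers) for `MLUpdateTask` to accept it.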

answered Nov 11 '22 by Marcos Paulo


In order to update the model dynamically (without updating the whole app), you need to use MPS (Metal Performance Shaders) directly instead of relying on an .mlmodel, which must be bundled with the app.

That means you need to build the neural network manually in Swift (instead of using coremltools to convert an existing model), and feed in the weights for each layer yourself. It's a bit of work, but not rocket science.

This is a good video to watch if you want to know more about MPS.

https://developer.apple.com/videos/play/wwdc2017/608/
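To give a feel for the "feed weights per layer" part, here is a sketch of building a single convolution layer with the iOS 10-era MPS API (the weight and bias arrays are placeholders; in practice you'd parse them from a file fetched from your server):

```swift
import Metal
import MetalPerformanceShaders

// Sketch: one convolution layer built directly with MPS, with weights
// supplied at runtime instead of baked into an .mlmodel.
let device = MTLCreateSystemDefaultDevice()!

let desc = MPSCNNConvolutionDescriptor(
    kernelWidth: 3, kernelHeight: 3,
    inputFeatureChannels: 32, outputFeatureChannels: 64,
    neuronFilter: nil)

// Weights laid out as [outputChannels][kernelH][kernelW][inputChannels];
// zeros here stand in for values downloaded from your server.
var weights = [Float](repeating: 0, count: 64 * 3 * 3 * 32)
var biases  = [Float](repeating: 0, count: 64)

let conv = MPSCNNConvolution(
    device: device,
    convolutionDescriptor: desc,
    kernelWeights: &weights,
    biasTerms: &biases,
    flags: .none)
```

You would repeat this for every layer and then encode the layers onto a Metal command buffer for each inference pass.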

answered Nov 11 '22 by Satoshi Nakajima