I'm exploring the capabilities of Core ML for a project. Here's what I managed to do so far.

Compiling the model at run time:
let classifierName = "classifier1"
let fileName = "\(classifierName).mlmodel"
let documentsUrl = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
let destinationFileUrl = documentsUrl.appendingPathComponent(fileName)
let compiledModelUrl = try MLModel.compileModel(at: destinationFileUrl)
let model = try MLModel(contentsOf: compiledModelUrl)
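One caveat worth noting: `MLModel.compileModel(at:)` writes the compiled `.mlmodelc` bundle to a temporary directory, so if you want to reuse it across launches you should move it to permanent storage instead of recompiling every time. A minimal sketch of that, continuing from the code above (the `permanentUrl` name is my own):

```swift
import CoreML

// compileModel(at:) returns a URL in a temporary directory;
// move the compiled .mlmodelc somewhere permanent to avoid recompiling.
let compiledUrl = try MLModel.compileModel(at: destinationFileUrl)
let permanentUrl = documentsUrl.appendingPathComponent(compiledUrl.lastPathComponent)

if FileManager.default.fileExists(atPath: permanentUrl.path) {
    // Atomically swap in the freshly compiled model.
    _ = try FileManager.default.replaceItemAt(permanentUrl, withItemAt: compiledUrl)
} else {
    try FileManager.default.moveItem(at: compiledUrl, to: permanentUrl)
}

let model = try MLModel(contentsOf: permanentUrl)
```

On subsequent launches you can check for the `.mlmodelc` at `permanentUrl` and skip the compile step entirely.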
Now I would like to use my model to make predictions. In a sample app I tried embedding the .mlmodel file directly, which allows Xcode to generate a wrapper class at build time for instantiating the input:
let multiArr = try MLMultiArray(shape: [1], dataType: .double)
let input = classifier1Input(input: multiArr)
let output = try model.prediction(input: input)
But because I'm downloading the file from a server at run time, I don't have access to that wrapper class, only the generic API:
let predict = model?.prediction(from: <MLFeatureProvider>)
Any ideas?
Simplest solution: copy the Xcode-generated wrapper class into a Swift file and add it to your project. (The wrapper class also shows how to implement an MLFeatureProvider, which is exactly what prediction(from:) expects, etc.)
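Alternatively, if you'd rather not carry the wrapper around at all, you can build the feature provider yourself with `MLDictionaryFeatureProvider`. A hedged sketch; the feature names `"input"` and `"classLabel"` below are assumptions based on the question's wrapper class, so check them against `model.modelDescription`:

```swift
import CoreML

// Build the input feature by hand instead of using the generated wrapper.
let multiArr = try MLMultiArray(shape: [1], dataType: .double)
multiArr[0] = 42.0

// Keys must match model.modelDescription.inputDescriptionsByName;
// "input" here is an assumption, not guaranteed for your model.
let provider = try MLDictionaryFeatureProvider(dictionary: ["input": multiArr])
let output = try model.prediction(from: provider)

// Read the result back by output feature name ("classLabel" is also an assumption).
if let value = output.featureValue(for: "classLabel") {
    print(value)
}
```

This works for any downloaded model, since both the input and output sides are addressed by feature name rather than by a compile-time generated type.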