I want to use Google's machine learning service with an App Engine application written in Python. The application needs to retrain TensorFlow models before every use, because of the investigative nature of the task (data clustering using Kohonen's SOM). I have the following questions:
Can an App Engine based app tell the machine learning service to train a model on some input data? Can an App Engine based app send an input vector to the service and get the result back (which cluster the vector belongs to)? If this is possible, how do I do it?
If none of this is possible, is there another architecture I can use to make an App Engine based app work with TensorFlow?
I am talking about the Google Cloud Machine Learning service.
Yes, you can use App Engine to communicate with Google Cloud Machine Learning (referred to as CloudML from here on).
To communicate with CloudML from Python you can use the Google API client library, which works with any Google service. This client library can also be used on App Engine; there is even specific documentation for that here.
I would recommend first experimenting with the API client library locally before testing it on App Engine. For the rest of this answer, I will make no distinction between using the client library locally or on App Engine.
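As a first local sanity check (a minimal sketch, assuming the projects.models.list method of the API and a placeholder project ID), you could build the client and list the models deployed in your project:

from oauth2client.client import GoogleCredentials
from googleapiclient import discovery

# Build a client for the CloudML API with the application default credentials.
credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1beta1', credentials=credentials)

# List the models deployed under your project (replace the placeholder project ID).
request = ml.projects().models().list(parent='projects/<your_project_id>')
print(request.execute())

If this runs locally without errors, your credentials and the client library are set up correctly.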
You mentioned two different kinds of operations you want to do with CloudML:
Updating a model on new data actually consists of two steps: first training the model on the new data (with or without CloudML), and then deploying the newly trained model to CloudML.
You can do both steps with the API client library from App Engine, but to reduce the complexity, I think you should start by following the prediction quickstart. Doing so will leave you with a newly trained and deployed model and will give you an understanding of the different steps involved.
Once you are familiar with the concepts and steps involved, you will see that you can store your new data on GCS and replace the different gcloud commands in the quickstart with their respective API calls, which you can make with the API client library (documentation).
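As a rough sketch of what that could look like (this is not taken from the quickstart; the bucket, package, module, model and job names below are placeholders, and the exact request fields should be checked against the projects.jobs.create and projects.models.versions.create documentation):

from oauth2client.client import GoogleCredentials
from googleapiclient import discovery

credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1beta1', credentials=credentials)
projectID = 'projects/<your_project_id>'

# 1) Submit a training job that runs your trainer package from GCS.
trainingJob = {
    'jobId': 'som_training_001',
    'trainingInput': {
        'packageUris': ['gs://<your_bucket>/trainer-0.1.tar.gz'],
        'pythonModule': 'trainer.task',
        'region': 'us-central1',
        'scaleTier': 'BASIC'
    }
}
ml.projects().jobs().create(parent=projectID, body=trainingJob).execute()

# 2) Once the job has finished and the exported model is on GCS,
#    create the model resource (first time only) and deploy a new version.
ml.projects().models().create(
    parent=projectID, body={'name': 'som_model'}).execute()
ml.projects().models().versions().create(
    parent=projectID + '/models/som_model',
    body={'name': 'v1', 'deploymentUri': 'gs://<your_bucket>/export/'}).execute()

Note that jobs.create returns immediately; you would poll the job status (or check the console) until training has finished before deploying the new version.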
If you have a deployed model (if not, follow the link from the previous step), you can easily communicate with CloudML to get either 1) batch predictions or 2) online predictions (the latter is in alpha).
Since you are using App Engine, I assume you are interested in online predictions (getting immediate results).
The minimal code required to do this:
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery

projectID = 'projects/<your_project_id>'
modelName = projectID + '/models/<your_model_name>'

# Authenticate with the application default credentials and build the CloudML client.
credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1beta1', credentials=credentials)

# Create a dictionary with the fields from the request body.
requestDict = {"instances": [
    {"image": [0.0, ..., 0.0, 0.0], "key": 0}
]}

# Create a request to call projects.predict.
request = ml.projects().predict(
    name=modelName,
    body=requestDict)
response = request.execute()
Here {"image": <image_array>, "key": <key_id>} is the input format you have defined for the deployed model via the link from the previous step. The call returns a response containing the expected output of the model.
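For completeness: if you ever prefer batch predictions over online predictions, those are submitted as a job rather than as a direct request. A minimal sketch, reusing the ml, projectID and modelName variables from the snippet above and using placeholder GCS paths (verify the exact fields against the projects.jobs.create documentation):

# Batch prediction is submitted as a job instead of a direct request.
batchJob = {
    'jobId': 'som_batch_prediction_001',
    'predictionInput': {
        'modelName': modelName,
        'dataFormat': 'TEXT',  # or 'TF_RECORD', depending on your input files
        'inputPaths': ['gs://<your_bucket>/input/*'],
        'outputPath': 'gs://<your_bucket>/output/',
        'region': 'us-central1'
    }
}
ml.projects().jobs().create(parent=projectID, body=batchJob).execute()

The predictions are then written to the output path on GCS once the job completes.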