My requirement is to scan a fixed object, recognize it, highlight it, and then display its pre-fed parameters, such as height, width, circumference, etc.
I want to do all of this offline, using the camera only.
Please let me know if there is any solution or suggestion for this.
I have looked at the CraftAR SDK. It seems to meet my requirement for recognizing the object, but it stores the reference images on its server, which I don't want: I want the static image to be stored in the app itself.
Try using the TensorFlow Object Detection API. Link: TensorFlow Object Detection API
You can then customize your overall app behaviour to cover all your requirements. For example, once the TensorFlow Object Detection API reports (through some kind of callback in your code) that the object has been detected successfully, you can show a pop-up with all the details of that object. You can also customize the detection UI itself, i.e. how the app graphically highlights the detected object. A rough sketch of this idea follows.
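Below is a minimal sketch of that app-level glue, assuming the detector hands you a label and a confidence score. The listener interface, class names and the example measurements are hypothetical placeholders (not part of the TensorFlow API); the pre-fed parameters live entirely inside the app, so no internet is needed:

```java
// Minimal sketch (not an official API): maps a detected label to pre-fed
// parameters bundled with the app. All names and values are placeholders.
import java.util.HashMap;
import java.util.Map;

public class ObjectInfoLookup {

    /** Pre-fed parameters for a known object, stored locally in the app. */
    public static class ObjectInfo {
        public final double heightCm;
        public final double widthCm;
        public final double circumferenceCm;

        public ObjectInfo(double heightCm, double widthCm, double circumferenceCm) {
            this.heightCm = heightCm;
            this.widthCm = widthCm;
            this.circumferenceCm = circumferenceCm;
        }
    }

    // Static, in-app "database" of the fixed object(s) you trained on (example values).
    private static final Map<String, ObjectInfo> PREFED = new HashMap<>();
    static {
        PREFED.put("my_fixed_object", new ObjectInfo(30.0, 12.5, 41.0));
    }

    /** Hypothetical callback invoked from your detector code once an object is recognized. */
    public interface OnObjectDetectedListener {
        void onObjectDetected(String label, float confidence);
    }

    /** Example listener: look up the pre-fed parameters and hand them to the UI (e.g. a pop-up). */
    public static OnObjectDetectedListener popupListener() {
        return (label, confidence) -> {
            ObjectInfo info = PREFED.get(label);
            if (info != null && confidence > 0.6f) {
                System.out.printf("Detected %s (%.0f%%): h=%.1f cm, w=%.1f cm, c=%.1f cm%n",
                        label, confidence * 100, info.heightCm, info.widthCm, info.circumferenceCm);
                // In a real app: show an AlertDialog or an overlay instead of printing.
            }
        };
    }
}
```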
Details such as how it works offline and the resulting overall APK size can be better understood from the links given below:
1] Step by Step TensorFlow Object Detection API Tutorial — Part 1: Selecting a Model
2] How to train your own Object Detector with TensorFlow’s Object Detector API
As an overview, to detect objects offline you should limit yourself to your own set of data/objects (which also keeps the APK size down); since you have mentioned that you have a fixed object to detect, that is good. You then train a model on this set of objects (training can be done locally as well as in the cloud), for example using an SSD-MobileNet model. The result is your own trained model of those objects in the form of a retrained_graph.pb file, which goes into the assets folder of your Android project. That file is the final artifact that lets the app detect and classify camera frames in real time and display the results (the object details) based on the data/objects you provided, without the need for any sort of internet connection. For instance, the TF Detect demo can track objects (from 80 categories) in the camera preview in real time.
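As a minimal sketch of the on-device side, assuming a classification-style retrained_graph.pb (as produced by the TensorFlow For Poets codelab) sitting in the assets folder and the org.tensorflow:tensorflow-android Gradle dependency; the node names ("input", "final_result"), the input size and the label list are assumptions that depend on how the model was trained:

```java
// Sketch: run a retrained graph from assets, fully offline.
import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public class OfflineClassifier {
    private static final String MODEL_FILE = "file:///android_asset/retrained_graph.pb";
    private static final String INPUT_NODE = "input";         // assumption
    private static final String OUTPUT_NODE = "final_result"; // assumption
    private static final int INPUT_SIZE = 224;                // assumption (MobileNet 224x224)

    private final TensorFlowInferenceInterface inference;
    private final String[] labels; // e.g. loaded from retrained_labels.txt in assets

    public OfflineClassifier(AssetManager assets, String[] labels) {
        // Loads the frozen graph from the APK's assets; no network access is required.
        this.inference = new TensorFlowInferenceInterface(assets, MODEL_FILE);
        this.labels = labels;
    }

    /** Classifies one preprocessed camera frame (RGB floats, INPUT_SIZE x INPUT_SIZE). */
    public String classify(float[] pixels) {
        float[] outputs = new float[labels.length];
        inference.feed(INPUT_NODE, pixels, 1, INPUT_SIZE, INPUT_SIZE, 3);
        inference.run(new String[] {OUTPUT_NODE});
        inference.fetch(OUTPUT_NODE, outputs);

        // Pick the label with the highest score.
        int best = 0;
        for (int i = 1; i < outputs.length; i++) {
            if (outputs[i] > outputs[best]) best = i;
        }
        return labels[best];
    }
}
```

The same idea applies to a detection (SSD) model; only the input/output node names and the shape of the outputs differ.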
For further reference follow these links:
1] Google Inception Model
2] Tensorflow Object Detection API Models
3] Speed/Accuracy Trade-offs for Modern Convolutional Object Detectors
You can also optimize (compress) retrained_graph.pb into optimized_graph.pb, since the model file is the main thing that increases your APK size. A while ago, when I tried detecting 5 different objects (using TF Classify), each object's folder contained about 650 photographs and the 5 folders together were about 230 MB, yet the resulting retrained_graph.pb was only 5.5 MB (and it can be optimized further into optimized_graph.pb, reducing its size even more).
To start learning this from the beginner's level, I would suggest you go through these codelab links and understand how these two projects work, as I did:
1] TensorFlow For Poets
2] TensorFlow For Poets 2: Optimize for Mobile
Wishing you good luck.
The TensorFlow GitHub link below (master branch) includes almost everything:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android
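For the "highlight the object" part of your requirement, here is a rough sketch based on the Classifier / TensorFlowObjectDetectionAPIModel classes from that examples/android code (package org.tensorflow.demo). Those classes have to be copied from the repository into your project, their signatures may differ between versions, and the model/label file names below are assumptions:

```java
// Sketch: detect objects in a camera frame and draw a highlight box around each one.
// Classifier and TensorFlowObjectDetectionAPIModel come from the examples/android code above.
import android.content.res.AssetManager;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RectF;
import java.io.IOException;
import java.util.List;
import org.tensorflow.demo.Classifier;
import org.tensorflow.demo.TensorFlowObjectDetectionAPIModel;

public class DetectionOverlay {
    private static final String MODEL_FILE = "file:///android_asset/frozen_inference_graph.pb"; // assumption
    private static final String LABELS_FILE = "file:///android_asset/labels.txt";               // assumption
    private static final int INPUT_SIZE = 300;           // typical SSD-MobileNet input size
    private static final float MIN_CONFIDENCE = 0.6f;

    private final Classifier detector;
    private final Paint boxPaint = new Paint();

    public DetectionOverlay(AssetManager assets) throws IOException {
        detector = TensorFlowObjectDetectionAPIModel.create(assets, MODEL_FILE, LABELS_FILE, INPUT_SIZE);
        boxPaint.setStyle(Paint.Style.STROKE);
        boxPaint.setStrokeWidth(8f);
        boxPaint.setColor(Color.GREEN);
    }

    /** Runs detection on one camera frame and highlights each recognized object. */
    public void detectAndHighlight(Bitmap frame, Canvas overlayCanvas) {
        List<Classifier.Recognition> results = detector.recognizeImage(frame);
        for (Classifier.Recognition result : results) {
            RectF location = result.getLocation();
            if (location != null && result.getConfidence() >= MIN_CONFIDENCE) {
                overlayCanvas.drawRect(location, boxPaint); // highlight the detected object
                // result.getTitle() gives the label; use it to look up your pre-fed parameters.
            }
        }
    }
}
```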