How to work with machine learning algorithms in embedded systems?

I'm working on a project to detect (classify) human activities with an accelerometer, using an ARM Cortex-M0+ microcontroller (the Freedom board FRDM-KL25Z). I intend to predict the user's activity using machine learning.

The problem is that the Cortex-M0+ is not capable of running the training algorithms (and perhaps not heavy prediction algorithms either), so I would probably have to collect the data, train a model on my computer, and then embed the result somehow, which I don't really know how to do.

I saw some posts on the internet saying that you can generate a matrix of weights and embed it in a microcontroller, so that predicting becomes a straightforward function of the data you feed it. Would that be the right way to do it?
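For example, I imagine it would look something like this (just my rough guess, with made-up numbers):

    /* My guess at what those posts describe: weights trained on a PC,
     * then pasted into the firmware as a constant array. */
    static const float weights[3] = { 0.8f, -0.2f, 0.5f };

    /* score = dot product of the weights with the accelerometer readings */
    float score(const float x[3])
    {
        return weights[0] * x[0] + weights[1] * x[1] + weights[2] * x[2];
    }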

Anyway, my question is: how can I embed a classification algorithm in a microcontroller?

I hope you guys can help me and give me some guidance; I'm kind of lost here.

Thank you in advance.

asked May 09 '16 by Renan Fonteles

People also ask

How is machine learning used in embedded systems?

Machine learning (ML) enables electronic systems to learn autonomously from existing data and to use this acquired knowledge to independently make assessments, predictions and decisions. These kinds of applications are highly compute-intensive, so they are traditionally executed on PCs and cloud servers.

Is AI used in embedded systems?

Artificial intelligence (AI) is being incorporated into small, low-power embedded computing devices for consumer electronics, industry, and the Internet of Things (IoT).

What ML technologies can be installed on embedded systems?

There are many such frameworks, but some common ones include TensorFlow, Caffe, and PyTorch. ML frameworks can be used for model development and training, and can also be used to run inference engines with trained models at the edge.


1 Answer

I've been thinking about doing this myself to solve a problem that I've had a hard time developing a heuristic for by hand.

You're going to have to write your own machine-learning code, because as far as I know there aren't any machine-learning libraries out there suitable for low-end MCUs.

Depending on how hard the problem is, it may still be possible to develop and train a simple machine learning algorithm that performs well on a low-end MCU. After all, some of the older/simpler machine learning methods were used with satisfactory results on hardware with similar constraints.

Very generally, this is how I'd go about doing this:

  1. Get the (labelled) data to a PC (through UART, an SD card, or whatever means you have available).
  2. Experiment with the data in a machine learning toolkit (scikit-learn, Weka, Vowpal Wabbit, etc.). Make sure an off-the-shelf method can produce satisfactory results before moving forward.
  3. Experiment with feature engineering and selection. Try to get the smallest feature set possible, to save resources.
  4. Write your own machine learning method that will eventually be used on the embedded system. I would probably choose perceptrons or decision trees, because these don't necessarily need much memory. Since you have no FPU, I'd use only integers and fixed-point arithmetic (see the sketches after this list).
  5. Do the normal training procedure, i.e. use cross-validation to find the best tuning parameters, integer bit widths, radix positions, etc.
  6. Run the final trained predictor on the held-out testing set.
  7. If the performance of your trained predictor is satisfactory on the testing set, move the relevant code (the code that computes the predictions) and the trained model (e.g. the weights) to the MCU. The model/weights will not change, so they can be stored in flash (e.g. as a const array).
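To make steps 4 and 7 concrete, here is a minimal sketch of what the on-device side could look like, assuming a single perceptron trained offline, with its weights quantized to Q8.8 fixed point (each float weight multiplied by 256 and rounded, as part of step 5). Every number below is a placeholder; yours come out of your own training run:

    #include <stdint.h>

    #define NUM_FEATURES 4  /* e.g. mean and variance per accelerometer axis */

    /* "const" keeps the trained model in flash rather than RAM (step 7).
     * Q8.8 fixed point: each stored value is round(weight * 256). */
    static const int16_t weights[NUM_FEATURES] = { 312, -97, 45, 180 };
    static const int32_t bias = -2048;  /* round(bias * 256), same scale as the products */

    /* Binary classifier: returns 1 for one activity and 0 for the other.
     * features[] are plain integer features, so no FPU is needed anywhere. */
    int predict(const int16_t features[NUM_FEATURES])
    {
        int32_t acc = bias;
        for (int i = 0; i < NUM_FEATURES; i++) {
            acc += (int32_t)weights[i] * features[i];  /* 16x16 -> 32-bit product */
        }
        return acc >= 0;  /* step activation */
    }

For more than two activity classes, you can train one perceptron per class (one-vs-rest) and pick the class whose accumulator is largest.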
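If a perceptron doesn't separate your classes well enough, the decision-tree option from step 4 compiles down to nothing more than nested integer comparisons. Again, the feature indices, thresholds, and activity codes here are made up for illustration:

    #include <stdint.h>

    /* Hypothetical activity codes: 0 = sitting, 1 = standing,
     * 2 = walking, 3 = running. */
    int predict_tree(const int16_t f[4])
    {
        if (f[0] < 120)                   /* made-up threshold, e.g. on motion energy */
            return (f[2] < -30) ? 0 : 1;
        return (f[1] < 500) ? 2 : 3;
    }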
answered Nov 15 '22 by user43704