Predicting radius of circle with Neural Network

I am generating data points uniformly distributed inside circles, where the radius of each circle is also drawn uniformly. The circles look like this:

[image: the generated circles]

The uniformly distributed radii look like this:

[image: distribution of the generated radii]

My goal in this exercise is to predict the radius of each circle with a neural network, using only the x,y-coordinates of its data points as input. (I generate 1000 circles with their radii and data points for this.)
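
For reference, the generation looks roughly like this (a minimal sketch, not my exact code; the number of points per circle and the radius range are assumptions):

import numpy as np

# Assumed setup: 1000 circles, each with a uniformly drawn radius and
# points sampled uniformly inside the circle (names are illustrative).
N_CIRCLES = 1000
N_POINTS = 100   # points per circle (assumed)

rng = np.random.default_rng(0)
radii = rng.uniform(0.0, 2.0, size=N_CIRCLES)   # uniform radii (range assumed)

# Uniform sampling inside a disk: sqrt of a uniform variable for the
# distance, a uniform angle on [0, 2*pi).
r = radii[:, None] * np.sqrt(rng.uniform(size=(N_CIRCLES, N_POINTS)))
theta = rng.uniform(0.0, 2.0 * np.pi, size=(N_CIRCLES, N_POINTS))

X = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)   # (1000, N_POINTS, 2)
Y = radii.reshape(-1, 1)                                        # (1000, 1)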

But when I try this with the following architecture:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

model = Sequential()

model.add(Flatten(input_shape=(X.shape[1], 2)))   # flatten the (points, 2) coordinates
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(1))                               # single output: the radius
model.compile('adam', 'mse', metrics=['accuracy'])
model.summary()
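
Training is the usual split-and-fit pattern (again only a sketch; the split ratio, epochs and batch size below are placeholders, not my exact settings):

from sklearn.model_selection import train_test_split

# Assumed split and training settings (illustrative values).
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
model.fit(X_train, Y_train, epochs=50, batch_size=32, validation_split=0.1)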

I get these results:

model.predict(X_test)[:10]

array([[1.0524317],
       [0.9874419],
       [1.1739452],
       [1.0584671],
       [1.035887 ],
       [1.1663618],
       [1.1536952],
       [0.7245674],
       [1.0469185],
       [1.328696 ]], dtype=float32)

Y_test[:10]

array([[1.34369499],
       [0.9539995 ],
       [1.73399686],
       [1.56665937],
       [0.40627674],
       [1.73467557],
       [0.87950118],
       [1.13395495],
       [0.51870017],
       [1.28441215]])


As you can see, the predictions of the radius are quite poor.

What am I missing here? Or is a NN just not the best way to do this task?

[EDIT]

Now I tried it with 100k circles and their corresponding radii:

[image: true radius vs. predicted radius]

The plot shows the true radius against the predicted radius. With more training samples the predictions are much better, but for such a simple task there is still a lot of scatter around y = x.
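
The plot itself is just predicted against true radius with the y = x line drawn in, roughly like this (matplotlib assumed; the axis range is a placeholder):

import matplotlib.pyplot as plt

preds = model.predict(X_test).ravel()
plt.scatter(Y_test.ravel(), preds, s=4, alpha=0.3)
plt.plot([0, 2], [0, 2], 'r--', label='y = x')   # perfect-prediction line
plt.xlabel('true radius')
plt.ylabel('predicted radius')
plt.legend()
plt.show()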

asked Jun 28 '20 by jeffs


1 Answer

I have some suggestions, as your model seems to be overfitting:

  1. Since this is a regression task, replace the 'accuracy' metric with 'mse' (or 'mae') to evaluate your model; 'accuracy' is meant for classification problems and tells you little here.
  2. The last epoch of training does not always give the best weights. That is what callback functions are for: during training you can keep track of the best weights and use them at the end. See https://keras.io/api/callbacks/. In your case I recommend ModelCheckpoint to save only the best weights and EarlyStopping to stop training once your validation score stops improving.
  3. It is always good to check whether your model suffers from high variance or high bias. A good reference: https://towardsdatascience.com/understanding-the-bias-variance-tradeoff-165e6942b229. In this case you can add regularization terms to prevent large variance in the outputs. See https://keras.io/api/layers/regularizers/. Since regularization penalizes complex models, it forces a trade-off between simplicity and effectiveness.

This is not everything that could be done, but I believe these points will be of great help. A rough sketch putting them together is below.
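
Here is what the three points could look like applied to your architecture (only a sketch; the regularization strength, patience, epochs and file name are placeholders to tune):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.regularizers import l2

model = Sequential([
    Flatten(input_shape=(X.shape[1], 2)),
    # Point 3: L2 regularization penalizes large weights (strength is a placeholder).
    Dense(128, activation='relu', kernel_regularizer=l2(1e-3)),
    Dense(64, activation='relu', kernel_regularizer=l2(1e-3)),
    Dense(1),
])

# Point 1: regression metrics ('mse'/'mae') instead of 'accuracy'.
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

callbacks = [
    # Point 2: keep only the best weights and stop once validation stalls.
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
    EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
]

model.fit(X_train, Y_train, validation_split=0.1,
          epochs=200, batch_size=32, callbacks=callbacks)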

answered Nov 13 '22 by Danilo Nunes