The number of parameters in a CONV layer is ((m * n * d) + 1) * k, where the + 1 accounts for the bias term of each filter. The same expression can be written as: ((filter width * filter height * number of filters in the previous layer) + 1) * number of filters.
Here, there are 27 parameters — 24 weights and 3 biases.
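As a minimal sketch, one configuration consistent with these numbers (24 weights + 3 biases = 27) is k = 3 filters of size m x n = 2 x 2 over a d = 2 channel input, since ((2 * 2 * 2) + 1) * 3 = 27; the layer sizes below are an assumption chosen only to reproduce that count:
from keras.models import Sequential
from keras.layers import Conv2D

# Hypothetical example: 3 filters of size 2x2 on a 2-channel input
model = Sequential()
model.add(Conv2D(filters=3, kernel_size=(2, 2), input_shape=(8, 8, 2)))

kernel, bias = model.layers[0].get_weights()
print(kernel.shape)  # (2, 2, 2, 3) -> 2 * 2 * 2 * 3 = 24 weights
print(bias.shape)    # (3,)         -> 3 biases, 27 parameters in total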
Models and layers have a special method for that purpose:
model.count_params()
Also, to get a short summary of each layer's dimensions and parameter counts, you might find the following method useful:
model.summary()
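For instance, on a small hypothetical network both calls can be used side by side, at the model level or per layer:
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(3, (2, 2), input_shape=(8, 8, 2)))
model.add(Flatten())
model.add(Dense(10))

print(model.count_params())            # total number of parameters in the model
print(model.layers[0].count_params())  # parameters of the CONV layer alone
model.summary()                        # per-layer output shapes and parameter counts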
import numpy as np
import keras.backend as K

def size(model):
    # Compute the number of params in a model (the actual number of floats)
    return sum([np.prod(K.get_value(w).shape) for w in model.trainable_weights])
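Note that, unlike model.count_params(), this helper iterates over model.trainable_weights only, so non-trainable weights (for example the moving statistics of a BatchNormalization layer) are not included in the count.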
Tracing back the print_summary() function, the Keras developers compute the number of trainable and non-trainable parameters of a given model as follows:
import keras.backend as K
import numpy as np
trainable_count = int(np.sum([K.count_params(p) for p in set(model.trainable_weights)]))
non_trainable_count = int(np.sum([K.count_params(p) for p in set(model.non_trainable_weights)]))
Given that K.count_params() is defined as np.prod(int_shape(x)), this solution is quite similar to Anuj Gupta's, except for the use of set() and the way the shapes of the tensors are retrieved.
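A small sketch below mirrors that snippet on a hypothetical model containing a BatchNormalization layer (which owns both trainable and non-trainable weights), and checks that K.count_params() agrees with np.prod over the static shape; it assumes the same standalone Keras API used above:
import numpy as np
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

model = Sequential()
model.add(Dense(4, input_shape=(8,)))  # 8*4 weights + 4 biases = 36 (trainable)
model.add(BatchNormalization())        # gamma + beta = 8 (trainable), moving mean + variance = 8 (non-trainable)

trainable_count = int(np.sum([K.count_params(p) for p in set(model.trainable_weights)]))
non_trainable_count = int(np.sum([K.count_params(p) for p in set(model.non_trainable_weights)]))

print(trainable_count, non_trainable_count)          # 44 8
print(model.count_params())                          # 52 = 44 + 8

w = model.trainable_weights[0]
print(K.count_params(w) == np.prod(K.int_shape(w)))  # True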