I have already read an image as an array:
import numpy as np
from scipy import misc
face1 = misc.imread('face1.jpg')  # note: scipy.misc.imread is deprecated in newer SciPy; imageio.imread is the usual replacement
face1's dimensions are (288, 352, 3).
I need to iterate over every single pixel and populate a y column in a training set. I took the following approach:
Y_training = np.zeros([1, 1], dtype=np.uint8)

for i in range(0, face1.shape[0]):      # go over rows
    for j in range(0, face1.shape[1]):  # go over columns
        if np.array_equiv(face1[i, j], [255, 255, 255]):
            Y_training = np.vstack(([0], Y_training))  # 0 if blank (white) pixel
        else:
            Y_training = np.vstack(([1], Y_training))

b = len(Y_training) - 1
Y_training = Y_training[:b]  # drop the initial placeholder zero
np.shape(Y_training)
Wall time: 2.57 s
As I need to do the above process for about 2000 images, is there any faster approach that could bring the running time down to milliseconds or even nanoseconds?
You can use broadcasting to compare every pixel against the white pixel [255, 255, 255], reduce over the colour channels with .all(axis=-1), and finally convert the result to int dtype. This gives the same labels you would have right after exiting the loop (note that the loop actually builds Y_training in reverse pixel order, since each new label is stacked on top of the previous ones).

Thus, one implementation would be -
(~((face1 == [255,255,255]).all(-1).ravel())).astype(int)
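As a quick sanity check, here is a small sketch on a made-up 2x2 RGB image (the pixel values are chosen purely for illustration) showing that the one-liner labels white pixels as 0 and everything else as 1:

import numpy as np

# hypothetical 2x2 RGB image: two white pixels, two non-white pixels
img = np.array([[[255, 255, 255], [  0,   0,   0]],
                [[255, 255, 255], [ 10, 200,  30]]], dtype=np.uint8)

labels = (~((img == [255, 255, 255]).all(-1).ravel())).astype(int)
print(labels)  # [0 1 0 1] -> 0 for white pixels, 1 otherwise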
Alternatively, a slightly more compact version -
1-(face1 == [255,255,255]).all(-1).ravel()
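Since the real goal is to label roughly 2000 images, here is a minimal sketch of how the vectorized labelling could be applied across a batch (the function name label_pixels and the faces list are assumptions for illustration; the random stand-in images just make the snippet self-contained):

import numpy as np

def label_pixels(img):
    # 0 for white (blank) pixels, 1 for everything else, in row-major pixel order
    return 1 - (img == [255, 255, 255]).all(-1).ravel()

# hypothetical stand-in for ~2000 loaded images of shape (288, 352, 3)
faces = [np.random.randint(0, 256, (288, 352, 3), dtype=np.uint8) for _ in range(3)]

# one label column covering every pixel of every image
Y_training = np.concatenate([label_pixels(f) for f in faces]).reshape(-1, 1)
print(Y_training.shape)  # (3 * 288 * 352, 1)

Because the per-image work is a single vectorized pass instead of ~100k Python-level vstack calls, this should run in milliseconds per image rather than seconds.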