Memory error for np.concatenate

When I run the following code in an IPython notebook:

_x = np.concatenate([_batches.next() for i in range(_batches.samples)])

I get this error message:

---------------------------------------------------------------
MemoryError                   Traceback (most recent call last)
<ipython-input-14-313ecf2ea184> in <module>()
----> 1 _x = np.concatenate([_batches.next() for i in range(_batches.samples)])

MemoryError:

The iterator has 9200 elements.

_batches.next() returns a NumPy array of shape (1, 400, 400, 3).

I have 30 GB of RAM and a 16 GB GPU.

I have a similar issue when I use predict_generator() in Keras. I run the following code:

bottleneck_features_train = bottleneck_model.predict_generator(batches, len(batches), verbose=1) 

With verbose=1 I can see the progress bar run all the way to the end, but then I get the following error:

2300/2300 [==============================] - 177s 77ms/step
---------------------------------------------------------------
MemoryError                   Traceback (most recent call last)
<ipython-input-19-d0e463f64f5a> in <module>()
----> 1 bottleneck_features_train = bottleneck_model.predict_generator(batches, len(batches), verbose=1)

~/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     85                 warnings.warn('Update your `' + object_name +
     86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 87             return func(*args, **kwargs)
     88         wrapper._original_function = func
     89         return wrapper

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in predict_generator(self, generator, steps, max_queue_size, workers, use_multiprocessing, verbose)
   2345                 return all_outs[0][0]
   2346             else:
-> 2347                 return np.concatenate(all_outs[0])
   2348         if steps_done == 1:
   2349             return [out for out in all_outs]

MemoryError: 

Could you please suggest a solution for this memory issue? Thank you!

1 Answer

For the first error, the data is simply too big. Assuming a dtype of int64 or float64 (8 bytes per element), the full array takes 9200 * 400 * 400 * 3 * 8 bytes, i.e. about 35 GB. On top of that, np.concatenate first gathers all the chunks in a list and then copies them into a new output array, so peak usage is roughly twice the data size, well beyond your 30 GB of RAM.
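A quick sanity check of that arithmetic (a minimal sketch, using nothing but NumPy):

import numpy as np

# Size of the concatenated result at 8 bytes per element (default float64)
n_images, h, w, c = 9200, 400, 400, 3
nbytes = n_images * h * w * c * np.dtype(np.float64).itemsize
print(nbytes / 1e9)  # ~35.3 GB, already more than the 30 GB of RAM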

You could preallocate the array instead, so the data is only ever held once, and maybe it'll work:

import numpy as np

_x = np.empty((9200, 400, 400, 3))   # one allocation up front
for i in range(9200):
    _x[i] = _batches.next()          # each (1, 400, 400, 3) batch broadcasts into slot i
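The predict_generator failure is the same problem: the traceback shows it dying in np.concatenate over all the collected batch outputs, so the same preallocation idea applies there too. Below is a rough sketch, not tested against your setup: it assumes batches yields bare image arrays (no labels) like your first generator, and uses float32 to halve the footprint; bottleneck_model.predict and output_shape are standard Keras API.

import numpy as np

# Write each batch's features straight into a preallocated array
# instead of letting Keras concatenate everything at the end.
n = batches.samples                                # 9200 images
feat_shape = bottleneck_model.output_shape[1:]     # per-sample output shape
features = np.empty((n,) + feat_shape, dtype=np.float32)

i = 0
for _ in range(len(batches)):
    batch = next(batches)                          # one batch of images
    out = bottleneck_model.predict(batch)
    features[i:i + len(out)] = out                 # in-place write, no big list
    i += len(out)

If your generator yields (x, y) tuples instead of bare arrays, unpack the images first (x = batch[0]).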