 

Best way to create a NumPy array from a dictionary?

Tags:

python

numpy

I'm just starting with NumPy so I may be missing some core concepts...

What's the best way to create a NumPy array from a dictionary whose values are lists?

Something like this:

d = { 1: [10,20,30] , 2: [50,60], 3: [100,200,300,400,500] }

Should turn into something like:

data = [
  [10,20,30,?,?],
  [50,60,?,?,?],
  [100,200,300,400,500]
]

I'm going to do some basic statistics on each row, eg:

deviations = numpy.std(data, axis=1)

Questions:

  • What's the best / most efficient way to create the numpy.array from the dictionary? The dictionary is large; a couple of million keys, each with ~20 items.

  • The number of values in each 'row' is different. If I understand correctly, NumPy wants a uniform size, so what do I fill in for the missing items to make std() happy? (See the padding sketch just below.)
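
For reference, a minimal sketch of that padded layout, assuming the missing items are filled with NaN and numpy.nanstd() (which ignores NaN entries) is used in place of numpy.std(); this only illustrates the padding idea, not necessarily the most memory-efficient approach for millions of keys:

import numpy

d = {1: [10, 20, 30], 2: [50, 60], 3: [100, 200, 300, 400, 500]}

# pad every row with NaN up to the length of the longest row, then stack into a 2D array
width = max(len(v) for v in d.values())
data = numpy.full((len(d), width), numpy.nan)
for i, row in enumerate(d.values()):
    data[i, :len(row)] = row

# nanstd ignores the NaN padding when computing per-row statistics
deviations = numpy.nanstd(data, axis=1)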

Update: One thing I forgot to mention - while the plain Python techniques are reasonable (e.g. looping over a few million items is fast), they're constrained to a single CPU. NumPy operations scale nicely to the hardware and hit all the CPUs, so they're attractive.

asked Mar 02 '09 by Parand




1 Answer

You don't need to create NumPy arrays up front to call numpy.std(). You can call numpy.std() in a loop over all the values of your dictionary; each list will be converted to a NumPy array on the fly to compute the standard deviation.

The downside of this method is that the main loop runs in Python rather than in C. But it should be fast enough: you still compute each std at C speed, and you save a lot of memory because you don't have to store padding values for the variable-size rows.
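
As a minimal sketch of that per-row loop, assuming the dictionary d from the question:

import numpy

d = {1: [10, 20, 30], 2: [50, 60], 3: [100, 200, 300, 400, 500]}

# each list is converted to an array on the fly; the std computation itself runs at C speed
deviations = {key: numpy.std(values) for key, values in d.items()}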

  • If you want to optimize this further, store your values in a list of NumPy arrays, so that you perform the Python list -> NumPy array conversion only once.
  • If you find that this is still too slow, try using Psyco to optimize the Python loop.
  • If this is still too slow, try using Cython together with the numpy module. This tutorial claims impressive speed improvements for image processing. Or simply implement the whole std function in Cython (see this for benchmarks and examples with the sum function).
  • An alternative to Cython would be to use SWIG with numpy.i.
  • If you want to use only NumPy and have everything computed at C level, try grouping all the records of the same size together into different arrays and calling numpy.std() on each of them. It should look like the following example.

Example with O(n) complexity:

import numpy

# group rows by length so that each group forms a rectangular array
list_size_1 = []
list_size_2 = []
for row in data.values():
    if len(row) == 1:
        list_size_1.append(row)
    elif len(row) == 2:
        list_size_2.append(row)

list_size_1 = numpy.array(list_size_1)
list_size_2 = numpy.array(list_size_2)
std_1 = numpy.std(list_size_1, axis=1)
std_2 = numpy.std(list_size_2, axis=1)
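
The same grouping idea can be generalized to arbitrary row lengths, for instance with dictionaries keyed by length; this is only a sketch, assuming data is the original dict of lists:

import numpy
from collections import defaultdict

# bucket rows (and their keys) by length so each bucket is rectangular
rows_by_length = defaultdict(list)
keys_by_length = defaultdict(list)
for key, row in data.items():
    rows_by_length[len(row)].append(row)
    keys_by_length[len(row)].append(key)

# one vectorized std call per bucket, all at C level
stds = {}
for length, rows in rows_by_length.items():
    bucket_stds = numpy.std(numpy.array(rows), axis=1)
    for key, s in zip(keys_by_length[length], bucket_stds):
        stds[key] = s

This keeps every numpy.std() call on a rectangular array while still returning one result per original key.
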
answered Sep 19 '22 by 10 revs