
Store a lot of data inside Python

Tags:

python

Maybe I'll start with a small introduction to my problem. I'm writing a Python program which will be used for post-processing of different physical simulations. Every simulation can create up to 100 GB of output. I deal with different information (like positions, fields, densities, ...) for different time steps. I would like to have access to all of this data at once, which isn't possible because I don't have enough memory on my system. Normally I read a file, do some operations and clear the memory; then I read other data, do some operations and clear the memory again.

Now my problem: if I do it this way, I spend a lot of time reading the same data more than once, and that takes a lot of time. I would like to read it only once and store it for easy access. Do you know a method to store a lot of data which is really fast, or which doesn't need a lot of space?

I just created a method which is around ten times faster than a normal open-read. But I use cat (the Linux command) for that. It's a really dirty method and I would like to kick it out of my script.

Is it possible to use databases to store this data and to get the data faster than with normal reading? (Sorry for this question, but I'm not a computer scientist and I don't have a lot of knowledge about databases.)

EDIT:

My cat code looks something like this (only an example):

import os
from numpy import array, reshape

# base and time are set earlier in the script
out = os.popen("cat " + base + "phs/phs01_00023_" + time).read().split()
# and if I want to have this data as arrays then I normally convert and
# reshape it (if I need it)
out = array(out, dtype=float)
out = reshape(out, shape)  # shape is a placeholder for the required dimensions

Normally I would use the numpy method numpy.loadtxt, which needs about the same time as normal reading:

f = open('filename')
data = f.read()  # read the whole file as one string
...

I think that loadtxt just uses the normal read methods with some additional lines of code.
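A lower-level numpy parser might skip part of that overhead; just as a rough sketch (nrows and ncols are placeholders for the real dimensions of the file):

import numpy as np

# Sketch: let numpy parse the whitespace-separated text directly, no cat call
out = np.fromfile(base + "phs/phs01_00023_" + time, sep=" ")
out = out.reshape(nrows, ncols)  # only if a 2-D layout is needed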

I know there are better ways to read data, but everything I found was really slow. I will now try mmap and hopefully I will get better performance.

asked Mar 10 '11 18:03 by ahelm


2 Answers

If you're on a 64-bit operating system, you can use the mmap module to map that entire file into memory space. Then, reading random bits of the data can be done a lot more quickly since the OS is then responsible for managing your access patterns. Note that you don't actually need 100 GB of RAM for this to work, since the OS will manage it all in virtual memory.

I've done this with a 30 GB file (the Wikipedia XML article dump) on 64-bit FreeBSD 8 with very good results.
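A minimal sketch of that idea with Python's mmap module (the file name is just a placeholder, not one of the question's files):

import mmap

# Map the file read-only; the OS pages data in only as it is touched.
with open("phs01_00023_0001", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_line = mm.readline()  # behaves like a file object
    chunk = mm[1024:2048]       # and also like a byte string (random access)
    mm.close()

For binary output, numpy.memmap wraps the same mechanism in an array interface.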

answered Nov 11 '22 09:11 by Greg Hewgill


I would try using HDF5. There are two commonly used Python interfaces, h5py and PyTables. While the latter seems to be more widespread, I prefer the former.
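A minimal sketch with h5py (file name, group names and sizes are made up for illustration): write each quantity of a time step once, then read back only the slices you need.

import numpy as np
import h5py

# Write once: one group per time step, one dataset per quantity
with h5py.File("simulation.h5", "w") as f:
    f.create_dataset("step_00023/positions", data=np.random.rand(1000, 3))
    f.create_dataset("step_00023/density", data=np.random.rand(1000))

# Read back later: only the requested slice is loaded from disk
with h5py.File("simulation.h5", "r") as f:
    density = f["step_00023/density"][:]
    first_positions = f["step_00023/positions"][:100]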

answered Nov 11 '22 10:11 by Sven Marnach