
Writing data to LMDB with Python very slow

Tags: python, caffe, lmdb

While creating datasets for training with Caffe, I tried both HDF5 and LMDB. However, creating the LMDB is very slow, even slower than HDF5. I am trying to write ~20,000 images.

Am I doing something terribly wrong? Is there something I am not aware of?

This is my code for LMDB creation:

import lmdb
import caffe

DB_KEY_FORMAT = "{:0>10d}"

db = lmdb.open(path, map_size=int(1e12))
curr_idx = 0
commit_size = 1000
for curr_commit_idx in range(0, num_data, commit_size):
    # One write transaction per batch of 1,000 images.
    with db.begin(write=True) as in_txn:
        for i in range(curr_commit_idx, min(curr_commit_idx + commit_size, num_data)):
            d, l = data[i], labels[i]
            im_dat = caffe.io.array_to_datum(d.astype(float), label=int(l))
            key = DB_KEY_FORMAT.format(curr_idx)
            in_txn.put(key, im_dat.SerializeToString())
            curr_idx += 1
db.close()

As you can see, I am creating a transaction for every 1,000 images, because I thought creating a transaction for each image would add overhead, but it seems this doesn't influence performance much.

asked Jul 27 '15 by Simikolon

2 Answers

In my experience, writes to LMDB from Python (Caffe data on an ext4 hard disk, Ubuntu) took 50-100 ms each. That's why I use tmpfs (the RAM-disk functionality built into Linux), where the same writes complete in around 0.07 ms. You can create smaller databases on the RAM disk, copy them to a hard disk, and later train on all of them. I make mine around 20-40 GB, since I have 64 GB of RAM.
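As a rough illustration, here is a minimal sketch of that workflow, assuming a Linux system where /dev/shm is mounted as tmpfs; the paths are hypothetical:

import shutil

import lmdb

# /dev/shm is tmpfs on most Linux distributions, so writes land in RAM.
RAM_PATH = '/dev/shm/train_images'        # hypothetical
DISK_PATH = '/storage/lmdb/train_images'  # hypothetical

db = lmdb.open(RAM_PATH, map_size=int(1e9))  # keep it small enough to fit in RAM
with db.begin(write=True) as txn:
    txn.put('0000000000', 'example value')
db.close()

# Once the RAM-disk database is filled, move it to persistent storage.
shutil.move(RAM_PATH, DISK_PATH)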

Here are some pieces of code to help you dynamically create, fill, and move LMDBs to storage. Feel free to adapt them to your case; they should save you some time figuring out how LMDB and file manipulation work in Python.

import os
import random
import shutil
import string

import lmdb

# `fold` is assumed to be defined elsewhere: the working directory that
# contains the `ram/` tmpfs mount where the LMDB is built.


def move_db():
    global image_db
    image_db.close()
    # Give the finished database a random name and move it off the RAM disk.
    rnd = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5))
    shutil.move(os.path.join(fold, 'ram/train_images'), '/storage/lmdb/' + rnd)
    open_db()


def open_db():
    global image_db
    # Keep the database on the RAM disk while it is being filled.
    image_db = lmdb.open(os.path.join(fold, 'ram/train_images'),
                         map_async=True,
                         max_dbs=0)

def write_to_lmdb(db, key, value):
    """
    Write (key,value) to db
    """
    success = False
    while not success:
        txn = db.begin(write=True)
        try:
            txn.put(key, value)
            txn.commit()
            success = True
        except lmdb.MapFullError:
            txn.abort()
            # The map is full: double map_size and retry the put.
            curr_limit = db.info()['map_size']
            new_limit = curr_limit * 2
            print('>>> Doubling LMDB map size to %sMB ...' % (new_limit >> 20,))
            db.set_mapsize(new_limit)

...

image_datum = caffe.io.array_to_datum(transformed_image, label)
write_to_lmdb(image_db, str(itr), image_datum.SerializeToString())
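To tie these together, the intended cycle looks roughly like this (a sketch: num_images, transformed_image, and label stand in for your own data pipeline, and the 100,000 chunk size is just an example):

open_db()
for itr in range(num_images):
    image_datum = caffe.io.array_to_datum(transformed_image, label)
    write_to_lmdb(image_db, str(itr), image_datum.SerializeToString())
    # Archive a chunk to storage whenever it grows large enough;
    # move_db() closes the database, moves it, and reopens a fresh one.
    if (itr + 1) % 100000 == 0:
        move_db()
move_db()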
answered Sep 24 '22 by Íhor Mé


Try this:

DB_KEY_FORMAT = "{:0>10d}"

db = lmdb.open(path, map_size=int(1e12))
curr_idx = 0
commit_size = 1000
# A single write transaction for the whole dataset.
with db.begin(write=True) as in_txn:
    for curr_commit_idx in range(0, num_data, commit_size):
        for i in range(curr_commit_idx, min(curr_commit_idx + commit_size, num_data)):
            d, l = data[i], labels[i]
            im_dat = caffe.io.array_to_datum(d.astype(float), label=int(l))
            key = DB_KEY_FORMAT.format(curr_idx)
            in_txn.put(key, im_dat.SerializeToString())
            curr_idx += 1
db.close()

The line

with db.begin(write=True) as in_txn:

is what takes most of the time: each write transaction has to commit (and sync to disk) when it exits, so entering it once for the whole dataset instead of once per 1,000 images avoids that repeated cost.
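If you want to verify this on your machine, here is a rough timing sketch (time_writes is a hypothetical helper; values would be your serialized datums):

import time

import lmdb

def time_writes(path, values, batch_size=None):
    """Write values to an LMDB at path, committing once overall
    (batch_size=None) or once every batch_size items."""
    db = lmdb.open(path, map_size=int(1e9))
    start = time.time()
    step = batch_size or max(len(values), 1)
    for s in range(0, len(values), step):
        with db.begin(write=True) as txn:
            for i in range(s, min(s + step, len(values))):
                # str keys are fine on Python 2; use bytes on Python 3.
                txn.put("{:0>10d}".format(i), values[i])
    db.close()
    return time.time() - start

Comparing time_writes('/tmp/one_txn', vals) against time_writes('/tmp/batched', vals, 1000) on a hard disk should show the per-commit cost dominating the batched version.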

answered Sep 24 '22 by Skyduy