Having an issue with a custom iterator in that it will only iterate over the file once. I am calling seek(0) on the relevant file object in between iterations, but StopIteration is thrown on the first call to next() on the second run through. I feel I am overlooking something obvious, but would appreciate some fresh eyes on this:
class MappedIterator(object):
    """
    Given an iterator of dicts or objects and an attribute mapping dict,
    will make the objects accessible via the desired interface.
    Currently it will only produce dictionaries with string values. Can be
    made to support actual objects later on. Somehow... :D
    """

    def __init__(self, obj=None, mapping={}, *args, **kwargs):
        self._obj = obj
        self._mapping = mapping
        self.cnt = 0

    def __iter__(self):
        return self

    def reset(self):
        self.cnt = 0

    def next(self):
        try:
            try:
                item = self._obj.next()
            except AttributeError:
                item = self._obj[self.cnt]
            # If no mapping is provided, an empty object will be returned.
            mapped_obj = {}
            for mapped_attr in self._mapping:
                attr = mapped_attr.attribute
                new_attr = mapped_attr.mapped_name
                val = item.get(attr, '')
                val = str(val).strip()  # get rid of whitespace
                # TODO: apply transformers...
                # This allows multi-attribute mapping, or grouping of
                # multiple attributes into one.
                try:
                    mapped_obj[new_attr] += val
                except KeyError:
                    mapped_obj[new_attr] = val
            self.cnt += 1
            return mapped_obj
        except (IndexError, StopIteration):
            self.reset()
            raise StopIteration
class CSVMapper(MappedIterator):
    def __init__(self, reader, mapping={}, *args, **kwargs):
        self._reader = reader
        self._mapping = mapping
        self._file = kwargs.pop('file')
        super(CSVMapper, self).__init__(self._reader, self._mapping, *args, **kwargs)

    @classmethod
    def from_csv(cls, file, mapping, *args, **kwargs):
        # TODO: Parse kwargs for various DictReader kwargs.
        return cls(reader=DictReader(file), mapping=mapping, file=file)

    def __len__(self):
        return int(self._reader.line_num)

    def reset(self):
        if self._file:
            self._file.seek(0)
        super(CSVMapper, self).reset()
Sample usage:

file = open('somefile.csv', 'rb')  # say this file has 2 rows + a header row
mapping = MyMappingClass()  # this isn't really relevant
reader = CSVMapper.from_csv(file, mapping)

for r in reader:
    print r['name']
# > 'John'
# > 'Bob'

# This won't print anything
for r in reader:
    print r['name']
I think that you are better off not trying to do the .seek(0), but rather opening the file from the filename each time.

And I don't recommend you just return self in the __iter__() method. That means you only ever have one iterator for your object. I don't know how likely it is for someone to try to use your object from two different threads, but if that happened the results would be surprising.
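To make that concrete, here is a minimal sketch of the pitfall (the class name OneShot is made up, and it uses Python 3 syntax, where next() is spelled __next__()):

```python
# Sketch of why "__iter__ returns self" makes an object single-use.
class OneShot(object):
    def __init__(self, data):
        self._it = iter(data)   # one iterator, created once

    def __iter__(self):
        return self             # every for-loop shares this same instance

    def __next__(self):
        return next(self._it)

o = OneShot([1, 2, 3])
first = list(o)    # consumes the shared iterator
second = list(o)   # nothing left: the iterator is already exhausted
```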
So, save the filename, and then in the __iter__() method, create a fresh object with a freshly initialized reader object and a freshly opened file handle object; return this new object from __iter__(). This will work every time, no matter what the file-like object really is. It could be a handle to a networking function that is pulling data from a server, or who knows what, and it might not support a .seek() method; but you know that if you just open it again you will get a fresh file handle object. And if someone uses the threading module to run 10 instances of your class in parallel, each one will always get all of the lines from the file, instead of each randomly getting about a tenth of the lines.
Also, I don't recommend your exception handler inside the .next() method in MappedIterator. The .__iter__() method should return an object that can be reliably iterated. If a silly user passes in an integer object (for example: 3), this won't be iterable. Inside .__iter__() you can always explicitly call iter() on an argument, and if it is already an iterator (for example, an open file handle object) you will just get the same object back; but if it is a sequence object, you will get an iterator that works on the sequence. Now if the user passes in 3, the call to iter() will raise an exception that makes sense right at the line where the user passed the 3, rather than the exception coming from the first call to .next(). And as a bonus, you don't need the cnt member variable anymore, and your code will be a little bit faster.
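The iter() behavior described above is easy to check directly (Python 3 syntax):

```python
# iter() hands back the same object for an iterator, a fresh iterator
# for a sequence, and raises TypeError immediately for a non-iterable.
seq = [1, 2, 3]
it = iter(seq)
same = iter(it) is it              # an iterator: same object comes back
fresh = iter(seq) is not it        # a sequence: a new iterator each call
try:
    iter(3)
    raised = False
except TypeError:                  # the error points at the bad argument,
    raised = True                  # not at some later next() call
```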
So, if you put together all my suggestions, you might get something like this:
class CSVMapper(object):
    def __init__(self, reader, fname, mapping={}, **kwargs):
        self._reader = reader
        self._fname = fname
        self._mapping = mapping
        self._kwargs = kwargs
        self.line_num = 0

    def __iter__(self):
        cls = type(self)
        obj = cls(self._reader, self._fname, self._mapping, **self._kwargs)
        kwargs = dict(self._kwargs)
        # pop open_with so it is not passed to itself as a keyword argument
        open_with = kwargs.pop("open_with", None)
        if open_with is not None:
            f = open_with(self._fname, **kwargs)
        else:
            f = open(self._fname, "rt")
        # "itr" is my standard abbreviation for an iterator instance
        obj.itr = obj._reader(f)
        return obj

    def next(self):
        item = self.itr.next()
        self.line_num += 1
        # If no mapping is provided, item is returned unchanged.
        if not self._mapping:
            return item  # csv.reader() returns a list of string values
        # we have a mapping, so make a mapped item
        key, value = item
        if key in self._mapping:
            return [self._mapping[key], value]
        else:
            return item
if __name__ == "__main__":
    lst_csv = [
        "foo, 0",
        "one, 1",
        "two, 2",
        "three, 3",
    ]
    import csv
    mapping = {"foo": "bar"}
    m = CSVMapper(csv.reader, lst_csv, mapping, open_with=iter)

    for item in m:  # will print every item
        print item

    for item in m:  # will print every item again
        print item
Now the .__iter__() method gives you a fresh object every time you call it.

Note how the example code uses a list of strings instead of opening a file. In this example, you need to specify an open_with() function to be used instead of the default open() to open the "file". Since our list of strings can be iterated to return one string at a time, we can simply use iter as our open_with function here.
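As a quick check that an iterator over a list of strings is an acceptable "file" for csv.reader (Python 3 shown; the sample lines are made up):

```python
import csv

lines = ["foo, 0", "one, 1"]
f = iter(lines)               # stands in for an open file handle
rows = list(csv.reader(f))
# rows == [['foo', ' 0'], ['one', ' 1']]
# (csv keeps the space after the comma unless skipinitialspace=True)
```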
I didn't understand your mapping code. csv.reader returns a list of string values, not some kind of a dictionary, so I wrote some trivial mapping code that works for CSV files with two columns, the first one a string. Clearly you should chop out my trivial mapping code and put in the desired mapping code.
Also, I took out your .__len__() method. This should return the length of a sequence when you do something like len(obj); you had it returning line_num, which means that the value of len(obj) would change every time you call the .next() method. If users want to know the length, they should store the results in a list and take the length of the list, or something like that.
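You can see the drift directly on a plain csv.reader, whose line_num attribute grows as rows are consumed (Python 3; the data is made up):

```python
import csv

r = csv.reader(iter(["a,1", "b,2", "c,3"]))
before = r.line_num      # 0 before any row has been read
next(r)
after = r.line_num       # 1 after one row -- a len() based on this would drift
```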
EDIT: I added **self._kwargs to the call to open_with() in the .__iter__() method. That way, if your open_with() function needs any extra arguments they will be passed through. Before I made this change, there wasn't really a good reason to save the kwargs argument in the object; it would have been just as good to add an open_with argument to the class .__init__() method, with a default argument of None. I think this change is a good one.
For DictReader:
f = open(filename, "rb")
d = csv.DictReader(f, delimiter=",")
f.seek(0)
d.__init__(f, delimiter=",")
For DictWriter:
f = open(filename, "rb+")
d = csv.DictWriter(f, fieldnames=fields, delimiter=",")
f.seek(0)
f.truncate(0)
d.__init__(f, fieldnames=fields, delimiter=",")
d.writeheader()
f.flush()
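A self-contained version of the DictReader rewind trick above, using io.StringIO as the file (Python 3; the sample data is made up):

```python
import csv
import io

f = io.StringIO("name,age\nJohn,30\nBob,25\n")
d = csv.DictReader(f, delimiter=",")
first = [row["name"] for row in d]

f.seek(0)                       # rewind the underlying file...
d.__init__(f, delimiter=",")    # ...and re-init so the header row is re-read
second = [row["name"] for row in d]
# first == second == ['John', 'Bob']
```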