Iterators have no reset method. To start again at the beginning, just get a new iterator by calling iter() on the original iterable again.
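For example (a quick sketch, with data standing in for whatever iterable you started from):
>>> data = [1, 2, 3]
>>> it = iter(data)
>>> list(it)          # the iterator is now exhausted
[1, 2, 3]
>>> it = iter(data)   # "reset" by asking the iterable for a fresh iterator
>>> next(it)
1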
Iterators are lazy: they allow us to both work with and create lazy iterables that don't do any work until we ask them for their next item. Because of this laziness, iterators can help us deal with infinitely long iterables.
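As an illustration (not from the original answer), a generator expression over itertools.count gives an infinite, lazy iterable: nothing is computed until you ask for it, and you can still take a finite slice of it:
>>> import itertools
>>> squares = (n * n for n in itertools.count(1))   # infinite, nothing computed yet
>>> next(squares)
1
>>> next(squares)
4
>>> list(itertools.islice(squares, 3))              # consume just three more items
[9, 16, 25]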
Iterators are also faster and more memory-efficient. Just think of range(1000) vs. xrange(1000) in Python 2: range pre-builds the whole list, whereas xrange yields each item only when it is asked for. (In Python 3 this has changed: range now behaves like xrange did, producing its values lazily instead of building a list up front.)
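In Python 3 the difference is easy to observe with sys.getsizeof (a rough sketch; exact byte counts vary by platform and version):
import sys

print(sys.getsizeof(range(1000000)))        # a few dozen bytes, regardless of length
print(sys.getsizeof(list(range(1000000))))  # several megabytes for the fully built list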
Note that every iterator is also an iterable, but not every iterable is an iterator. For example, a list is iterable but a list is not an iterator. An iterator can be created from an iterable by using the function iter().
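A quick way to see the distinction, using the abstract base classes in collections.abc (illustration only):
>>> from collections.abc import Iterable, Iterator
>>> nums = [1, 2, 3]
>>> isinstance(nums, Iterable), isinstance(nums, Iterator)
(True, False)
>>> it = iter(nums)                # build an iterator from the iterable
>>> isinstance(it, Iterator)
True
>>> next(it)                       # next() works on the iterator, not on the list
1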
I see many answers suggesting itertools.tee, but that's ignoring one crucial warning in the docs for it:
This itertool may require significant auxiliary storage (depending on how much temporary data needs to be stored). In general, if one iterator uses most or all of the data before another iterator starts, it is faster to use list() instead of tee().
Basically, tee is designed for those situations where two (or more) clones of one iterator, while "getting out of sync" with each other, don't do so by much -- rather, they stay in the same "vicinity" (a few items behind or ahead of each other). It's not suitable for the OP's problem of "redo from the start".
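A typical case where the clones do stay in the same vicinity is the classic pairwise recipe; this is only an illustration, not something from the answer above:
import itertools

def pairwise(iterable):
    # The two clones never drift more than one item apart,
    # so tee only ever has to buffer a single element.
    a, b = itertools.tee(iterable)
    next(b, None)
    return zip(a, b)

print(list(pairwise([1, 2, 3, 4])))   # [(1, 2), (2, 3), (3, 4)]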
L = list(DictReader(...)), on the other hand, is perfectly suitable, as long as the list of dicts can fit comfortably in memory. A new "iterator from the start" (very lightweight and low-overhead) can be made at any time with iter(L), and used in part or in whole without affecting new or existing ones; other access patterns are also easily available.
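A sketch of that approach (the file name data.csv is just a placeholder):
import csv

with open('data.csv', newline='') as f:
    L = list(csv.DictReader(f))      # read the whole file once, into memory

first_pass = iter(L)                 # a cheap "iterator from the start"
print(next(first_pass))

second_pass = iter(L)                # completely independent of the first pass
for row in second_pass:
    pass                             # ...or just index/slice L directly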
As several answers rightly remarked, in the specific case of csv you can also .seek(0) on the underlying file object (a rather special case). I'm not sure that's documented and guaranteed, though it does currently work; it would probably be worth considering only for truly huge csv files, where the list I recommend as the general approach would have too large a memory footprint.
If you have a csv file named 'blah.csv' that looks like
a,b,c,d
1,2,3,4
2,3,4,5
3,4,5,6
you know that you can open the file for reading, and create a DictReader with
import csv
blah = open('blah.csv', 'r')
reader = csv.DictReader(blah)
Then, you will be able to get the next line with reader.next() (or next(reader) in Python 3), which should output
{'a': '1', 'b': '2', 'c': '3', 'd': '4'}
(note that the csv module always gives you the values back as strings). Using it again will produce
{'a': '2', 'b': '3', 'c': '4', 'd': '5'}
However, if at this point you call blah.seek(0), the reader picks up again from the start of the file. Be aware that the very next reader.next() hands back the header line itself ({'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd'}), because the DictReader only reads its fieldnames once; skip that row (or rebuild the DictReader after seeking) and you will get
{'a': '1', 'b': '2', 'c': '3', 'd': '4'}
again.
This seems to be the functionality you're looking for, though I'm sure there are other quirks of this approach that I'm not aware of. @Brian suggested simply creating another DictReader. This won't work if your first reader is halfway through reading the file, as your new reader will have unexpected keys and values from wherever you are in the file.
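Putting that together (a sketch using the blah.csv file from above, written with the next() built-in so it runs on Python 3 as well):
import csv

blah = open('blah.csv', 'r')
reader = csv.DictReader(blah)

print(next(reader))      # {'a': '1', 'b': '2', 'c': '3', 'd': '4'}
print(next(reader))      # {'a': '2', 'b': '3', 'c': '4', 'd': '5'}

blah.seek(0)
next(reader)             # discard the header row that comes back first
print(next(reader))      # {'a': '1', 'b': '2', 'c': '3', 'd': '4'} again

# Alternatively, rebuild the DictReader after seeking; it re-reads the header itself:
blah.seek(0)
reader = csv.DictReader(blah)
print(next(reader))      # {'a': '1', 'b': '2', 'c': '3', 'd': '4'}

blah.close()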
No. Python's iterator protocol is very simple, providing only a single method (.next() in Python 2, __next__() in Python 3), and no way to reset an iterator in general.
The common pattern is to instead create a new iterator using the same procedure again.
If you want to "save off" an iterator so that you can go back to its beginning, you can also fork the iterator by using itertools.tee.
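A minimal sketch of that "save off" pattern; keep the docs' warning quoted above in mind, since tee must buffer every item that one clone has seen and the other has not:
import itertools

it = iter([10, 20, 30])

# Fork *before* consuming anything, keep one clone as the saved starting
# point, and stop using the original iterator afterwards.
saved, working = itertools.tee(it)

print(list(working))   # [10, 20, 30]
print(list(saved))     # [10, 20, 30] -- replays from the fork point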
Yes, if you use numpy.nditer to build your iterator.
>>> import numpy
>>> lst = [1,2,3,4,5]
>>> itr = numpy.nditer([lst])
>>> itr.next()
1
>>> itr.next()
2
>>> itr.finished
False
>>> itr.reset()
>>> itr.next()
1