it gives me this error:
Exception in thread Thread-163:
Traceback (most recent call last):
  File "C:\Python26\lib\threading.py", line 532, in __bootstrap_inner
    self.run()
  File "C:\Python26\lib\threading.py", line 736, in run
    self.function(*self.args, **self.kwargs)
  File "C:\Users\Public\SoundLog\Code\Código Python\SoundLog\SoundLog.py", line 337, in getInfo
    self.data1 = copy.deepcopy(Auxiliar.DataCollection.getInfo(1))
  File "C:\Python26\lib\copy.py", line 162, in deepcopy
    y = copier(x, memo)
  File "C:\Python26\lib\copy.py", line 254, in _deepcopy_dict
    for key, value in x.iteritems():
RuntimeError: dictionary changed size during iteration
while executing my Python program.
How can I prevent this from happening?
Thanks in advance ;)
The normal advice, as per the other answers, would be to avoid using iteritems (use items instead). That, of course, is not an option in your case, since the iteritems call is being made on your behalf deep in the bowels of a standard library call.
Therefore, what I would suggest, assuming Auxiliar.DataCollection.getInfo(1) returns a dictionary (which is the one that's changing during the copy), is that you change your deepcopy call to:
self.data1 = copy.deepcopy(dict(Auxiliar.DataCollection.getInfo(1)))
This takes a "snapshot" of the dict in question, and the snapshot won't change, so you'll be fine.
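If you also control the code that writes to that dict, an even more robust variant is to take the snapshot under a lock shared with the writer, so the dict() call itself can't race against a resize. This is only a sketch; the lock name and the writer function below are assumptions, not part of your code:

import copy
import threading

info_lock = threading.Lock()  # hypothetical lock shared by writer and reader

def writer_thread(shared_info):
    # the thread that mutates the dict takes the lock around every change
    with info_lock:
        shared_info['new_key'] = 42

def reader_thread(shared_info):
    # take a shallow snapshot under the same lock, then deepcopy the snapshot
    with info_lock:
        snapshot = dict(shared_info)
    return copy.deepcopy(snapshot)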
If Auxiliar.DataCollection.getInfo(1) does not return a dict, but some more complicated object which includes dicts as items and/or attributes, it will be a bit more complicated, since those inner dicts are what you'll need to snapshot. However, it's impossible to be any more specific in this case, since you give us absolutely no clue as to the code behind that crucial Auxiliar.DataCollection.getInfo(1) call!-)
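Purely as an illustration (the attribute name samples below is invented, since we can't see what getInfo actually returns), the same trick would then be applied to the inner dict:

info = Auxiliar.DataCollection.getInfo(1)
# shallow-copy the container, then replace the live inner dict with a snapshot
info_snapshot = copy.copy(info)
info_snapshot.samples = dict(info.samples)  # 'samples' is a made-up attribute name
self.data1 = copy.deepcopy(info_snapshot)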
Although this thread is nearly 2 years old, I have experienced a similar problem:
I have a producer/consumer-like system based on the Queue module. My worker class's run method is defined like this:
def run(self):
    while True:
        a, b, c = Worker._Queue.get()
        # do some stuff
        ...
        self.notify()  # notify observers
        Worker._Queue.task_done()
The main class defines an update method, which the worker's notify() calls to collect the data and store it in a dictionary. As multiple threads may change the dictionary in the main class, this 'critical section' is locked:
def update(self, worker):
    Main.indexUpdateLock.acquire()
    # get results of worker
    index = worker.getIndex()
    # copy the worker's index into the main index
    try:
        for i in index:
            if i in self._index:
                self._index[i] += index[i]
            else:
                self._index[i] = index[i]
    finally:
        # index copied - release the lock
        Main.indexUpdateLock.release()
Now this works in most cases, but sometimes 'for i in index:' in Main's update method throws a RuntimeError: dictionary changed size during iteration. indexUpdateLock is defined as threading.Lock() or threading.RLock(); the behavior does not change either way I define it.
for i in dict(index): does solve the issue, but as index may contain several thousand entries, copying it does not really help performance imo - that's why I am trying to copy those values directly.
Although update is defined in Main, it is invoked through the worker's call to notify(), so update should execute in the worker's thread too, and therefore task_done() should only run once notify(), and with it update(), has finished. And through the definition of the critical section, only one thread at a time is allowed to execute this area - or do I have a logical error here? I don't really see where the change to the worker's index comes from, since the only access to index is in Main.update() and in the Worker itself, and until task_done() has executed no other method modifies index inside the Worker.
edit: ok, fixed the issue, which was caused by HTMLParser inside the Worker sending one additional entry even though the source was already closed - strange behavior though. While for i in index: still produces errors, for i in index.keys(): does not, so I'll stick with that.
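For what it's worth, both dict(index) and index.keys() only copy references rather than the values themselves, but building a plain list of keys is the cheaper of the two, and it is enough to make the iteration safe against the dict being resized mid-loop. A sketch of the same loop with a keys-only snapshot (on Python 2, list(index) behaves the same as index.keys() here):

for i in list(index):  # copies only the keys, not the values
    if i in self._index:
        self._index[i] += index[i]
    else:
        self._index[i] = index[i]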