I have a Python script that loops through a bunch of Maya files and does some processing. But sometimes Maya gets a segfault and my script stops there. I tried with signal and multiprocessing, but both failed.
import os, optparse, glob, json, signal
import maya.standalone
import maya.cmds as cmds
from multiprocessing import Process, Queue

def loadMayaBd():
    maya.standalone.initialize(name='python')

def sig_handler(signum, frame):
    print("segfault")

def doSome(args, options):
    signal.signal(signal.SIGSEGV, sig_handler)
    loadMayaBd()
    # from here it's just an example
    fileNameList = args[0]
    for eachFile in fileNameList:
        # this is throwing the seg fault
        # I want to continue my for loop even if there is a segfault
        # I don't want to exit Python because of that segfault
        cmds.file(eachFile, force=1, open=1)

if __name__ == "__main__":
    usage = "usage: %prog [options] args(file list)"
    parser = optparse.OptionParser(usage)
    parser.add_option("-l", "--log", dest="log",
                      help="Log File Path", metavar="LOG_FILE")
    parser.add_option("-v", "--verbose", dest="verbose",
                      help="Print All Logs", metavar="VERBOSE",
                      default=False, action='store_true')
    (options, args) = parser.parse_args()

    if len(args) <= 0:
        errorMsg = "You must pass file path list for crawling"
        raise RuntimeError(errorMsg)

    p = Process(target=doSome, args=(args, options))
    p.start()
    p.join()
Is there any other method that can trap the segfault and continue with the next file?
You can't catch segfaults. Segfaults lead to undefined behavior, period (or rather, segfaults are the result of operations that themselves invoke undefined behavior). Once the process has touched invalid memory, its state can no longer be trusted.
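To see the difference between merely receiving a SIGSEGV and actually faulting on invalid memory, here is a minimal counter-example (a sketch assuming CPython; it will crash or hang the interpreter, so run it in a throwaway session, not inside Maya):

import ctypes
import signal

def sig_handler(signum, frame):
    print("segfault")  # never reached for a real native crash

signal.signal(signal.SIGSEGV, sig_handler)

# Read a C string at address 0: a genuine NULL dereference inside
# native code. The kernel raises SIGSEGV at the faulting instruction,
# and the interpreter cannot safely resume from there, so the process
# dies (or wedges) instead of continuing your loop.
ctypes.string_at(0)
print("unreachable")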
Use a debugger to diagnose segfaults. Load the core dump into your debugger (e.g. gdb <program> core), then use the backtrace command to see where the program was when it crashed. This simple trick lets you focus on that part of the code.
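In Python specifically, the standard-library faulthandler module (built in since Python 3.3; a PyPI backport exists for the Python 2.7 used by older mayapy builds) can at least tell you which Python call was active when the crash happened:

import faulthandler

# Dump the Python traceback of every thread to stderr when a fatal
# signal (SIGSEGV, SIGFPE, SIGABRT, SIGBUS, SIGILL) arrives. This does
# not prevent the crash; it only shows where it happened.
faulthandler.enable()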
This works for me:
import os
import signal

def sig_handler(signum, frame):
    print("segfault")

signal.signal(signal.SIGSEGV, sig_handler)

# Note: os.kill() delivers the signal through the kernel; it does not
# actually perform an invalid memory access.
os.kill(os.getpid(), signal.SIGSEGV)

while True:
    pass
Are you sure you are trapping the segfault in each process that you are spawning?
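That is the right question to ask, because process isolation is what actually works in practice: give every file its own child process and let the parent read the exit code. A negative exitcode from multiprocessing means the child was killed by a signal (-11 is SIGSEGV on Linux), so the parent can log the file and move on. A minimal sketch, run under mayapy as in your script, with a hypothetical processOneFile() worker standing in for your real per-file logic:

from multiprocessing import Process

def processOneFile(eachFile):
    # Each child brings up its own Maya session, so a crash here takes
    # down only this child, never the parent loop.
    import maya.standalone
    maya.standalone.initialize(name='python')
    import maya.cmds as cmds
    cmds.file(eachFile, force=1, open=1)
    # ... do your per-file work here ...

def crawlFiles(fileNameList):
    for eachFile in fileNameList:
        p = Process(target=processOneFile, args=(eachFile,))
        p.start()
        p.join()
        if p.exitcode < 0:
            # Child was killed by a signal; log it and continue.
            print("crashed on %s (exit code %d), skipping" % (eachFile, p.exitcode))
        elif p.exitcode != 0:
            print("failed on %s (exit code %d)" % (eachFile, p.exitcode))

Initializing Maya once per file is slower, but it guarantees a clean session for each file; batching several files per child process is a reasonable middle ground.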