I have been running a particular Python script for some time. The whole script (including in Jupyter) had been running perfectly fine for many months. Now, somehow, Jupyter on my system has started showing the following error message at one particular line of the code (the last line of the code below). Everything runs fine except that last line, where I call a user-defined function to do pair counts. The user-defined function (correlation.polepy) can be found at https://github.com/OMGitsHongyu/N-body-analysis
This is the error message that I am getting:
Kernel Restarting The kernel appears to have died. It will restart automatically.
And, here is the skeleton of my Python Code:
from __future__ import division
import numpy as np
import correlation
from scipy.spatial import cKDTree

File1 = np.loadtxt('/Users/Sidd/Research/fname1.txt')
File2 = np.loadtxt('/Users/Sidd/Research/fname2.txt')

masscut = 1.1*np.power(10,13)

mark1 = (np.where(File1[:,0] > masscut))[0]
mark2 = (np.where(File2[:,0] > masscut))[0]

Data1 = File1[mark1,1:8]
Data2 = File2[mark2,1:8]

Xi_masscut = correlation.polepy(p1=Data1, p2=Data2, rlim=150, nbins=150, nhocells=100, blen=1024, dis_f=100)
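For completeness, this is the sanity check I run on the inputs just before the failing call. Segmentation faults in code that drops down to compiled routines often come from unexpected dtypes, shapes, or non-contiguous arrays; the float64 / C-contiguous expectation here is only my assumption about polepy, not something documented by the library.

from __future__ import print_function
import numpy as np

# Data1 / Data2 are the arrays built in the skeleton above.
for name, arr in [('Data1', Data1), ('Data2', Data2)]:
    print(name, arr.shape, arr.dtype, arr.flags['C_CONTIGUOUS'])
    assert arr.ndim == 2 and arr.shape[1] == 7  # columns 1:8 of the input files

# Force a known dtype and memory layout before the call (harmless if already correct)
Data1 = np.ascontiguousarray(Data1, dtype=np.float64)
Data2 = np.ascontiguousarray(Data2, dtype=np.float64)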
A similar problem happens (at the last line of the code) when I use IPython. When I run the script with plain Python in the terminal, I get "Segmentation fault: 11" at the same line. I am using Python 2.7.13 :: Anaconda 2.5.0 (x86_64).
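To localize the crash, I tried enabling faulthandler, which dumps a Python traceback when the process receives a fatal signal such as SIGSEGV. On Python 2.7 it is not part of the standard library, so this assumes the PyPI backport is installed (pip install faulthandler):

import faulthandler  # backport package on Python 2.7 (pip install faulthandler)
faulthandler.enable()  # print the Python traceback if the process segfaults

# ... Data1 / Data2 prepared exactly as in the skeleton above ...
import correlation
Xi_masscut = correlation.polepy(p1=Data1, p2=Data2, rlim=150, nbins=150,
                                nhocells=100, blen=1024, dis_f=100)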
I have tried the following methods already in search for a solution:
1.> I checked some of the previous links on stackoverflow where this problem has been asked: The kernel appears to have died. It will restart automatically
I tried the solution given in the link above; sadly it doesn't work in my case. This is the solution mentioned there:
conda update mkl
2.> To check whether the system was running out of memory, I closed all memory-heavy applications. My system has 16 GB of physical memory, and the problem happens even with over 9 GB free (again, this problem did not happen before, even when other tasks were using 14 GB and I had less than 2 GB free). It is very surprising that I could run the calculation with these exact inputs before and cannot replicate it now. See the quick memory check after this list.
3.> I saw another link: https://alpine.atlassian.net/wiki/plugins/servlet/mobile?contentId=134545485#content/view/134545485
This one tackles a similar problem and talks about there not being enough memory for the Docker container. I was not sure how to implement the suggestions mentioned there.
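Here is the quick memory check I mentioned in point 2, run from inside the same kernel right before the failing call. It assumes psutil is available (it usually ships with Anaconda) and that Data1/Data2 are the arrays from the skeleton above:

import psutil

vm = psutil.virtual_memory()
print('total     %.1f GB' % (vm.total / 1e9))
print('available %.1f GB' % (vm.available / 1e9))

# Rough size of the arrays actually handed to polepy
print('Data1 %.2f MB' % (Data1.nbytes / 1e6))
print('Data2 %.2f MB' % (Data2.nbytes / 1e6))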
All in all, I am not sure how this problem arose in the first place. How do I solve this problem? Any help will be much appreciated.
This issue happens when I import sklearn's PCA before numpy (I am not sure whether reversing the order would solve the problem).
But I later solved the issue by reinstalling numpy and mkl:

conda install numpy
conda install -c intel mkl
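After reinstalling, it is worth confirming which BLAS/LAPACK libraries numpy is actually linked against. np.__config__.show() only prints the build configuration, so this is just a way to verify that the MKL libraries are the ones being picked up:

import numpy as np

print(np.__version__)
np.__config__.show()  # lists the blas/lapack (e.g. MKL) libraries numpy was built against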