Here is a little script I wrote for making fractals using Newton's method.
import numpy as np
import matplotlib.pyplot as plt

f = np.poly1d([1, 0, 0, -1])  # x^3 - 1
fp = np.polyder(f)

def newton(i, guess):
    # Recurse until the guess is close enough to a zero,
    # returning the number of iterations taken.
    if abs(f(guess)) > 0.00001:
        return newton(i + 1, guess - f(guess) / fp(guess))
    else:
        return i

pic = []
for y in np.linspace(-10, 10, 1000):
    pic.append([newton(0, x + y * 1j) for x in np.linspace(-10, 10, 1000)])

plt.imshow(pic)
plt.show()
I am using numpy arrays, but I nonetheless loop through each element of the 1000-by-1000 linspaces to apply the newton() function, which acts on a single guess rather than on a whole array.
My question is this: how can I alter my approach to better exploit the advantages of numpy arrays?
P.S.: If you want to try the code without waiting too long, it's better to use a 100-by-100 grid.
Extra background:
See Newton's Method for finding zeroes of a polynomial.
The basic idea for the fractal is to test guesses in the complex plane and count the number of iterations needed to converge to a zero. That's what the recursion in newton() is about; it ultimately returns the number of steps. Each guess in the complex plane represents a pixel in the picture, colored by the number of steps to convergence. From a simple algorithm, you get these beautiful fractals.
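For reference, the Newton update is x_{n+1} = x_n - f(x_n)/f'(x_n). Here is a minimal, non-recursive sketch of the iteration-counting idea for the same polynomial; the helper name count_iterations and the max_iter cap are my own additions, not part of the original script:

```python
import numpy as np

f = np.poly1d([1, 0, 0, -1])   # x^3 - 1
fp = np.polyder(f)             # 3x^2

def count_iterations(z, tol=1e-5, max_iter=100):
    """Apply the Newton update z <- z - f(z)/f'(z) until |f(z)| <= tol,
    returning the number of steps taken (capped at max_iter)."""
    for i in range(max_iter):
        if abs(f(z)) <= tol:
            return i
        z = z - f(z) / fp(z)
    return max_iter

# A real guess near 1 converges to the real root z = 1 in a handful of steps.
print(count_iterations(2.0))
```

A loop with an explicit cap also sidesteps the recursion-depth limit that the recursive version can hit for slow-converging guesses.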
I worked from Lauritz V. Thaulow's code and was able to get a pretty significant speed-up with the following code:
import numpy as np
import matplotlib.pyplot as plt
from itertools import count

def newton_fractal(xmin, xmax, ymin, ymax, xres, yres):
    # Build the grid of complex starting guesses in one shot.
    yarr, xarr = np.meshgrid(np.linspace(xmin, xmax, xres),
                             np.linspace(ymin, ymax, yres) * 1j)
    arr = yarr + xarr
    ydim, xdim = arr.shape
    arr = arr.flatten()

    f = np.poly1d([1, 0, 0, -1])  # x^3 - 1
    fp = np.polyder(f)

    counts = np.zeros(shape=arr.shape)
    unconverged = np.ones(shape=arr.shape, dtype=bool)
    indices = np.arange(len(arr))

    for i in count():
        # Evaluate f only at the points that haven't converged yet.
        f_g = f(arr[unconverged])
        new_unconverged = np.abs(f_g) > 0.00001
        # Record the iteration count for points that just converged.
        counts[indices[unconverged][~new_unconverged]] = i
        if not np.any(new_unconverged):
            return counts.reshape((ydim, xdim))
        unconverged[unconverged] = new_unconverged
        # Newton update, applied only to the still-unconverged points.
        arr[unconverged] -= f_g[new_unconverged] / fp(arr[unconverged])

N = 1000
pic = newton_fractal(-10, 10, -10, 10, N, N)
plt.imshow(pic)
plt.show()
For N=1000, I get a time of 11.1 seconds using Lauritz's code and a time of 1.7 seconds using this code.
There are two main speed-ups here. First, I used meshgrid to speed up the creation of the numpy array of input values. This is actually a pretty significant part of the speed-up when N=1000.
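As a small illustration (not taken from the original script), here is the meshgrid-based grid construction next to an equivalent broadcasting version, on an assumed tiny 3-by-3 grid chosen just for the demo:

```python
import numpy as np

# Real parts along x, imaginary parts along y (tiny 3x3 demo grid).
xs = np.linspace(-1, 1, 3)
ys = np.linspace(-1, 1, 3)

# meshgrid expands both 1-D axes into full 2-D coordinate arrays.
re, im = np.meshgrid(xs, ys)
grid_meshgrid = re + 1j * im

# The same grid via broadcasting, without materializing re and im.
grid_broadcast = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]

print(np.array_equal(grid_meshgrid, grid_broadcast))  # True
```

Either way, the whole 2-D array of complex guesses is produced by vectorized operations instead of a Python-level double loop.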
The second speed-up comes from doing calculations only on the unconverged portions. Lauritz was using masked arrays for this before realizing that they were slowing things down. I haven't looked at them in quite some time, but I do remember masked arrays being a source of slowness in the past. I believe this is because they are largely implemented in pure Python on top of a numpy array, rather than almost entirely in C like numpy arrays themselves.
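The boolean-indexing pattern used instead of masked arrays can be sketched on toy numbers (the values and the 0.5 threshold below are assumptions purely for illustration):

```python
import numpy as np

values = np.array([0.1, 2.0, 0.3, 4.0])
unconverged = np.abs(values) > 0.5   # boolean mask: [False, True, False, True]

# Fancy indexing with the mask copies out just the selected elements ...
work = values[unconverged]           # array([2.0, 4.0])

# ... and assigning through the same mask writes them back in place,
# so the update never touches the already-converged entries.
values[unconverged] = work / 2.0     # values is now [0.1, 1.0, 0.3, 2.0]
```

This keeps every operation inside numpy's C loops over a shrinking subset of points, rather than paying the per-element Python overhead that np.ma masked arrays can incur.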