How do I speed up fractal generation with numpy arrays?

Tags: python, numpy

Here is a little script I wrote for making fractals using Newton's method.

import numpy as np
import matplotlib.pyplot as plt

f = np.poly1d([1, 0, 0, -1])  # x^3 - 1
fp = np.polyder(f)

def newton(i, guess):
    # Recurse until the guess is close enough to a root, returning
    # the number of Newton steps it took to get there.
    if abs(f(guess)) > .00001:
        return newton(i + 1, guess - f(guess) / fp(guess))
    else:
        return i

# One pixel per point of the complex plane, colored by iteration count.
pic = []
for y in np.linspace(-10, 10, 1000):
    pic.append([newton(0, x + y * 1j) for x in np.linspace(-10, 10, 1000)])

plt.imshow(pic)
plt.show()

I am using numpy arrays, but I nonetheless loop over every element of a 1000-by-1000 grid to apply the newton() function, which acts on a single guess rather than on a whole array.

My question is this: How can I alter my approach to better exploit the advantages of numpy arrays?

P.S.: If you want to try the code without waiting too long, use a 100-by-100 grid instead.

Extra background:
See Newton's Method for finding the zeroes of a polynomial.
The basic idea for the fractal is to test guesses in the complex plane and count the number of iterations they take to converge to a zero. That's what the recursion in newton() is for: it ultimately returns the number of steps. Each guess in the complex plane is a pixel in the picture, colored by the number of steps to convergence. From a simple algorithm, you get these beautiful fractals.
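To see the counting in action, here is a minimal sketch (using the same f and fp as the script above) that traces a single guess:

import numpy as np

f = np.poly1d([1, 0, 0, -1])  # x^3 - 1
fp = np.polyder(f)

guess = 1 + 1j  # one pixel's starting point
steps = 0
while abs(f(guess)) > .00001:
    guess -= f(guess) / fp(guess)  # Newton step: z -> z - f(z)/f'(z)
    steps += 1
print(steps, guess)  # converges to one of the three cube roots of 1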

asked Jun 30 '13 by mrKelley

1 Answer

I worked from Lauritz V. Thaulow's code and was able to get a pretty significant speed-up with the following code:

import numpy as np
import matplotlib.pyplot as plt
from itertools import count

def newton_fractal(xmin, xmax, ymin, ymax, xres, yres):
    # Build the whole grid of complex starting guesses x + y*1j at once.
    xarr, yarr = np.meshgrid(np.linspace(xmin, xmax, xres),
                             np.linspace(ymin, ymax, yres) * 1j)
    arr = xarr + yarr
    ydim, xdim = arr.shape
    arr = arr.flatten()
    f = np.poly1d([1, 0, 0, -1])  # x^3 - 1
    fp = np.polyder(f)
    counts = np.zeros(shape=arr.shape)
    unconverged = np.ones(shape=arr.shape, dtype=bool)
    indices = np.arange(len(arr))
    for i in count():
        # Evaluate f only at the guesses that haven't converged yet.
        f_g = f(arr[unconverged])
        new_unconverged = np.abs(f_g) > 0.00001
        # Record the iteration count for guesses that just converged.
        counts[indices[unconverged][~new_unconverged]] = i
        if not np.any(new_unconverged):
            return counts.reshape((ydim, xdim))
        unconverged[unconverged] = new_unconverged
        # Take one Newton step on everything still unconverged.
        arr[unconverged] -= f_g[new_unconverged] / fp(arr[unconverged])

N = 1000
pic = newton_fractal(-10, 10, -10, 10, N, N)

plt.imshow(pic)
plt.show()

For N=1000, I get a time of 11.1 seconds using Lauritz's code and a time of 1.7 seconds using this code.
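The exact numbers will vary with hardware and numpy version. A quick way to time the vectorized version yourself (using the newton_fractal defined above; one run is enough since a single render takes seconds):

import timeit

t = timeit.timeit(lambda: newton_fractal(-10, 10, -10, 10, 1000, 1000), number=1)
print('%.1f seconds' % t)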

There are two main speed-ups here. First, I used meshgrid to speed up the creation of the numpy array of input values. This is actually a pretty significant part of the speed-up when N=1000.
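For a concrete sense of the difference, compare building the grid element by element, as in the question, with building it via meshgrid (a sketch of the two approaches; both produce the same 1000x1000 grid of x + y*1j values):

import numpy as np

# Python-level double loop, one complex number at a time.
slow = [[x + y * 1j for x in np.linspace(-10, 10, 1000)]
        for y in np.linspace(-10, 10, 1000)]

# meshgrid: the whole grid in a couple of vectorized operations.
xs, ys = np.meshgrid(np.linspace(-10, 10, 1000), np.linspace(-10, 10, 1000))
fast = xs + ys * 1j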

The second speed-up comes from doing calculations only on the unconverged portions. Lauritz was using masked arrays for this before realizing that they were slowing things down. I haven't looked at them in quite some time, but I remember masked arrays being a source of slowness in the past. I believe it's because they are largely implemented in pure Python on top of a numpy array, rather than written almost completely in C like regular numpy arrays.
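As a rough illustration of the two styles (a sketch; the array and threshold here are made up, and relative timings will vary):

import numpy as np

arr = np.random.uniform(0.1, 1.0, 1_000_000)
active = arr > 0.5  # stand-in for "unconverged"

# Boolean indexing on a plain array: copy out the active subset and
# work on that smaller, contiguous array only.
subset = arr[active]
subset = subset - 1.0 / subset

# Masked array: every operation goes through np.ma's Python-level
# machinery and carries the mask along, touching the full array.
marr = np.ma.array(arr, mask=~active)
marr = marr - 1.0 / marr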

answered Oct 07 '22 by Justin Peel