
Can I speed up this aerodynamics calculation with Numba, vectorization, or multiprocessing?

Problem:

I am trying to increase the speed of an aerodynamics function in Python.

Function Set:

import numpy as np
from numba import njit

def calculate_velocity_induced_by_line_vortices(
    points, origins, terminations, strengths, collapse=True
):

    # Expand the dimensionality of the points input. It is now of shape (N x 1 x 3).
    # This will allow NumPy to broadcast the upcoming subtractions.
    points = np.expand_dims(points, axis=1)
    
    # Define the vectors from the vortex to the points. r_1 and r_2 now both are of
    # shape (N x M x 3). Each row/column pair holds the vector associated with each
    # point/vortex pair.
    r_1 = points - origins
    r_2 = points - terminations
    
    r_0 = r_1 - r_2
    r_1_cross_r_2 = nb_2d_explicit_cross(r_1, r_2)
    r_1_cross_r_2_absolute_magnitude = (
        r_1_cross_r_2[:, :, 0] ** 2
        + r_1_cross_r_2[:, :, 1] ** 2
        + r_1_cross_r_2[:, :, 2] ** 2
    )
    r_1_length = nb_2d_explicit_norm(r_1)
    r_2_length = nb_2d_explicit_norm(r_2)
    
    # Define the radius of the line vortices. This is used to get rid of any
    # singularities.
    radius = 3.0e-16
    
    # Set the lengths and the absolute magnitudes to zero, at the places where the
    # lengths and absolute magnitudes are less than the vortex radius.
    r_1_length[r_1_length < radius] = 0
    r_2_length[r_2_length < radius] = 0
    r_1_cross_r_2_absolute_magnitude[r_1_cross_r_2_absolute_magnitude < radius] = 0
    
    # Calculate the vector dot products.
    r_0_dot_r_1 = np.einsum("ijk,ijk->ij", r_0, r_1)
    r_0_dot_r_2 = np.einsum("ijk,ijk->ij", r_0, r_2)
    
    # Calculate k and then the induced velocity, ignoring any divide-by-zero or nan
    # errors. k is of shape (N x M)
    with np.errstate(divide="ignore", invalid="ignore"):
        k = (
            strengths
            / (4 * np.pi * r_1_cross_r_2_absolute_magnitude)
            * (r_0_dot_r_1 / r_1_length - r_0_dot_r_2 / r_2_length)
        )
    
        # Set the shape of k to be (N x M x 1) to support numpy broadcasting in the
        # subsequent multiplication.
        k = np.expand_dims(k, axis=2)
    
        induced_velocities = k * r_1_cross_r_2
    
    # Set the values of the induced velocity to zero where there are singularities.
    induced_velocities[np.isinf(induced_velocities)] = 0
    induced_velocities[np.isnan(induced_velocities)] = 0

    if collapse:
        induced_velocities = np.sum(induced_velocities, axis=1)

    return induced_velocities


@njit    
def nb_2d_explicit_norm(vectors):
    return np.sqrt(
        (vectors[:, :, 0]) ** 2 + (vectors[:, :, 1]) ** 2 + (vectors[:, :, 2]) ** 2
    )


@njit
def nb_2d_explicit_cross(a, b):
    e = np.zeros_like(a)
    e[:, :, 0] = a[:, :, 1] * b[:, :, 2] - a[:, :, 2] * b[:, :, 1]
    e[:, :, 1] = a[:, :, 2] * b[:, :, 0] - a[:, :, 0] * b[:, :, 2]
    e[:, :, 2] = a[:, :, 0] * b[:, :, 1] - a[:, :, 1] * b[:, :, 0]
    return e

Context:

This function is used by Ptera Software, an open-source solver for flapping wing aerodynamics. As shown by the profile output below, it is by far the largest contributor to Ptera Software's run time.

[Profile output: calculate_velocity_induced_by_line_vortices is the largest contributor to the total run time]

Currently, Ptera Software takes just over 3 minutes to run a typical case, and my goal is to get this below 1 minute.

The function takes in a group of points, origins, terminations, and strengths. At every point, it finds the induced velocity due to the line vortices, which are characterized by the groups of origins, terminations, and strengths. If collapse is True, then the output is the cumulative velocity induced at each point by all of the vortices. If it is False, the function outputs each vortex's contribution to the velocity at each point.
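As a minimal usage sketch (with made-up sizes, and assuming the function above is in scope), the two collapse modes relate as follows:

import numpy as np

# Hypothetical sizes, for illustration only.
n_points, n_vortices = 4, 6

these_points = np.random.random((n_points, 3))
these_origins = np.random.random((n_vortices, 3))
these_terminations = np.random.random((n_vortices, 3))
these_strengths = np.random.random(n_vortices)

# Collapsed output: one velocity vector per point, shape (N, 3).
v_total = calculate_velocity_induced_by_line_vortices(
    these_points, these_origins, these_terminations, these_strengths, collapse=True
)

# Uncollapsed output: one contribution per point/vortex pair, shape (N, M, 3).
v_each = calculate_velocity_induced_by_line_vortices(
    these_points, these_origins, these_terminations, these_strengths, collapse=False
)

assert v_total.shape == (n_points, 3)
assert v_each.shape == (n_points, n_vortices, 3)
assert np.allclose(v_total, v_each.sum(axis=1))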

During a typical run, the velocity function is called approximately 2000 times. At first, the calls involve relatively small inputs (around 200 points, origins, terminations, and strengths). Later calls involve larger inputs (around 400 points and around 6,000 origins, terminations, and strengths). An ideal solution would be fast for all input sizes, but increasing the speed of the large-input calls is more important.

For testing, I recommend running the following script with your own implementation of the function:

import timeit

import matplotlib.pyplot as plt
import numpy as np

n_repeat = 2
n_execute = 10 ** 3
min_oom = 0
max_oom = 3

times_py = []

for i in range(max_oom - min_oom + 1):
    n_elem = 10 ** i
    n_elem_pretty = np.format_float_scientific(n_elem, 0)
    print("Number of elements: " + n_elem_pretty)

    # Benchmark Python.
    print("\tBenchmarking Python...")
    setup = '''
import numpy as np

these_points = np.random.random((''' + str(n_elem) + ''', 3))
these_origins = np.random.random((''' + str(n_elem) + ''', 3))
these_terminations = np.random.random((''' + str(n_elem) + ''', 3))
these_strengths = np.random.random(''' + str(n_elem) + ''')

def calculate_velocity_induced_by_line_vortices(points, origins, terminations,
                                                strengths, collapse=True):
    pass
    '''
    statement = '''
results_orig = calculate_velocity_induced_by_line_vortices(these_points, these_origins,
                                                           these_terminations,
                                                           these_strengths)
    '''
    
    times = timeit.repeat(repeat=n_repeat, stmt=statement, setup=setup, number=n_execute)
    time_py = min(times)/n_execute
    time_py_pretty = np.format_float_scientific(time_py, 2)
    print("\t\tAverage Time per Loop: " + time_py_pretty + " s")

    # Record the times.
    times_py.append(time_py)

sizes = [10 ** i for i in range(max_oom - min_oom + 1)]

fig, ax = plt.subplots()

ax.plot(sizes, times_py, label='Python')
ax.set_xscale("log")
ax.set_xlabel("Size of List or Array (elements)")
ax.set_ylabel("Average Time per Loop (s)")
ax.set_title(
    "Comparison of Different Optimization Methods\nBest of "
    + str(n_repeat)
    + " Runs, each with "
    + str(n_execute)
    + " Loops"
)
ax.legend()
plt.show()

Previous Attempts:

My prior attempts at speeding up this function involved vectorizing it (which worked great, so I kept those changes) and trying out Numba's JIT compiler. I had mixed results with Numba. When I tried to use Numba on a modified version of the entire velocity function, my results were much slower than before. However, I found that Numba significantly sped up the cross-product and norm functions, which I implemented above.

Updates:

Update 1:

Based on Mercury's comment (which has since been deleted), I replaced

points = np.expand_dims(points, axis=1)
r_1 = points - origins
r_2 = points - terminations

with two calls to the following function:

@njit
def subtract(a, b):
    c = np.empty((a.shape[0], b.shape[0], 3))
    for i in range(a.shape[0]):
        for j in range(b.shape[0]):
            for k in range(3):
                c[i, j, k] = a[i, k] - b[j, k]
    return c

This reduced the run time from 227 s to 220 s. This is better! However, it is still not fast enough.

I have also tried setting njit's fastmath flag to True and using a Numba function instead of the calls to np.einsum. Neither increased the speed.
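For reference, a Numba replacement for the np.einsum dot products could look roughly like the sketch below (illustrative only, not necessarily the exact code I tried). It produces the same (N x M) result as np.einsum("ijk,ijk->ij", a, b):

import numpy as np
from numba import njit

@njit
def nb_2d_explicit_dot(a, b):
    # Row-wise dot product over the last axis of two (N x M x 3) arrays.
    res = np.empty((a.shape[0], a.shape[1]))
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            res[i, j] = (
                a[i, j, 0] * b[i, j, 0]
                + a[i, j, 1] * b[i, j, 1]
                + a[i, j, 2] * b[i, j, 2]
            )
    return res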

Update 2:

With Jérôme Richard's answer, the run time is now 156 s, which is a decrease of 29%! I'm satisfied enough to accept this answer, but feel free to make other suggestions if you think you can improve on their work!

asked Mar 22 '21 by wingedNorthropi



1 Answer

First of all, Numba can perform parallel computations, resulting in faster code if you explicitly request it, mainly using parallel=True and prange. This is useful for big arrays (but not for small ones).
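For illustration, the basic pattern looks like this (a minimal sketch, not part of the solution below):

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def row_sums(arr):
    # Each iteration of the prange loop may be executed by a different thread.
    res = np.empty(arr.shape[0])
    for i in prange(arr.shape[0]):
        res[i] = arr[i, :].sum()
    return res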

Moreover, your computation is mainly memory bound. Thus, you should avoid creating big arrays when they are not reused multiple times, or more generally when they cannot be recomputed on the fly (in a relatively cheap way). This is the case for r_0 for example.

In addition, the memory access pattern matters: vectorization is more efficient when accesses are contiguous in memory, because the cache and RAM are used more efficiently. Consequently, arr[0, :, :] = 0 should be faster than arr[:, :, 0] = 0. Similarly, arr[:, :, 0] = arr[:, :, 1] = 0 should be much slower than arr[:, :, 0:2] = 0, since the former performs two non-contiguous passes over memory while the latter performs only one, more contiguous, pass. Sometimes, it can be beneficial to transpose your data so that subsequent calculations are much faster.
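A quick way to see this effect is a rough timing sketch like the one below (exact numbers depend on the machine):

import timeit
import numpy as np

arr = np.zeros((400, 6000, 3))

def two_strided_passes(a):
    # Two passes, each touching every third element in memory.
    a[:, :, 0] = 0
    a[:, :, 1] = 0

def one_pass(a):
    # One pass over a single (mostly contiguous) slice.
    a[:, :, 0:2] = 0

print(timeit.timeit(lambda: two_strided_passes(arr), number=100))
print(timeit.timeit(lambda: one_pass(arr), number=100))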

Moreover, NumPy tends to create many temporary arrays, which are costly to allocate. This is a huge problem when the input arrays are small. Numba's JIT can avoid that in most cases.
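As a generic sketch of the difference (not taken from the code below): a chained NumPy expression allocates an intermediate array for each operation, while a single njit loop writes straight into the output:

import numpy as np
from numba import njit

def numpy_version(a, b, d):
    # Allocates a temporary for `a * b`, then another array for the final sum.
    return a * b + d

@njit
def numba_version(a, b, d):
    # One fused loop: a single output allocation and no temporaries.
    res = np.empty(a.shape)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            res[i, j] = a[i, j] * b[i, j] + d[i, j]
    return res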

Finally, regarding your computation, it may be a good idea to use a GPU for big arrays (definitely not for small ones). You can take a look at cupy or clpy to do that quite easily.
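For instance, a cupy port of the broadcasted subtraction could look roughly like this (a sketch only; it assumes a CUDA-capable GPU and relies on cupy mirroring the NumPy API):

import cupy as cp

def subtract_gpu(points, origins):
    # Same broadcasted subtraction as the NumPy version, executed on the GPU.
    points_gpu = cp.asarray(points)
    origins_gpu = cp.asarray(origins)
    r = cp.expand_dims(points_gpu, axis=1) - origins_gpu
    return cp.asnumpy(r)  # copy the (N x M x 3) result back to the host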

Here is an optimized implementation working on the CPU:

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def subtract(a, b):
    c = np.empty((a.shape[0], b.shape[0], 3))
    for i in prange(c.shape[0]):
        for j in range(c.shape[1]):
            for k in range(3):
                c[i, j, k] = a[i, k] - b[j, k]
    return c

@njit(parallel=True)
def nb_2d_explicit_norm(vectors):
    res = np.empty((vectors.shape[0], vectors.shape[1]))
    for i in prange(res.shape[0]):
        for j in range(res.shape[1]):
            res[i, j] = np.sqrt(vectors[i, j, 0] ** 2 + vectors[i, j, 1] ** 2 + vectors[i, j, 2] ** 2)
    return res

# NOTE: better memory access pattern
@njit(parallel=True)
def nb_2d_explicit_cross(a, b):
    e = np.empty(a.shape)
    for i in prange(e.shape[0]):
        for j in range(e.shape[1]):
            e[i, j, 0] = a[i, j, 1] * b[i, j, 2] - a[i, j, 2] * b[i, j, 1]
            e[i, j, 1] = a[i, j, 2] * b[i, j, 0] - a[i, j, 0] * b[i, j, 2]
            e[i, j, 2] = a[i, j, 0] * b[i, j, 1] - a[i, j, 1] * b[i, j, 0]
    return e

# NOTE: avoid the slow building of temporary arrays
@njit(parallel=True)
def cross_absolute_magnitude(cross):
    return cross[:, :, 0] ** 2 + cross[:, :, 1] ** 2 + cross[:, :, 2] ** 2

# NOTE: avoid the slow building of temporary arrays again and multiple pass in memory
# Warning: do the work in-place
@njit(parallel=True)
def discard_singularities(arr):
    for i in prange(arr.shape[0]):
        for j in range(arr.shape[1]):
            for k in range(3):
                if np.isinf(arr[i, j, k]) or np.isnan(arr[i, j, k]):
                    arr[i, j, k] = 0.0

@njit(parallel=True)
def compute_k(strengths, r_1_cross_r_2_absolute_magnitude, r_0_dot_r_1, r_1_length, r_0_dot_r_2, r_2_length):
    return (strengths
        / (4 * np.pi * r_1_cross_r_2_absolute_magnitude)
        * (r_0_dot_r_1 / r_1_length - r_0_dot_r_2 / r_2_length)
    )

@njit(parallel=True)
def rDotProducts(b, c):
    assert b.shape == c.shape and b.shape[2] == 3
    n, m = b.shape[0], b.shape[1]
    ab = np.empty((n, m))
    ac = np.empty((n, m))
    for i in prange(n):
        for j in range(m):
            ab[i, j] = 0.0
            ac[i, j] = 0.0
            for k in range(3):
                a = b[i, j, k] - c[i, j, k]
                ab[i, j] += a * b[i, j, k]
                ac[i, j] += a * c[i, j, k]
    return (ab, ac)

# Compute `np.sum(arr, axis=1)` in parallel.
@njit(parallel=True)
def collapseArr(arr):
    assert arr.shape[2] == 3
    n, m = arr.shape[0], arr.shape[1]
    res = np.empty((n, 3))
    for i in prange(n):
        res[i, 0] = np.sum(arr[i, :, 0])
        res[i, 1] = np.sum(arr[i, :, 1])
        res[i, 2] = np.sum(arr[i, :, 2])
    return res

def calculate_velocity_induced_by_line_vortices(points, origins, terminations, strengths, collapse=True):
    r_1 = subtract(points, origins)
    r_2 = subtract(points, terminations)
    # NOTE: r_0 is computed on the fly by rDotProducts

    r_1_cross_r_2 = nb_2d_explicit_cross(r_1, r_2)

    r_1_cross_r_2_absolute_magnitude = cross_absolute_magnitude(r_1_cross_r_2)

    r_1_length = nb_2d_explicit_norm(r_1)
    r_2_length = nb_2d_explicit_norm(r_2)

    radius = 3.0e-16
    r_1_length[r_1_length < radius] = 0
    r_2_length[r_2_length < radius] = 0
    r_1_cross_r_2_absolute_magnitude[r_1_cross_r_2_absolute_magnitude < radius] = 0

    r_0_dot_r_1, r_0_dot_r_2 = rDotProducts(r_1, r_2)

    with np.errstate(divide="ignore", invalid="ignore"):
        k = compute_k(strengths, r_1_cross_r_2_absolute_magnitude, r_0_dot_r_1, r_1_length, r_0_dot_r_2, r_2_length)
        k = np.expand_dims(k, axis=2)
        induced_velocities = k * r_1_cross_r_2

    discard_singularities(induced_velocities)

    if collapse:
        induced_velocities = collapseArr(induced_velocities)

    return induced_velocities

On my machine, this code is 2.5 times faster than the initial implementation on arrays of size 10**3. It also uses a bit less memory.
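If you want to check the optimized version against the original, a comparison along these lines can be used (the names original and optimized are placeholders for the two implementations):

import numpy as np

n_points, n_vortices = 400, 6000
points = np.random.random((n_points, 3))
origins = np.random.random((n_vortices, 3))
terminations = np.random.random((n_vortices, 3))
strengths = np.random.random(n_vortices)

# `original` and `optimized` refer to the question's implementation and the
# one above, imported or renamed for the test.
v_old = original(points, origins, terminations, strengths)
v_new = optimized(points, origins, terminations, strengths)

assert np.allclose(v_old, v_new)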

answered Sep 28 '22 by Jérôme Richard