Reuse Cache Plan #22

@seanlaw

Description

@mreineck Thank you for this package. @NimaSarajpoor and I have been using scipy.fft to compute thousands of repeated rfft/irfft calculations on (power of 2) 1-dimensional inputs in the range of 2^10 to 2^20. Essentially, I am performing a convolution (or sliding window dot product). In this range of inputs, PocketFFT appears to be performing ~2x slower than FFTW, so I was hoping to regain some of that time by caching/reusing the plan. However, it appears that plan caching has been disabled for scipy. So, I wanted to ask if you might be able to offer a possible scipy workaround that doesn't require recompiling PocketFFT (as this is for an existing package that cannot have additional package dependencies added)?

Fictitious Example:

import numpy as np
from scipy.fft import rfft, irfft, next_fast_len

def scipy_convolution(Q, T):
    n = len(T)
    m = len(Q)
    shape = next_fast_len(n + m - 1, real=True)

    Qraf = rfft(np.ascontiguousarray(Q[::-1]), n=shape)
    Taf = rfft(T, n=shape)
    QT = irfft(np.multiply(Qraf, Taf), n=shape)

    return QT[m-1:n]  # irfft already returns a real array

if __name__ == "__main__":
    Q = np.random.rand(2**11)  # Q will always be a power of 2 and always real inputs
    T = np.random.rand(2**19)  # T will always be a power of 2 and always real inputs
    for i in range(1_000):
        scipy_convolution(Q, T)

Perhaps there is an alternative approach beyond plan caching that could help improve the performance? I've also tried np.fft but, unsurprisingly, the performance is comparable to scipy.fft. Thanks in advance!
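For what it's worth, since Q and T are reused across all 1,000 iterations in the example above, one workaround that needs no plan cache is to hoist the query-side transform out of the loop: rfft(Q[::-1]) is padded to the same length as rfft(T), so precomputing it removes one of the three equal-size transforms per call (roughly a third of the FFT work). A minimal sketch of that idea, with the hypothetical helper names precompute_query_fft and scipy_convolution_cached introduced here purely for illustration:

```python
import numpy as np
from scipy.fft import rfft, irfft, next_fast_len

def precompute_query_fft(Q, n):
    # Q never changes across calls, so its (reversed, padded) FFT
    # can be computed once up front and reused.
    m = len(Q)
    shape = next_fast_len(n + m - 1, real=True)
    Qraf = rfft(np.ascontiguousarray(Q[::-1]), n=shape)
    return Qraf, shape

def scipy_convolution_cached(Qraf, T, m, shape):
    # Only T's forward transform and one inverse transform remain per call.
    n = len(T)
    Taf = rfft(T, n=shape)
    QT = irfft(np.multiply(Qraf, Taf), n=shape)
    return QT[m - 1 : n]  # valid part: sliding window dot products

if __name__ == "__main__":
    Q = np.random.rand(2**11)
    T = np.random.rand(2**19)
    Qraf, shape = precompute_query_fft(Q, len(T))
    for i in range(1_000):
        scipy_convolution_cached(Qraf, T, len(Q), shape)
```

If T also repeats, Taf could be cached the same way. The scipy.fft transforms additionally accept a workers= argument for multithreading, which may recover some of the remaining gap without any plan caching, though the speedup will depend on the input sizes and hardware.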
