Currently, in Stumpy, the sliding dot product of a query `Q` and a time series `T` is computed via one of the two following functions:

- `core.sliding_dot_product`, which takes advantage of the FFT trick using `scipy.signal.convolve`
- `core._sliding_dot_product`, which uses `njit` on top of `np.dot`

The sliding dot product in MATLAB (via the FFT trick) seems to be faster, though:
```matlab
% MATLAB code
% x is the data, y is the query
m = length(y);
n = length(x);
y = y(end:-1:1); % Reverse the query
y(m+1:n) = 0;    % Append zeros
% The main trick of getting dot products in O(n log n) time
X = fft(x);
Y = fft(y);
Z = X.*Y;
z = ifft(Z);
```

and then use the slice `z(m:n)`.
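For reference, the MATLAB snippet above translates fairly directly into NumPy. This is only an illustrative sketch (the function name is made up, and it is not Stumpy's actual implementation):

```python
import numpy as np


def sliding_dot_product_fft(y, x):
    # Illustrative translation of the MATLAB FFT trick; not Stumpy's code.
    m = len(y)
    n = len(x)
    y_padded = np.zeros(n)
    y_padded[:m] = y[::-1]  # reverse the query, then zero-pad to length n
    # Pointwise product in the frequency domain == circular convolution
    z = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y_padded))
    # Valid sliding dot products sit at indices m-1 .. n-1
    # (the 0-based equivalent of MATLAB's z(m:n))
    return np.real(z[m - 1 : n])
```

The returned array has length `n - m + 1`, with entry `i` equal to the dot product of `y` against the window `x[i : i + m]`.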
[Update]
Here are some important links to external resources and to the comments mentioned here:
Can we get closer to the performance of MATLAB?
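One way to explore that question is a side-by-side micro-benchmark of the two strategies. This is a hedged sketch with hypothetical function names; `scipy.signal.convolve` stands in for the approach used by `core.sliding_dot_product`, and a plain `np.dot` loop stands in for the compiled path:

```python
import timeit

import numpy as np
from scipy import signal


def dot_convolve(Q, T):
    # Sliding dot product via convolution: convolving T with the reversed
    # query and keeping the "valid" part yields dot(Q, T[i:i+m]) for each i.
    return signal.convolve(T, Q[::-1], mode="valid")


def dot_naive(Q, T):
    # Direct O(n*m) sliding dot product (stand-in for the njit/np.dot path,
    # without the JIT compilation).
    m = len(Q)
    return np.array([np.dot(Q, T[i : i + m]) for i in range(len(T) - m + 1)])


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    T = rng.standard_normal(2**16)
    Q = rng.standard_normal(256)
    for f in (dot_convolve, dot_naive):
        t = timeit.timeit(lambda: f(Q, T), number=10)
        print(f"{f.__name__}: {t / 10:.6f} s per call")
```

Note that `scipy.signal.convolve` with the default `method="auto"` already switches to an FFT-based convolution for long inputs, so any remaining gap versus MATLAB is more likely down to FFT backends and overhead than to the algorithm itself.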