ifft#
Perform a 1D inverse FFT. Batching is supported for any tensor with a rank higher than 1.
-
template<typename OpA>
__MATX_INLINE__ auto matx::ifft(const OpA &a, uint64_t fft_size = 0, FFTNorm norm = FFTNorm::BACKWARD)#
Run a 1D IFFT with a cached plan
Creates a new FFT plan in the cache if none exists, and uses that to execute the 1D IFFT. Note that FFTs and IFFTs share the same plans if all dimensions match.
Note: fft_size must be unsigned so that the axis overload does not match both prototypes with index_t.
- Template Parameters:
OpA – Input tensor or operator type
- Parameters:
a – input tensor or operator
fft_size – Size of the FFT. Setting this to 0 uses the output size to determine the FFT size.
norm – Normalization to apply to IFFT
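A minimal usage sketch of this overload (the tensor names, shapes, and executor below are illustrative assumptions, not part of the signature):
// Assumed setup: a CUDA executor and complex single-precision tensors
matx::cudaExecutor exec{};
auto in  = matx::make_tensor<cuda::std::complex<float>>({8, 128});
auto out = matx::make_tensor<cuda::std::complex<float>>({8, 128});
// fft_size = 0 (default): the IFFT size is taken from the output, with BACKWARD normalization
(out = matx::ifft(in)).run(exec);
// Explicit 128-point IFFT with orthonormal scaling instead of the default
(out = matx::ifft(in, 128, matx::FFTNorm::ORTHO)).run(exec);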
-
template<typename OpA>
__MATX_INLINE__ auto matx::ifft(const OpA &a, const int32_t (&axis)[1], uint64_t fft_size = 0, FFTNorm norm = FFTNorm::BACKWARD)#
Run a 1D IFFT with a cached plan
Creates a new FFT plan in the cache if none exists, and uses that to execute the 1D IFFT. Note that FFTs and IFFTs share the same plans if all dimensions match.
Note: fft_size must be unsigned so that the axis overload does not match both prototypes with index_t.
- Template Parameters:
OpA – Input tensor or operator type
- Parameters:
a – input tensor or operator
axis – axis to perform FFT along
fft_size – Size of the FFT. Setting this to 0 uses the output size to determine the FFT size.
norm – Normalization to apply to IFFT
Examples#
// Perform a batched 1D IFFT on a 3D tensor across axis 2. Since axis 2 is the last dimension,
// this is equivalent to not specifying the axis
(out1 = ifft(in)).run(this->exec);
(out2 = ifft(in, {2})).run(this->exec);
// Perform a batched 1D IFFT on a 3D tensor across axis 1. This is equivalent to permuting the last
// two axes of both the input and the output
(out1.Permute({0,2,1}) = ifft(in.Permute({0,2,1}))).run(this->exec);
(out2 = ifft(in, {1})).run(this->exec);
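The axis overload also accepts an explicit size and normalization. The snippet below is a sketch that reuses the tensors from the examples above and assumes a 128-point transform matches the output shape along that axis:
// Batched 1D IFFT along axis 1 with an explicit 128-point size and orthonormal scaling
(out2 = ifft(in, {1}, 128, FFTNorm::ORTHO)).run(this->exec);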