/sci/ - Science & Math

I have recently explored the possibility of doing an FFT entirely with addition, subtraction, and transposition.
It turns out this is indeed possible; however, you're approximating the phase, so you need to project onto a lattice and then, at the end, perform a phase adjustment from a much larger phase range down to a smaller one (from thousands or even billions of radians down to just (-pi, pi]).
The lattice can be entirely hypothetical; you don't need extra memory for it.

However, you need to work out the math for each transpose sequence, along with the final positioning in the imaginary sparse lattice, so you can derive the individual phase adjustments needed. This work only needs to be done once per FFT size; after that, the phase adjustments and transposes can be used as a lookup table. We're talking about doing an FFT with only 15% of the work!

Phase Lattice Mapping: Phase angles are mapped onto a lattice using the function φ(k) = 2πk · (L/N), where L is a chosen lattice size larger than N (the FFT size). This spreads the phase values over a range of roughly 2πL, with a spacing of 2πL/N between adjacent indices.
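A minimal sketch of this mapping (L and N here are toy values of my choosing, not values fixed by the post): φ(k) = 2πk·(L/N) places the N phases on a lattice with spacing 2πL/N, and a final wrap brings them back into (-pi, pi].

```python
import numpy as np

N = 8    # FFT size
L = 13   # lattice size, chosen larger than N

def phi(k):
    """Lattice phase angle for index k: 2*pi*k*(L/N)."""
    return 2 * np.pi * k * (L / N)

k = np.arange(N)
lattice_phase = phi(k)  # grows well beyond 2*pi as k increases

# Final phase adjustment: wrap the large lattice phase back into (-pi, pi].
wrapped = np.angle(np.exp(1j * lattice_phase))
```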

Pre-Processing: Input signals are multiplied by exp(i * φ(n)), embedding phase information into the lattice positions.

FFT Operation: Instead of computing complex rotations, FFT operations involve simple integer shifts on the lattice. Each butterfly operation becomes an integer arithmetic operation.

Post-Processing: Results are reconstructed by sampling the lattice to retrieve magnitude and phase information, then converting back to standard phase ranges using modulo operations.
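The four steps above can be sketched end to end with toy sizes (N and L are my choices, and NumPy's FFT stands in for the integer-shift lattice core, which the post leaves as a per-size precomputation). One property worth noting: multiplying the input by exp(i·φ(n)) with φ(n) = 2πn·(L/N) turns the lattice phase into pure position, circularly shifting the spectrum by L mod N bins, so the "rotation" becomes a reindexing plus a final wrap.

```python
import numpy as np

N, L = 8, 13
n = np.arange(N)
phi = 2 * np.pi * n * (L / N)

rng = np.random.default_rng(0)
x = rng.standard_normal(N)

pre = x * np.exp(1j * phi)   # pre-processing: embed phase into the lattice
X = np.fft.fft(pre)          # FFT on the embedded signal
plain = np.fft.fft(x)

# The lattice phase shows up as a pure index shift of the spectrum:
assert np.allclose(X, np.roll(plain, L % N))

# Post-processing: recover magnitude and a phase wrapped to (-pi, pi].
magnitude = np.abs(X)
phase = np.angle(X)
```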


What do you think?

For a small FFT this doesn't need much memory, but for a 4-million-point FFT you need about 700 MB of RAM.
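One hedged back-of-the-envelope reading of the 700 MB figure (my assumption; the post doesn't break it down): if an 8-byte phase correction is tabulated per point per radix-2 stage, a 4,194,304-point FFT has 22 stages and the table comes to about 704 MiB.

```python
# Assumed breakdown of the ~700 MB figure, not stated in the post:
# one 8-byte phase-correction entry per point, per radix-2 stage.
n_points = 4 * 1024 * 1024            # 4,194,304-point FFT
stages = n_points.bit_length() - 1    # 22 radix-2 stages
bytes_per_entry = 8                   # assumed float64 phase correction
total_bytes = stages * n_points * bytes_per_entry
print(total_bytes / 2**20)            # 704.0 (MiB), i.e. ~700 MB
```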

Also, since our operations are cumulative, some of them cancel out, meaning the algorithm can be optimized further.

>>https://colab.research.google.com/drive/1ET3R_JkckEJ-LxJpd05PjbfGK-TyiPNF#scrollTo=B_ulN6Vv4e0X
I did some Python to explore this concept.
>>
simplifying for fellow nerds:
each stage of the FFT requires twiddle × previous stage, then an add or subtract.
In our context we instead do a transpose and then an add or subtract. The distortion comes from the phase alignment: we need to project into a hypothetical lattice where all phase alignments are whole numbers. At the end, to get our sparse subset back out with the correct values, we perform a phase adjustment back.

In practice we need only N indexes and then the transforms, but we must learn the phase correction needed at the end, and it is different for every FFT size.

We do less work by pre-computing part of it and eliminating hidden redundancy.
>>
File: GMb_e6hXgAAFc-9.png (161 KB, 1199x717)
>>16282176
Sounds like schizobabble
>>
>>16282176
interesting


