I am writing some software to create a complex waveform (actually a soundwave) as an array. Starting with some primitive waveforms (sine waves etc), there will be functions which combine them to create more complex waves, and yet more functions which combine those waves, etc.
It might look like:
f(mult(sine(), env(square(), ramp())))
but a lot more complex.
One way of doing this would be to make each function a generator, so that the whole function tree executes once per element with each generator yielding a single value each time.
The array could have several million elements, and the function tree could easily be 10 deep. Would generators be ridiculously inefficient for doing this?
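To make the generator approach concrete, here is a minimal sketch of what such a tree might look like. The primitive and combinator names (`sine`, `ramp`, `mult`) are hypothetical stand-ins, not from any library; each node is a generator, and pulling one sample from the top advances the whole tree by one element.

```python
import itertools
import math

def sine(freq=440.0, rate=44100):
    """Infinite sine wave, one sample per yield."""
    n = 0
    while True:
        yield math.sin(2 * math.pi * freq * n / rate)
        n += 1

def ramp(length=44100):
    """Linear ramp from 0.0 towards 1.0 over `length` samples."""
    for n in range(length):
        yield n / length

def mult(*waves):
    """Multiply corresponding samples of several waves."""
    for samples in zip(*waves):
        product = 1.0
        for x in samples:
            product *= x
        yield product

# Pull the first few samples of sine() * ramp(); nothing beyond
# these five elements is ever computed or stored.
out = list(itertools.islice(mult(sine(), ramp()), 5))
```

Note that a finite generator (`ramp`) ends the combined wave naturally, since `zip` stops at the shortest input.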
The alternative would be for each function to create and return an entire array. This would presumably be more efficient, but has disadvantages (messier implementation, no results available until the end of the calculation, could use a lot of memory).
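For comparison, the eager alternative might look like this sketch, where each (again hypothetical) function builds and returns a complete list. Every intermediate result is fully materialized, which is where the memory cost comes from.

```python
import math

def sine_array(n, freq=440.0, rate=44100):
    """Return n samples of a sine wave as a list."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def ramp_array(n):
    """Return a linear ramp from 0.0 towards 1.0 as a list."""
    return [i / n for i in range(n)]

def mult_arrays(a, b):
    """Elementwise product of two sample lists."""
    return [x * y for x, y in zip(a, b)]

# Each call below allocates a full array before the next step runs,
# so a 10-deep tree holds several full-length intermediates at once.
wave = mult_arrays(sine_array(1000), ramp_array(1000))
```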
They always say you shouldn't try to second-guess Python efficiency, but will generators take a long time in this case?
Generators are lazy sequences. They are a perfect fit when a sequence may be very long, as long as you can operate piecewise (either elementwise, or on reasonably sized chunks).
This will tend to reduce your peak memory use. Just don't ruin that by then storing all elements of the sequence somewhere.
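As an illustration of consuming the sequence piecewise, here is a sketch (with a hypothetical `sine` generator) that scans a million samples for their peak amplitude while only ever holding one sample at a time:

```python
import itertools
import math

def sine(freq=440.0, rate=44100):
    """Infinite sine wave, one sample per yield."""
    n = 0
    while True:
        yield math.sin(2 * math.pi * freq * n / rate)
        n += 1

# Running peak over a million samples: constant memory, because no
# list of samples is ever built.
peak = 0.0
for x in itertools.islice(sine(), 1_000_000):
    peak = max(peak, abs(x))
```

Calling `list()` on the generator instead would reinstate the full memory cost, which is exactly the trap to avoid.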