In my application I subclass the JavaStreamingAudioPlayer class from the FreeTTS package and override its write method, which normally sends an array of bytes to a SourceDataLine for playback. Instead of writing to the data line, I collect this and subsequent byte arrays into a buffer, pull that buffer into my own class, and try to process it into sound. My application processes sound as arrays of floats, so I convert the bytes to floats and process them, but I always get static back.
I am sure this is close to the right approach, but I am missing something along the way. I know that sound is processed as frames, and each frame is a group of bytes, so somewhere I have to assemble the bytes into frames. Am I looking at this the right way? Thanks in advance for any help.
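For reference, here is a minimal sketch of the kind of conversion I am attempting, assuming 16-bit signed little-endian mono PCM (the format and the method name are my own guesses, not anything FreeTTS specifies):

public static float[] bytesToFloats(byte[] ayAudioData) {
    // Assumes each 2-byte frame is one 16-bit signed little-endian sample.
    float[] aySamples = new float[ayAudioData.length / 2];
    for (int i = 0; i < aySamples.length; i++) {
        int nLow = ayAudioData[2 * i] & 0xFF;   // low byte, treated as unsigned
        int nHigh = ayAudioData[2 * i + 1];     // high byte keeps the sign
        short nSample = (short) ((nHigh << 8) | nLow);
        aySamples[i] = nSample / 32768f;        // scale to [-1.0, 1.0]
    }
    return aySamples;
}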
First, convert your byte array to an InputStream. Then create an AudioInputStream from that InputStream using AudioSystem. Once you have the audio stream, you essentially have audio and can write it to a file or do whatever you like. One caveat: AudioSystem.getAudioInputStream expects the stream to begin with a recognized file header (WAV, AIFF, and so on); if your buffer holds raw PCM with no header, construct the AudioInputStream directly with an explicit AudioFormat instead.
import java.io.ByteArrayInputStream;
import javax.sound.sampled.*;

// getAudioInputStream declares IOException and UnsupportedAudioFileException,
// so call this from code that handles or rethrows both.
ByteArrayInputStream oInstream = new ByteArrayInputStream(ayAudioData);
AudioInputStream oAIS = AudioSystem.getAudioInputStream(oInstream);
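From there the stream's AudioFormat tells you the frame size, sample size, and byte order, so you can read whole frames instead of guessing; a rough sketch continuing from the snippet above (the buffer size is arbitrary):

AudioFormat oFormat = oAIS.getFormat();
int nFrameSize = oFormat.getFrameSize();
byte[] ayFrameBuffer = new byte[nFrameSize * 1024]; // always a whole number of frames
int nBytesRead;
while ((nBytesRead = oAIS.read(ayFrameBuffer, 0, ayFrameBuffer.length)) != -1) {
    // nBytesRead is a multiple of the frame size, so each chunk
    // splits cleanly into complete frames for the float conversion.
}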