
iOS split stereo mp3 to mono aac

Tags: split, ios, mp3, m4a

I'm converting an mp3 to m4a on iOS with this code: iOS swift convert mp3 to aac

but I need to extract the left and right channels into separate m4a files.

I have this code working, which splits my audio into NSData:

// Accumulate the left- and right-channel bytes separately
let leftdata: NSMutableData! = NSMutableData()
let rightdata: NSMutableData! = NSMutableData()

let buff: CMBlockBufferRef = CMSampleBufferGetDataBuffer(sampleBuffer!)!

var lengthAtOffset: size_t = 0
var totalLength: Int = 0
var data: UnsafeMutablePointer<Int8> = nil

if CMBlockBufferGetDataPointer(buff, 0, &lengthAtOffset, &totalLength, &data) != noErr {
    print("some sort of error happened")
} else {

    // 16-bit interleaved stereo: bytes 0-1 are a left sample,
    // bytes 2-3 a right sample, bytes 4-5 left again, and so on
    for i in 0.stride(to: totalLength, by: 2) {

        if i % 4 == 0 {
            leftdata.appendBytes(data + i, length: 2)
        } else {
            rightdata.appendBytes(data + i, length: 2)
        }

    }
}

data = nil
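As a side note, the even/odd offset test above relies on the interleaved L/R layout of 16-bit stereo PCM. The same deinterleave can be sketched in plain modern Swift, using a hypothetical `Int16` array in place of the raw byte pointer:

```swift
// Deinterleave 16-bit stereo PCM: samples alternate L, R, L, R, ...
// `interleaved` is a hypothetical Int16 array standing in for the raw bytes.
func deinterleave(_ interleaved: [Int16]) -> (left: [Int16], right: [Int16]) {
    var left: [Int16] = []
    var right: [Int16] = []
    for (index, sample) in interleaved.enumerated() {
        if index % 2 == 0 {
            left.append(sample)   // even sample index -> left channel (byte offset % 4 == 0)
        } else {
            right.append(sample)  // odd sample index -> right channel
        }
    }
    return (left, right)
}
```

Working on whole `Int16` samples instead of byte offsets avoids the `% 4` arithmetic entirely.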

However, I now need to convert it to CMSampleBuffers so I can append to the asset writer. How do I convert the NSData to sample buffers?

Update 24th Nov: I've now got the following code that's trying to convert the NSData to a CMSampleBuffer. I can't work out where it's failing:

var dataPointer: UnsafeMutablePointer<Void> = UnsafeMutablePointer(leftdata.bytes)

var cmblockbufferref:CMBlockBufferRef?

var status = CMBlockBufferCreateWithMemoryBlock(nil, dataPointer, leftdata.length, kCFAllocatorNull, nil, 0, leftdata.length, 0, &cmblockbufferref)

var audioFormat:AudioStreamBasicDescription = AudioStreamBasicDescription()
audioFormat.mSampleRate = 44100
audioFormat.mFormatID = kAudioFormatLinearPCM
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian
audioFormat.mBytesPerPacket = 2
audioFormat.mFramesPerPacket = 1
audioFormat.mBytesPerFrame = 2
audioFormat.mChannelsPerFrame = 1
audioFormat.mBitsPerChannel = 16
audioFormat.mReserved = 0

var format:CMFormatDescriptionRef?

status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format);

var timing:CMSampleTimingInfo = CMSampleTimingInfo(duration: CMTimeMake(1, 44100), presentationTimeStamp: kCMTimeZero, decodeTimeStamp: kCMTimeInvalid)

var leftSampleBuffer:CMSampleBufferRef?

status = CMSampleBufferCreate(kCFAllocatorDefault, cmblockbufferref, true, nil, nil, format, leftdata.length, 1, &timing, 0, nil, &leftSampleBuffer)

self.assetWriterAudioInput.appendSampleBuffer(leftSampleBuffer!)
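For what it's worth, the `CMTimeMake(1, 44100)` above describes a per-sample duration of 1/44100 s, so a buffer of N mono 16-bit samples (N = byte count / 2) should span N/44100 seconds. The arithmetic, sketched in plain Swift with a hypothetical byte count:

```swift
// Duration arithmetic for 16-bit mono PCM at 44.1 kHz.
let sampleRate = 44_100
let bytesPerSample = 2        // 16-bit mono: one Int16 per sample
let byteCount = 88_200        // hypothetical NSData length

let sampleCount = byteCount / bytesPerSample                     // number of samples in the buffer
let durationSeconds = Double(sampleCount) / Double(sampleRate)   // total duration covered
```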
asked Nov 13 '15 by Castles

1 Answer

We finally got this to work! Here is the final Swift code we are using to convert NSData to a CMSampleBuffer:

func NSDataToSample(data: NSData) -> CMSampleBufferRef? {

    var cmBlockBufferRef: CMBlockBufferRef?

    // Create an empty block buffer sized to hold the PCM bytes
    var status = CMBlockBufferCreateWithMemoryBlock(nil, nil, data.length, nil, nil, 0, data.length, 0, &cmBlockBufferRef)

    if status != 0 {
        return nil
    }

    // Copy the PCM bytes into the block buffer
    status = CMBlockBufferReplaceDataBytes(data.bytes, cmBlockBufferRef!, 0, data.length)

    if status != 0 {
        return nil
    }

    // Describe the data: packed 16-bit signed integer, mono, 44.1 kHz linear PCM
    var audioFormat: AudioStreamBasicDescription = AudioStreamBasicDescription()

    audioFormat.mSampleRate = 44100
    audioFormat.mFormatID = kAudioFormatLinearPCM
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked // 0xc
    audioFormat.mBytesPerPacket = 2
    audioFormat.mFramesPerPacket = 1
    audioFormat.mBytesPerFrame = 2
    audioFormat.mChannelsPerFrame = 1
    audioFormat.mBitsPerChannel = 16
    audioFormat.mReserved = 0

    var format: CMFormatDescriptionRef?

    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format)

    if status != 0 {
        return nil
    }

    var sampleBuffer: CMSampleBufferRef?

    // numSamples is data.length / 2 because each 16-bit mono sample occupies 2 bytes
    status = CMSampleBufferCreate(kCFAllocatorDefault, cmBlockBufferRef!, true, nil, nil, format, data.length / 2, 0, nil, 0, nil, &sampleBuffer)

    if status != 0 {
        return nil
    }

    return sampleBuffer
}
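For reference, the format description above has to be internally consistent: for packed linear PCM, `mBytesPerFrame` must equal `mChannelsPerFrame * mBitsPerChannel / 8`, and `mBytesPerPacket` must equal `mFramesPerPacket * mBytesPerFrame`. A quick sanity check of that arithmetic for the 16-bit mono case in plain Swift:

```swift
// Sanity-check the AudioStreamBasicDescription arithmetic for 16-bit mono PCM.
let bitsPerChannel = 16
let channelsPerFrame = 1
let framesPerPacket = 1   // uncompressed PCM always uses 1 frame per packet

let bytesPerFrame = channelsPerFrame * bitsPerChannel / 8   // bytes in one frame
let bytesPerPacket = framesPerPacket * bytesPerFrame        // bytes in one packet
```

The same relationship is why `numSamples` in the `CMSampleBufferCreate` call is `data.length / 2`: the byte count divided by the bytes-per-frame of the format.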
answered Nov 03 '22 by Castles