iOS voice recorder visualization on swift

I want to show a visualization of the recording like the one in the original Voice Memos app:


I know I can get the levels with updateMeters, peakPowerForChannel:, and averagePowerForChannel:,

but how do I draw the graphic? Should I build it myself, or is there a free or paid library I can use?
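For reference, here is roughly how I'm reading the levels now, on a timer (assuming recorder is an AVAudioRecorder that is already recording):

import AVFoundation

// recorder is assumed to be an AVAudioRecorder that is already recording
recorder.isMeteringEnabled = true

Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
    recorder.updateMeters()
    let average = recorder.averagePower(forChannel: 0) // dB, -160 ... 0
    let peak = recorder.peakPower(forChannel: 0)
    // ...but how do I turn these values into the scrolling bar graphic?
}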

asked Mar 31 '15 by SpyZip

1 Answer

I was having the same problem. I wanted to create a Voice Memos clone, and I recently found a solution and wrote an article about it on Medium.

I created a subclass of UIView and drew the bars with Core Graphics in draw(_:).

import UIKit

class AudioVisualizerView: UIView {

    // Width of a single bar
    var barWidth: CGFloat = 4.0
    // Indicates whether the waveform should draw its active or inactive state
    var active = false {
        didSet {
            if self.active {
                self.color = UIColor.red.cgColor
            } else {
                self.color = UIColor.gray.cgColor
            }
        }
    }
    // Color for the bars
    var color = UIColor.gray.cgColor
    // Levels to draw, one per bar, in the range 0...49
    var waveforms: [Int] = Array(repeating: 0, count: 100)

    // MARK: - Init
    override init(frame: CGRect) {
        super.init(frame: frame)
        self.backgroundColor = UIColor.clear
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        self.backgroundColor = UIColor.clear
    }

    // MARK: - Draw bars
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else {
            return
        }
        // Clear to a transparent background
        context.clear(rect)
        context.setFillColor(red: 0, green: 0, blue: 0, alpha: 0)
        context.fill(rect)
        context.setLineWidth(1)
        context.setStrokeColor(self.color)
        let w = rect.size.width                  // view width
        let h = rect.size.height                 // view height
        let t = Int(w / self.barWidth)           // how many bars fit
        let s = max(0, self.waveforms.count - t) // first level to draw
        let m = h / 2                            // vertical midline
        let r = self.barWidth / 2                // radius of the rounded cap
        let x = m - r                            // maximum bar height
        var bar: CGFloat = 0
        for i in s ..< self.waveforms.count {
            // Scale the level to the view height and clamp it
            var v = h * CGFloat(self.waveforms[i]) / 50.0
            if v > x {
                v = x
            } else if v < 3 {
                v = 3
            }
            // Each bar is a vertical line away from the midline, a
            // half-circle cap, and a vertical line back to the midline;
            // odd bars point up, even bars point down
            let oneX = bar * self.barWidth
            var oneY: CGFloat = 0
            let twoX = oneX + r
            var twoY: CGFloat = 0
            var twoS: CGFloat = 0
            var twoE: CGFloat = 0
            var twoC: Bool = false
            let threeX = twoX + r
            let threeY = m
            if i % 2 == 1 {
                oneY = m - v
                twoY = m - v
                twoS = -180.degreesToRadians
                twoE = 0.degreesToRadians
                twoC = false
            } else {
                oneY = m + v
                twoY = m + v
                twoS = 180.degreesToRadians
                twoE = 0.degreesToRadians
                twoC = true
            }
            context.move(to: CGPoint(x: oneX, y: m))
            context.addLine(to: CGPoint(x: oneX, y: oneY))
            context.addArc(center: CGPoint(x: twoX, y: twoY), radius: r,
                           startAngle: twoS, endAngle: twoE, clockwise: twoC)
            context.addLine(to: CGPoint(x: threeX, y: threeY))
            context.strokePath()
            bar += 1
        }
    }

}
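The draw code above relies on a degreesToRadians helper that isn't defined in this snippet. A minimal sketch of the extension it needs (the exact form in my project may differ):

import UIKit

extension Int {
    // Converts a whole-number degree value to radians as a CGFloat
    var degreesToRadians: CGFloat {
        return CGFloat(self) * .pi / 180.0
    }
}

To use the view, add it to your layout, write new levels into waveforms, and call setNeedsDisplay(); the tap block below does exactly that.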

For the recording function, I used AVAudioEngine's installTap(onBus:bufferSize:format:block:) instance method to record, monitor, and observe the output of the input node.

let inputNode = self.audioEngine.inputNode
guard let format = self.format() else {
    return
}

inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { (buffer, time) in
    // Threshold (in dB) below which the input counts as silence
    let level: Float = -50
    let length: UInt32 = 1024
    buffer.frameLength = length
    let channels = UnsafeBufferPointer(start: buffer.floatChannelData, count: Int(buffer.format.channelCount))
    // Mean of magnitudes over the buffer, converted to decibels and clamped to -100...0
    var value: Float = 0
    vDSP_meamgv(channels[0], 1, &value, vDSP_Length(length))
    var average: Float = ((value == 0) ? -100 : 20.0 * log10f(value))
    if average > 0 {
        average = 0
    } else if average < -100 {
        average = -100
    }
    let silent = average < level
    let ts = NSDate().timeIntervalSince1970
    // Redraw at most every 0.1 s
    if ts - self.renderTs > 0.1 {
        let floats = UnsafeBufferPointer(start: channels[0], count: Int(buffer.frameLength))
        let frame = floats.map({ (f) -> Int in
            return Int(f * Float(Int16.max))
        })
        DispatchQueue.main.async {
            let seconds = (ts - self.recordingTs)
            self.timeLabel.text = seconds.toTimeString
            self.renderTs = ts
            // Downsample the buffer into the view's fixed number of bars
            let len = self.audioView.waveforms.count
            for i in 0 ..< len {
                let idx = ((frame.count - 1) * i) / len
                let f: Float = sqrt(1.5 * abs(Float(frame[idx])) / Float(Int16.max))
                self.audioView.waveforms[i] = min(49, Int(f * 50))
            }
            self.audioView.active = !silent
            self.audioView.setNeedsDisplay()
        }
    }
}
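Note that this file needs import AVFoundation and import Accelerate (for vDSP_meamgv), and the tap only starts producing buffers once the engine is running. A minimal sketch of the setup, assuming audioEngine is an AVAudioEngine property (the session handling in my project is a bit more involved):

import AVFoundation
import Accelerate

let audioEngine = AVAudioEngine()

func startRecording() {
    // Activate an audio session suitable for recording
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playAndRecord, mode: .default)
    try? session.setActive(true)

    // ... install the tap on audioEngine.inputNode as shown above ...

    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        print("Could not start the audio engine: \(error)")
    }
}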
Here is the article I wrote, and I hope that you will find what you are looking for: https://medium.com/flawless-app-stories/how-i-created-apples-voice-memos-clone-b6cd6d65f580

The project is also available on GitHub: https://github.com/HassanElDesouky/VoiceMemosClone

Please note that I'm still a beginner, and I'm sorry my code doesn't seem that clean!

answered Oct 04 '22 by Hassan ElDesouky