 

Process Swift DispatchQueue data without overloading resources

I have a Swift DispatchQueue that receives data at 60fps. However, depending on the phone or the amount of data received, processing that data at 60fps becomes too expensive. In practice, it is fine to process only half of it, or however much the available computation resources allow.

let queue = DispatchQueue(label: "com.test.dataprocessing")

func processData(data: SomeData) {
    queue.async {
        // data processing 
    }
}

Does DispatchQueue somehow allow me to drop some data if resources are limited? Currently, the processing is affecting the main UI of SceneKit. Or is there something better than DispatchQueue for this type of task?

asked Mar 08 '18 by user3087615



1 Answer

There are a couple of possible approaches:

  1. The simple solution is to keep track of your own Bool indicating whether a task is already in progress, and when more data arrives, only process it if nothing is already running:

    private var inProgress = false
    private let syncQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".sync.progress")  // for reasons beyond the scope of this question, reader-writer with concurrent queue is not appropriate here
    private let processQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".process")     // background queue that does the actual processing (declaration assumed; not shown in the original snippet)
    
    func processData(data: SomeData) {
        // Atomically check-and-set the flag; returns true if prior work is still in flight.
        let isAlreadyRunning = syncQueue.sync { () -> Bool in
            if self.inProgress { return true }
    
            self.inProgress = true
            return false
        }
    
        if isAlreadyRunning { return }   // drop this data; the processing queue is still busy
    
        processQueue.async {
            defer {
                self.syncQueue.async { self.inProgress = false }
            }
    
            // process `data`
        }
    }
    

    All of that syncQueue stuff is just to make sure we have thread-safe access to the inProgress property. Don't get lost in those details; use whatever synchronization mechanism you prefer (e.g. a lock; a lock-based sketch appears after this list). All that matters is that access to the Bool status flag is thread-safe.

    Focus on the basic idea, that we'll keep track of a Bool flag to know whether the processing queue is still tied up processing the prior set of SomeData. If it is busy, return immediately and don't process this new data. Otherwise, go ahead and process it.

  2. While the above approach is conceptually simple, it won't offer great performance. For example, if processing the data always takes 0.02 seconds (i.e. 50 per second) and the input arrives 60 times per second, you'll end up processing only about 30 per second: each 0.02-second task is still running when the next frame arrives, so roughly every other frame gets dropped.

    A more sophisticated approach is to use a GCD user data source, something that says "run the following closure when the destination queue is free". And the beauty of these dispatch user-data sources is that they coalesce multiple signals into a single event while the queue is busy. These data sources are useful for decoupling the rate at which input arrives from the rate at which it is processed.

    So, you first create a data source that simply indicates what should be done when data comes in:

    private var dataToProcess: SomeData?
    private lazy var source = DispatchSource.makeUserDataAddSource(queue: processQueue)
    
    func configure() {
        source.setEventHandler { [unowned self] in
            // grab the most recently posted data (synchronized, as in the first example)
            guard let data = self.syncQueue.sync(execute: { self.dataToProcess }) else { return }
    
            // process `data`
        }
        source.resume()
    }
    

    So, when there's data to process, we update our synchronized dataToProcess property and then tell the data source that there is something to process:

    func processData(data: SomeData) {
        syncQueue.async { self.dataToProcess = data }   // the latest data replaces any not-yet-processed data
        source.add(data: 1)                             // signal the source; pending signals are coalesced
    }
    

    Again, just like in the previous example, we're using syncQueue to synchronize access to a property across multiple threads. This time we're synchronizing dataToProcess rather than the inProgress state variable from the first example, but the idea is the same: we must be careful to synchronize our interaction with any property shared across threads.

    Anyway, using this pattern in the above scenario (input arriving at 60 fps, processing able to handle only 50 per second), the resulting throughput is much closer to the theoretical maximum of 50 fps (I got between 42 and 48 fps, depending upon the queue priority), rather than 30 fps.
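
As a footnote to the lock alternative mentioned under the first option: since any synchronization mechanism will do for that Bool flag, here is a minimal sketch of the same drop-if-busy idea using an NSLock instead of the serial syncQueue. The FrameProcessor type, its queue label, and the SomeData stand-in are placeholders of mine, not part of the original code.

    import Foundation
    
    // Stand-in for the question's payload type.
    struct SomeData { }
    
    final class FrameProcessor {
        private let lock = NSLock()          // guards the inProgress flag
        private var inProgress = false
        private let processQueue = DispatchQueue(label: "com.test.dataprocessing")
    
        func processData(data: SomeData) {
            // Check-and-set the flag under the lock.
            lock.lock()
            let isAlreadyRunning = inProgress
            if !isAlreadyRunning { inProgress = true }
            lock.unlock()
    
            if isAlreadyRunning { return }   // drop this data; prior work is still running
    
            processQueue.async {
                defer {
                    self.lock.lock()
                    self.inProgress = false
                    self.lock.unlock()
                }
    
                // process `data`
            }
        }
    }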

The latter, dispatch-source approach can conceivably get more frames (or whatever you're processing) processed per second, and it leaves less idle time on the processing queue. The following image attempts to illustrate graphically how the two alternatives compare: in the former approach you lose every other frame of data, whereas the latter approach only loses a frame when two separate sets of input arrive before the processing queue becomes free and are coalesced into a single call of the dispatch source.

[Image: timeline comparing the two approaches, showing which incoming frames each one drops]
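
For completeness, here is a rough sketch of how either version of processData(data:) might be driven from SceneKit's per-frame callback, which is presumably where the question's 60 fps data originates. DataProcessor and SomeData are placeholders for whichever implementation above you adopt; the SCNSceneRendererDelegate method is the only real API shown.

    import SceneKit
    
    final class Renderer: NSObject, SCNSceneRendererDelegate {
        private let processor = DataProcessor()   // whichever of the two implementations above you chose
    
        // SceneKit calls this once per frame (typically 60 fps).
        func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
            let data = SomeData()                  // build this frame's payload
            // Safe to call every frame: the processor either drops the data (option 1)
            // or coalesces it (option 2) while it's busy, so this callback returns
            // immediately and the SceneKit UI stays responsive.
            processor.processData(data: data)
        }
    }

Remember to set this object as the SCNView's delegate and keep your own strong reference to it, since SceneKit does not retain its delegate.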

answered Oct 17 '22 by Rob