
Pseudo code for Blockwise Non Local Means noise reduction algorithm

I have implemented a nice algorithm ("Non Local Means") for reducing noise in images. It is based on its Matlab implementation.

The problem with NLMeans is that the original algorithm is slow even in compiled languages like C/C++, and I am trying to run it in a scripting language.

One of the best solutions is to use the improved Blockwise NLMeans algorithm, which is ~60-80 times faster. The problem is that the paper describing it is written in complex mathematical language and it's really hard for me to understand the idea and program it (yes, I didn't study math at college).

That is why I am desperately looking for pseudo code of this algorithm.

(A modification of the original Matlab implementation would be just perfect.)

Termos asked Aug 12 '11 14:08

1 Answer

I admit, I was intrigued until I saw the result: 260+ seconds on a dual core, and that's without the overhead of a scripting language, and that's for the optimized Blockwise Non Local Means filter.

Let me break down the math for you; my idea of pseudo-code is Ruby, so that's what the sketches below are written in.

Non Local Means Filtering

Assume an image that's 200 x 100 pixels (20,000 pixels total), which is a pretty tiny image. We're going to have to go through all 20,000 pixels and evaluate each one as the weighted average of the other 19,999 pixels:

NL[v](i) = ∑_{j ∈ I} w(i,j) v(j)

where 0 ≤ w(i,j) ≤ 1 and ∑_j w(i,j) = 1

Understandably, this last part can be a little confusing, but this is really nothing more than a convolution filter the size of the whole image being applied to each pixel.
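
To make that concrete, here's a minimal Ruby sketch of the per-pixel formula. I haven't pinned down the weights above, so this uses the standard choice from the original NL-means paper (an exponential of the squared patch distance); PATCH, H, and the helper names are illustrative, not from any particular implementation. The image here is assumed to be a plain 2D Array of Floats (greyscale):

    PATCH = 3    # half-width of the comparison patch (7x7 patches)
    H     = 10.0 # filtering strength h; larger values smooth more

    # Safe pixel access: returns nil outside the image instead of
    # letting Ruby's negative indices wrap around.
    def pixel(image, i, j)
      return nil if i < 0 || j < 0 || i >= image.length || j >= image[0].length
      image[i][j]
    end

    # Squared L2 distance between the patches centered at (i1,j1) and (i2,j2).
    def patch_distance(image, i1, j1, i2, j2)
      d = 0.0
      (-PATCH..PATCH).each do |di|
        (-PATCH..PATCH).each do |dj|
          a = pixel(image, i1 + di, j1 + dj)
          b = pixel(image, i2 + di, j2 + dj)
          d += (a - b)**2 unless a.nil? || b.nil?
        end
      end
      d
    end

    # NL[v](i) = sum_j w(i,j) v(j), with the weights normalized to sum to 1.
    # The nested loops over every pixel pair are exactly why this is so
    # slow: O(pixels^2 * patch^2).
    def nl_means(image)
      rows, cols = image.length, image[0].length
      Array.new(rows) do |i|
        Array.new(cols) do |j|
          z   = 0.0 # normalizing constant, makes the weights sum to 1
          acc = 0.0
          rows.times do |i2|
            cols.times do |j2|
              w = Math.exp(-patch_distance(image, i, j, i2, j2) / (H * H))
              z   += w
              acc += w * image[i2][j2]
            end
          end
          acc / z
        end
      end
    end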

Blockwise Non Local Means Filtering

The blockwise implementation takes overlapping sets of voxels (volumetric pixels - the implementation you pointed us to is for 3D space). Presumably, taking a similar approach, you could apply this to 2D space, taking sets of overlapping pixels. Let's see if we can describe this...

NL[v](x_ijk) = (1/|A_i|) ∑_{i ∈ A_i} w(x_ijk, x_i) v(x_i)

where A_i is the set of overlapping blocks containing the pixel x_ijk being estimated, and similar conditions to the ones above apply to the weights.

[NB: I may be slightly off; it's been a few years since I did heavy image processing]
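
Roughly, in code (reusing pixel and H from the sketch above): each block is restored as a weighted average of candidate blocks, the block-level analog of the per-pixel formula. BLOCK and the function names are again made up for illustration, and I'm ignoring the restricted search window a real implementation would use:

    BLOCK = 2 # half-width of a block, so blocks are 5x5 and overlap

    # Extract the block centered at (i,j) as a flat array (nil where the
    # block hangs off the edge of the image).
    def block_at(image, i, j)
      (-BLOCK..BLOCK).flat_map do |di|
        (-BLOCK..BLOCK).map { |dj| pixel(image, i + di, j + dj) }
      end
    end

    # Squared L2 distance between two blocks, skipping out-of-image samples.
    def block_distance(b1, b2)
      d = 0.0
      b1.zip(b2) { |a, b| d += (a - b)**2 unless a.nil? || b.nil? }
      d
    end

    # One restored estimate for the block at (i,j): a weighted average of
    # all candidate blocks, with weights again decaying with block distance.
    def restore_block(image, i, j, centers)
      ref = block_at(image, i, j)
      z   = 0.0
      est = Array.new(ref.length, 0.0)
      centers.each do |ci, cj|
        cand = block_at(image, ci, cj)
        w    = Math.exp(-block_distance(ref, cand) / (H * H))
        z   += w
        cand.each_with_index { |v, k| est[k] += w * (v || 0.0) }
      end
      est.map { |v| v / z }
    end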

Algorithm

In all likelihood, we're talking about reducing the complexity of the algorithm at a minimal cost to reduction quality. The larger the sample vector, the higher the quality, but also the higher the complexity. By overlapping and then averaging the sample vectors from the image, then applying that weighted average to each pixel, we're looping through the image far fewer times.

  • Loop through the image to collect the sample vectors and store their weighted averages in an array.
  • Apply each weighted average (a number between 0 and 1) to each pixel, times the pixel's value.

Pretty simple, but the processing time is going to be horrid with larger images.
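
Wiring those two steps together (again reusing the helpers above; STEP is an illustrative stride, not a value from the paper): computing one estimate per block center every STEP pixels, instead of one per pixel, and then averaging the overlapping block estimates, is where the blockwise speedup comes from:

    STEP = 2 # stride between block centers; blocks still overlap

    def blockwise_nl_means(image)
      rows, cols = image.length, image[0].length
      centers = (0...rows).step(STEP).to_a.product((0...cols).step(STEP).to_a)

      sums   = Array.new(rows) { Array.new(cols, 0.0) }
      counts = Array.new(rows) { Array.new(cols, 0) }

      # Step 1: restore each block and scatter its estimate onto the
      # pixels it covers.
      centers.each do |i, j|
        est = restore_block(image, i, j, centers)
        k = 0
        (-BLOCK..BLOCK).each do |di|
          (-BLOCK..BLOCK).each do |dj|
            ii, jj = i + di, j + dj
            if ii.between?(0, rows - 1) && jj.between?(0, cols - 1)
              sums[ii][jj]   += est[k]
              counts[ii][jj] += 1
            end
            k += 1
          end
        end
      end

      # Step 2: each pixel becomes the average of all the block estimates
      # covering it (the 1/|A_i| factor in the equation above).
      Array.new(rows) do |i|
        Array.new(cols) do |j|
          counts[i][j] > 0 ? sums[i][j] / counts[i][j] : image[i][j]
        end
      end
    end

With STEP = 2 there are about a quarter as many block centers as pixels, so both the outer and the inner loop shrink; add the restricted search window I omitted and you get into the range of the speedup the paper claims.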

Final Thoughts

You're going to have to make some tough decisions. If you're going to use a scripting language, you're already dealing with significant interpretive overhead; it's far from optimal for heavy-duty image processing. If you're not processing medical images, in all likelihood there are far better algorithms to use, with lesser O's.

Hope this is helpful... I'm not terribly good at making a point clearly and concisely, so if I can clarify anything, let me know.

stslavik answered Nov 10 '22 01:11