I have implemented a nice algorithm ("Non Local Means") for reducing noise in images. It is based on its Matlab implementation.
The problem with NLMeans is that the original algorithm is slow even in compiled languages like C/C++, and I am trying to run it in a scripting language.
One of the best solutions is to use the improved Blockwise NLMeans algorithm, which is ~60-80 times faster. The problem is that the paper describing it is written in dense mathematical language, and it's really hard for me to understand the idea and program it (yes, I didn't study math in college).
That is why I am desperately looking for pseudo code for this algorithm.
(A modification of the original Matlab implementation would be just perfect.)
I admit, I was intrigued until I saw the result: 260+ seconds on a dual core, and that doesn't even include the overhead of a scripting language, and that's for the Optimized Block Non Local Means filter.
Let me break down the math for you. My idea of pseudo-code is writing it in Ruby.
Assume an image that's 200 x 100 pixels (20,000 pixels total), which is a pretty tiny image. We're going to have to go through all 20,000 pixels and evaluate each one on the weighted average of the other 19,999 pixels:
$$NL[v](i) = \sum_{j \in I} w(i,j)\, v(j)$$

where $0 \le w(i,j) \le 1$ and $\sum_{j \in I} w(i,j) = 1$.
Understandably, this last part can be a little confusing, but this is really nothing more than a convolution filter the size of the whole image being applied to each pixel.
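To make that concrete, here is a rough Ruby sketch of the plain, pixel-wise version. The weight uses the standard NL-Means form (a decaying exponential of the patch distance); the parameter names (`patch_radius`, `search_radius`, `h`) and the restriction of the sum to a search window are my own illustrative assumptions, not something taken from the paper or the Matlab code.

```ruby
# Naive pixel-wise NL-Means sketch. `image` is assumed to be a 2D array
# of floats (rows of pixel intensities). Parameter names are illustrative.

def patch(image, x, y, radius)
  h = image.length
  w = image[0].length
  values = []
  (-radius..radius).each do |dy|
    (-radius..radius).each do |dx|
      yy = (y + dy).clamp(0, h - 1)   # clamp at the borders
      xx = (x + dx).clamp(0, w - 1)
      values << image[yy][xx]
    end
  end
  values
end

def patch_distance(a, b)
  a.zip(b).sum { |u, v| (u - v)**2 } / a.length.to_f
end

def nl_means(image, patch_radius: 1, search_radius: 5, h: 10.0)
  height = image.length
  width  = image[0].length
  result = Array.new(height) { Array.new(width, 0.0) }

  height.times do |y|
    width.times do |x|
      p_ref = patch(image, x, y, patch_radius)
      weight_sum = 0.0
      value_sum  = 0.0
      # In the formula the sum runs over every pixel j in I; restricting it
      # to a search window is the usual practical shortcut.
      ([y - search_radius, 0].max..[y + search_radius, height - 1].min).each do |j|
        ([x - search_radius, 0].max..[x + search_radius, width - 1].min).each do |i|
          w = Math.exp(-patch_distance(p_ref, patch(image, i, j, patch_radius)) / (h * h))
          weight_sum += w
          value_sum  += w * image[j][i]
        end
      end
      # Dividing by weight_sum normalizes the weights so they sum to 1.
      result[y][x] = value_sum / weight_sum
    end
  end
  result
end
```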
The blockwise implementation takes overlapping sets of voxels (volumetric pixels - the implementation you pointed us to is for 3D space). Presumably, taking a similar approach, you could apply this to 2D space, taking sets of overlapping pixels. Let's see if we can describe this...
$$NL[v](ijk) = \frac{1}{|A_i|} \sum_{i} w(ijk, i)\, v(i)$$

where $A$ is a vector of the pixels to be estimated, and the same conditions on $w$ as above apply.
[NB: I may be slightly off; it's been a few years since I did heavy image processing.]
In all likelihood, we're talking about reducing the complexity of the algorithm at a minimal cost to denoising quality. The larger the sample vector, the higher the quality, but also the higher the complexity. By overlapping and then averaging the sample vectors from the image, then applying that weighted average to each pixel, we loop through the image far fewer times.
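Here is a correspondingly rough sketch of that blockwise/averaging idea, reusing the `patch` and `patch_distance` helpers from the sketch above. Again, the parameters (`block_radius`, `step`, `search_radius`, `h`) are illustrative guesses, and this leaves out the refinements the actual paper describes; it only shows the structure: estimate overlapping blocks on a coarse grid, then average the estimates that cover each pixel.

```ruby
# Blockwise sketch: estimate whole overlapping blocks on a sparse grid,
# then give each pixel the average of every block estimate that covers it
# (the 1/|A_i| term above). Reuses patch/patch_distance from the previous sketch.

def blockwise_nl_means(image, block_radius: 2, step: 3, search_radius: 6, h: 10.0)
  height = image.length
  width  = image[0].length
  sums   = Array.new(height) { Array.new(width, 0.0) }
  counts = Array.new(height) { Array.new(width, 0) }

  # Visit block centres every `step` pixels instead of every pixel --
  # this is where the bulk of the speedup comes from.
  (0...height).step(step) do |y|
    (0...width).step(step) do |x|
      p_ref      = patch(image, x, y, block_radius)
      estimate   = Array.new(p_ref.length, 0.0)
      weight_sum = 0.0

      ([y - search_radius, 0].max..[y + search_radius, height - 1].min).step(step) do |j|
        ([x - search_radius, 0].max..[x + search_radius, width - 1].min).step(step) do |i|
          cand = patch(image, i, j, block_radius)
          w = Math.exp(-patch_distance(p_ref, cand) / (h * h))
          weight_sum += w
          cand.each_with_index { |v, k| estimate[k] += w * v }
        end
      end

      # Scatter the normalized block estimate back onto the pixels it covers.
      k = 0
      (-block_radius..block_radius).each do |dy|
        (-block_radius..block_radius).each do |dx|
          yy = y + dy
          xx = x + dx
          if yy.between?(0, height - 1) && xx.between?(0, width - 1)
            sums[yy][xx]   += estimate[k] / weight_sum
            counts[yy][xx] += 1
          end
          k += 1
        end
      end
    end
  end

  # Each pixel becomes the average of all overlapping block estimates
  # (falling back to the original value if no block touched it).
  Array.new(height) do |y|
    Array.new(width) do |x|
      counts[y][x] > 0 ? sums[y][x] / counts[y][x] : image[y][x]
    end
  end
end
```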
Pretty simple, but the processing time is going to be horrid with larger images.
You're going to have to make some tough decisions. If you're going to use a scripting language, you're already dealing with significant interpretive overhead. It's far from optimal to use a scripting language for heavy-duty image processing. If you're not processing medical images, in all likelihood, there are far better algorithms to use with lower big-O complexity.
Hope this is helpful... I'm not terribly good at making a point clearly and concisely, so if I can clarify anything, let me know.