If Moore's Law holds true, and CPUs/GPUs keep getting faster, will software (and, by extension, the software developers who write it) still push the boundaries far enough that you still need to optimize your code? Or will a naive factorial implementation simply be good enough?
10 Reasons Why You Need Code Optimization:
1. Cleaner Code Base. As a project matures, and more and more developers start to work on it, duplications and overlaps usually sooner or later ...
2. Higher Consistency.
3. Faster Sites.
4. Better Code Readability.
5. More Efficient Refactoring.
I've found that the problem with premature optimization mostly happens when re-writing existing code to be faster. I can see how it could be a problem to write some convoluted optimization in the first place, but mostly I see premature optimization rearing its ugly head in fixing what ain't (known to be) broke.
It is necessary that the code we write is not only clean and free of defects but also optimized, i.e. the time it takes to execute stays within the intended limits. To achieve this, we need to refer to the Java coding standards and review our code to make sure it conforms to them.
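As a minimal sketch of what that looks like in practice (the class and method names below are my own, purely for illustration), consider building a string inside a loop: most Java coding standards flag repeated String concatenation and recommend StringBuilder instead, because the concatenation version re-copies the accumulated string on every iteration.

```java
// Hypothetical example: joining many fields into one line.
// Repeated '+=' on String is quadratic in total copied characters;
// StringBuilder keeps the work linear.
import java.util.List;

public class CsvJoiner {

    // Slower form: each '+=' copies the whole accumulated string.
    static String joinNaive(List<String> fields) {
        String line = "";
        for (String f : fields) {
            line += f + ",";
        }
        return line;
    }

    // Optimized form: a single StringBuilder grows in place.
    static String joinOptimized(List<String> fields) {
        StringBuilder sb = new StringBuilder();
        for (String f : fields) {
            sb.append(f).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> fields = List.of("id", "name", "email");
        System.out.println(joinOptimized(fields)); // prints: id,name,email,
    }
}
```

Both methods are the same length and equally readable, which is why this kind of change is usually treated as standard practice rather than premature optimization.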
But if both algorithms are similarly complex, or if the expected workload is so large that you already know you'll need the faster one, then optimizing early is a sound engineering decision that will reduce your total workload in the long run.
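For instance (a hypothetical sketch, with names invented for illustration, not taken from the discussion above): testing membership against a List versus a HashSet takes about the same amount of code, so if you already know you'll be doing millions of lookups, choosing the hashed version up front is exactly this kind of cheap early decision.

```java
// Hypothetical illustration: both versions are equally simple to write,
// so when the expected workload is large, picking the hash-based lookup
// up front costs nothing and avoids a rewrite later.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MembershipCheck {

    // O(n) per lookup: fine for tiny lists, painful for millions of queries.
    static long countKnownLinear(List<String> known, List<String> queries) {
        return queries.stream().filter(known::contains).count();
    }

    // O(1) average per lookup: same amount of code, scales far better.
    static long countKnownHashed(List<String> known, List<String> queries) {
        Set<String> knownSet = new HashSet<>(known);
        return queries.stream().filter(knownSet::contains).count();
    }

    public static void main(String[] args) {
        List<String> known = List.of("alice", "bob", "carol");
        List<String> queries = List.of("bob", "dave", "alice");
        System.out.println(countKnownHashed(known, queries)); // prints: 2
    }
}
```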
2x the processing power doesn't do much to ameliorate the awfulness of your lousy n^2 search.
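To put a rough number on that (a back-of-the-envelope sketch, not from the original discussion): if runtime grows as n^2, a machine that is twice as fast only handles about 1.41x as much input in the same wall-clock time, while a linear algorithm handles the full 2x.

```java
// Hypothetical back-of-the-envelope: how much bigger an input can a
// 2x-faster machine process in the same wall-clock time?
public class SpeedupScaling {
    public static void main(String[] args) {
        double speedup = 2.0;
        // Quadratic cost: time = c * n^2, so n scales with sqrt(speedup).
        System.out.printf("O(n^2): input grows by %.2fx%n", Math.sqrt(speedup));
        // Linear cost: n scales directly with the speedup.
        System.out.printf("O(n):   input grows by %.2fx%n", speedup);
        // O(n log n) sits close to linear for large n, minus the log factor.
    }
}
```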