There is something that I'm really not understanding about Node.js: pretty much everywhere you read that Node.js is not recommended for HPC (high-performance computing) due to its asynchronous but single-threaded nature.
You also find Node.js almost always explained alongside Express.js, for building really fast web servers or services that return HTML or JSON after a query to an SQL or NoSQL database.
But here's the thing.
You can also find lots of packages on npm built for time-consuming, CPU-intensive operations, like fluent-ffmpeg for video encoding. Or you can use request and cheerio to build a web scraper.
npm is also full of command-line applications written in Node.js. Are all these applications meant only for non-time-consuming operations?
We can also find a lot of frameworks, like Next.js, that, at least to me, don't seem to be doing anything simple.
I love using Node and JavaScript to build web servers, services, and command-line applications too, but sometimes I feel like I haven't understood the real potential and the real limits of Node.js.
If you look closely at the fluent-ffmpeg package, you'll note that it says:
In order to be able to use this module, make sure you have ffmpeg installed on your system
This is a hint as to what's going on in this case. This package is not reimplementing the entirety of ffmpeg, but instead simply serving as an API to an existing ffmpeg installation.
If you look at the code, you can see that it's actually just spawning a copy of ffmpeg to do the work. This therefore isn't actually running "in node".
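As a rough illustration (a simplified sketch, not fluent-ffmpeg's actual code, and the file names are just placeholders), this kind of wrapper boils down to handing the heavy lifting to an external ffmpeg process via child_process.spawn and listening for events:

```js
const { spawn } = require('child_process');

function encode(input, output, callback) {
  // The CPU-intensive encoding runs in a separate ffmpeg process;
  // Node itself only waits on the process's I/O events.
  const ffmpeg = spawn('ffmpeg', ['-i', input, output]);

  ffmpeg.stderr.on('data', (chunk) => {
    // ffmpeg writes progress information to stderr; parse it here if needed
  });

  ffmpeg.on('close', (code) => {
    callback(code === 0 ? null : new Error('ffmpeg exited with code ' + code));
  });
}

encode('input.mp4', 'output.webm', (err) => {
  if (err) throw err;
  console.log('encoding finished');
});
```

The event loop stays free the whole time; Node is only orchestrating the external process, not doing the encoding.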
So that's ffmpeg, what about your other examples? Well, I suspect most of them aren't as CPU heavy as you might think - after all, the entire design of many, many node applications is to deal with HTML and webpages, and a scraper isn't something that takes a lot of processing power to do.
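As a rough sketch of why (using Node 18+'s built-in fetch and the cheerio package here, rather than the older request module), a scraper's elapsed time is dominated by waiting on the network, which Node handles asynchronously; the parsing that actually runs on the main thread is comparatively cheap:

```js
const cheerio = require('cheerio');

async function scrapeLinks(url) {
  // I/O-bound: while the request is in flight, the event loop is free
  const html = await (await fetch(url)).text();

  // CPU-bound, but typically only for a few milliseconds per page
  const $ = cheerio.load(html);
  return $('a')
    .map((i, el) => $(el).attr('href'))
    .get();
}

scrapeLinks('https://example.com')
  .then((links) => console.log(links))
  .catch((err) => console.error(err));
```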
So, "What does "cpu intensive operations" really mean?" is a pretty subjective one. Some things to note from your source link and real life:
The copyright at the bottom of that page is 2011. That's ancient in JavaScript development terms. The advice was written before many iterations and innovations happened; it's likely not wholly wrong, but it's missing the perspective we have today.
CPU-heavy applications are called out in comparison to their I/O:
very heavy on CPU usage, and very light on actual I/O
Web scrapers are probably not considered "light on actual I/O"
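To see why that distinction matters, here's a tiny illustrative snippet: a synchronous CPU-bound loop blocks Node's single event loop, so nothing else (not even a timer) can run until it finishes, whereas time spent waiting on I/O blocks nothing:

```js
// Burn CPU synchronously for roughly `ms` milliseconds
function busyFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* spin */ }
}

setTimeout(() => console.log('timer fired'), 100);

busyFor(2000);                // during these 2 seconds no callbacks can run
console.log('CPU work done'); // "timer fired" only prints after this line
```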
This is a subjective choice. No one can dictate exactly how you should be implementing your application. If they were, they'd be writing it, not you.
The real world is not strictly divided into "CPU-intensive" and not. Many applications start with requirements that look great for Node, and then later some get added that aren't as perfect a fit, or are even a bad fit. Real-world teams can't always reinvent everything whenever a new requirement gets added, so shims like the mentioned fluent-ffmpeg package get created.
So how do you know the limits? Again, this is a subjective choice. It's fair to set some hard boundaries, like video encoding, as things that really should not be done in pure javascript. But the space from there to a simple API gets pretty murky depending on the exact requirements and details. If it works and is reasonably performant, it's probably ok! You might get more performance out of another system, but you might also lose your knowledge of the ecosystem and integration with the community.