When Node.js came out, it did more than anything else to popularize evented programming. Ruby, however, does have EventMachine, which supports writing evented code.
The requirements for supporting eventing in Rails are (see the sketch after this list):
1. An evented server (Thin, Rainbows!) that runs a reactor.
2. Fibers (Ruby 1.9+) to make evented code easier to write; without them we would have to fall back on threads or callbacks.
3. Evented versions of all gems that perform IO (for example, mysql2).
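A minimal sketch of how items 1 and 2 fit together, assuming only the eventmachine gem: the reactor performs the non-blocking wait, while a Fiber lets the calling code read top-to-bottom as if it were synchronous. The async_sleep helper is hypothetical, standing in for any evented gem call (an em-http request, a non-blocking mysql2 query, and so on).

    require 'eventmachine'
    require 'fiber'

    # Hypothetical helper: pauses the current fiber for `seconds` without
    # blocking the reactor, then resumes it from the timer callback.
    def async_sleep(seconds)
      fiber = Fiber.current
      EM.add_timer(seconds) { fiber.resume }
      Fiber.yield
    end

    EM.run do
      Fiber.new do
        puts "before the non-blocking wait"
        async_sleep(1)   # the reactor keeps serving other callbacks here
        puts "after the non-blocking wait"
        EM.stop
      end.resume
    end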
Node.js showed the obvious benefits of evented programming, so why is the Rails community not adopting EventMachine? I think one reason Rails is not fully portable to EventMachine is its dependency on underlying gems that may not be evented. Does anyone know if there is a plan to move in that direction?
Rails can do what Node.js does, but Node.js started out by advocating evented programming to all library authors, so by convention most dependencies you add to package.json in Node are evented and work with Node.js out of the box.
The biggest reason is that the Rails ecosystem was not built for evented IO, and the introduction of a single piece of non-evented IO into your application eliminates the benefits. It's very possible to write evented code in Ruby (and with Rails), but it's not necessarily straightforward: it's not always clear when gems do or do not perform evented IO, and the developer would need to spend a lot of time chasing down where the application may be blocking. By comparison, Node was created with the implicit ideal that IO should never be synchronous, and its entire ecosystem has flowed from that ideal, meaning the developer doesn't have to worry about whether their IO operations are going to be synchronous; the default assumption is that they are asynchronous.
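As a rough illustration of that point, here is a sketch (again assuming only the eventmachine gem) of how a single blocking call freezes the reactor, and with it every other connection the process is serving. Kernel#sleep stands in for any non-evented gem doing blocking IO.

    require 'eventmachine'

    EM.run do
      # Evented work: prints once per second, as long as nothing blocks.
      EM.add_periodic_timer(1) { puts "reactor tick #{Time.now}" }

      EM.add_timer(3) do
        puts "entering a blocking call..."
        sleep 5   # stands in for a non-evented gem; no ticks print during this
        puts "...the reactor was frozen for 5 seconds"
        EM.stop
      end
    end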
Additionally, evented web applications are only really useful when you're IO-bound. If your application is CPU-bound, or is doing heavy synchronous CPU work, then an evented model is probably not the right approach anyhow. Ruby can require a significant amount of CPU, primarily due to the language's metaprogramming constructs and garbage collector (which should improve substantially in Ruby 2.1!), which may make it less suited than Node to evented programming.
Rails has many concurrency models available - forking, preemptive threading, and eventing - and it's up to the developer to choose the one that best fits their app domain. Forking is the default because it's easy, doesn't require any special considerations (as long as you're deploying on a POSIX system!), and Ruby didn't have system threads when Rails was created. Now, with Ruby 1.9+ (system threads, GIL) and JRuby (no GIL!), threaded code is very easy to deploy. Ruby 2.0 brings a COW-friendly garbage collector, which means that forking is more efficient than before, as well.
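To make the first two models concrete, here is a small sketch of their shape, with a half-second sleep standing in for a request; in real deployments a server such as Unicorn (forking) or Puma (threaded) manages this for you.

    # Each "request" is a unit of work identified by an id.
    work = ->(id) { sleep 0.5; puts "handled request #{id} in process #{Process.pid}" }

    # Forking: every request gets its own isolated process (POSIX only).
    pids = (1..3).map { |id| Process.fork { work.call(id) } }
    pids.each { |pid| Process.wait(pid) }

    # Preemptive threading: requests share one process. On MRI the GIL lets
    # only one thread run Ruby code at a time, but threads still overlap on IO.
    threads = (1..3).map { |id| Thread.new { work.call(id) } }
    threads.each(&:join)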
At the end of the day, evented code isn't the default because it requires more work from the developer, and for many people, the default forking model is good enough. In the cases where it isn't, the developer has the option of threaded or evented code, as best fits their infrastructure and application domain.