I am working on a very typical web application. The main component of the user experience is a widget that a site owner would install on their front page. Every time their front page loads, the widget talks to our server and displays some of the data that returns.
So there are two components to this web application:

1. The front-end UI that our clients use.
2. The back end that serves the widget data on every load of a client's front page.

Previously we had all of this running in PHP. Now we are experimenting with Rails, which is fantastic for #1 (the front-end UI). The question is how to do #2, the serving of widget data, efficiently. Obviously this is a much higher load than the front end, since it is hit every time the front page loads on one of our clients' websites.
I can see two obvious approaches:
A. Parallel Stack: Set up a parallel stack that uses something other than Rails (e.g. our old PHP-based approach) but accesses the same database as the front end.
B. Rails Metal: Use Rails Metal/Rack to bypass the Rails routing mechanism, but keep the API call responder within the Rails app.
My main question:
But also...
And...
Thanks in advance for any insights.
Not really the most elaborate answer, but:

I would not use Metal for this; I would use page caching instead. That way the requests are served straight by the web server, with no dynamic language involved at all. When you create a resource, expire the corresponding index page. A very basic example would be:
class PostsController < ApplicationController
  # Cache the rendered index as a static file that the web server can
  # serve without touching Rails at all.
  caches_page :index

  def index
    @posts = Post.all
    respond_to do |format|
      format.html
      format.xml
    end
  end

  def create
    @post = Post.new(params[:post])
    respond_to do |format|
      if @post.save
        # Remove the cached index so the next request regenerates it.
        expire_page :action => :index
        format.html { redirect_to posts_path }
        format.xml
      else
        format.html { render :action => "new" }
        format.xml  { render :xml => @post.errors, :status => :unprocessable_entity }
      end
    end
  end
end
For more information read the Caching Guide.
I would only start pulling functionality down to Rack/Metal once I had determined the exact cause of the performance problem. With recent versions of Rails (3, in particular) and Ruby, the framework stack itself is very rarely the bottleneck. Start measuring, get some real metrics, and optimise judiciously.
My rule of thumb: if you don't have metrics, you can't reason intelligently about your performance issue and any possible solution.
In my experience the issues are nearly always the same two: the views and the database.
As Ryan suggests, caching can be incredibly effective ... you can even move your architecture to use a reverse proxy in front of your Rails request stack to provide even more capacity. A cache like Varnish provides incredibly high performance. Rails has built-in support for ETags and the HTTP headers that make a reverse-proxy solution work well.
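As a rough illustration of that built-in support (a minimal sketch; the controller and model names are hypothetical, not from the question), a conditional GET lets both the browser and a reverse proxy like Varnish reuse a response instead of forcing Rails to re-render it:

class WidgetDataController < ApplicationController
  def show
    @widget = Widget.find(params[:id])

    # Sets the ETag and Last-Modified headers; if the client (or Varnish)
    # already holds a current copy, Rails answers 304 Not Modified and
    # skips rendering entirely.
    fresh_when :etag => @widget, :last_modified => @widget.updated_at, :public => true

    # Alternatively, give the proxy an explicit lifetime to cache against:
    # expires_in 5.minutes, :public => true
  end
end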
The other thing to do is look at the database layer itself. Caching can go a long way there, but some query optimisation may be useful too. Making sure you use Active Record's :include sensibly is a great step towards avoiding N+1 query situations, and Rails has fantastic support for dropping memcached into the stack with little or no configuration, which can provide excellent performance gains.
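To make that concrete (a minimal sketch; the models and the "expensive" computation are invented for illustration), eager loading and a memcached-backed cache lookup might look like this:

# Eager-load the association so the comments come back in one extra query
# instead of one query per post (the classic N+1 problem).
@posts = Post.all(:include => :comments)

# With config.cache_store = :mem_cache_store, expensive results can be kept
# in memcached and recomputed only when the entry expires.
@stats = Rails.cache.fetch("widget_stats/#{params[:id]}", :expires_in => 10.minutes) do
  Widget.find(params[:id]).comments.count  # stand-in for a costly query
end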
PHP loads the entire environment on each request. In production mode, Rails loads the entire environment once when the server starts up. There is certainly a fair amount of Ruby code being executed during normal controller-action invocations. However, in production mode, none of this code is related to loading the environment. And using Rails Metal instead of the usual Rails controller stack removes a number of those layers, yielding a few additional milliseconds of time saved per request.
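For reference, a Metal endpoint is just a bare Rack handler sitting in front of the rest of the app. A minimal sketch along the lines of what script/generate metal produces in Rails 2.3 (the /widget_data path and the JSON body are made up for this example):

# app/metal/widget_data.rb
class WidgetData
  def self.call(env)
    if env["PATH_INFO"] =~ %r{^/widget_data}
      [200, {"Content-Type" => "application/json"}, ['{"status":"ok"}']]
    else
      # "X-Cascade" => "pass" hands anything else back to the normal
      # Rails routing and controller stack.
      [404, {"Content-Type" => "text/html", "X-Cascade" => "pass"}, ["Not Found"]]
    end
  end
end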