Trouble with Nginx and Multiple Meteor/Nodejs Apps

I understand that multiple Node.js apps, and I assume by extension Meteor apps, can be run on one server using Nginx. I have Nginx set up and running on an Ubuntu server, and I can even get it to respond to requests and proxy them to one of my applications. However, I hit a roadblock when trying to get Nginx to proxy traffic to the second application.

Some background:

  • 1st app running on port 8001
  • 2nd app running on port 8002
  • Nginx listening on port 80
  • Attempting to get nginx to send traffic at / to app one and traffic at /app2/ to app two
  • Both apps can be reached by going to domain:8001 and domain:8002

My Nginx config:

upstream mydomain.com {
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
}

# the nginx server instance
server {
  listen 0.0.0.0:80 default_server;
  access_log /var/log/nginx/mydomain.log;

  location /app2 {
    rewrite /app2/(.*) /$1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:8002;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:8001;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}

Any insight as to what might be going on when traffic goes to /app2/ would be greatly appreciated!

asked Apr 29 '13 by jak119


1 Answer

Your proxy_pass directives point straight at the app ports:

proxy_pass http://127.0.0.1:8001;
proxy_pass http://127.0.0.1:8002;

These should reference a named upstream instead:

proxy_pass http://my_upstream_name;

Then define the upstream:

upstream my_upstream_name {
    # note: nginx load balances the servers in an upstream round robin, so with both
    # apps in one upstream some requests would go to the 8001 app and others to the 8002 app
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

A few tips for controlling the proxy (take a look at the nginx docs on the upstream module):

weight = NUMBER - sets the weight of the server; if not set, the weight is equal to one. Use it to skew the default round-robin balancing.

max_fails = NUMBER - the number of unsuccessful attempts at communicating with the server within the time period (assigned by the fail_timeout parameter) after which it is considered inoperative. If not set, the number of attempts is one. A value of 0 turns off this check. What is considered a failure is defined by proxy_next_upstream or fastcgi_next_upstream (except http_404 errors, which do not count towards max_fails).

fail_timeout = TIME - the time during which max_fails unsuccessful attempts at communicating with the server must occur for the server to be considered inoperative, and also the time for which the server will be considered inoperative before another attempt is made. If not set, the time is 10 seconds. fail_timeout has nothing to do with upstream response time; use proxy_connect_timeout and proxy_read_timeout to control that (see the sketch after this list).

down - marks the server as permanently offline, to be used with the ip_hash directive.

backup - (0.6.7 or later) only uses this server if the non-backup servers are all down or busy (cannot be used with the ip_hash directive).
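
A small sketch of where those timeout directives go (the values here are illustrative, not from the original answer): they belong in the location that proxies to the upstream, not in the upstream block itself:

    location / {
        proxy_pass http://my_upstream_name;
        proxy_connect_timeout 5s;    # max time to establish a connection to a backend
        proxy_read_timeout    60s;   # max time between two successive reads from the backend
    }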

A generic example:

    upstream my_upstream_name {
        server backend1.example.com     weight=5;
        server 127.0.0.1:8080           max_fails=3 fail_timeout=30s;
        server unix:/tmp/backend3;
    }

    # referenced from a location block with:
    # proxy_pass http://my_upstream_name;

Though this is what you need:

If you just want to control the load between several backend instances of one app:

    upstream my_upstream_name {
        server 127.0.0.1:8080   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8081   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8082   max_fails=3 fail_timeout=30s;
        # the "backup" keyword means this server is only used when the rest are non-responsive
        server 127.0.0.1:8083   backup;
    }

    # referenced with: proxy_pass http://my_upstream_name;

If you have two or more apps, use one upstream per app, like:

    upstream my_upstream_name {
        server 127.0.0.1:8080   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8081   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8082   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8083   backup;
    }

    upstream my_upstream_name_app2 {
        server 127.0.0.1:8084   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8085   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8086   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8087   backup;
    }

    upstream my_upstream_name_app3 {
        server 127.0.0.1:8088   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8089   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8090   max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8091   backup;
    }
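
Applied to the original question, a sketch might look like the following (the upstream names app1 and app2 are illustrative; the ports, the /app2 rewrite, and the websocket headers are taken from the question's config):

    upstream app1 {
        server 127.0.0.1:8001;
    }

    upstream app2 {
        server 127.0.0.1:8002;
    }

    server {
        listen 80 default_server;
        access_log /var/log/nginx/mydomain.log;

        location /app2 {
            rewrite /app2/(.*) /$1 break;
            proxy_pass http://app2;                    # second app, referenced by upstream name
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;    # keep the websocket headers for Meteor
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location / {
            proxy_pass http://app1;                    # first app, referenced by upstream name
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }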

Hope it helps.

answered Sep 21 '22 by jmingov