What I do on my cloud instances is I redirect port 80 to port 3000 with this command:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
Then I launch my Node.js app on port 3000. Requests to port 80 will get mapped to port 3000.
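To double-check that the rule is in place (or to remove it later), you can list the NAT table with rule numbers; this is just a quick verification sketch, not part of the original setup:
sudo iptables -t nat -L PREROUTING -n --line-numbers
sudo iptables -t nat -D PREROUTING 1   # deletes rule number 1, if you ever need to undo the redirect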
You should also edit your /etc/rc.local file and add that line (minus the sudo). That will add the redirect when the machine boots up. You don't need sudo in /etc/rc.local because the commands there are run as root when the system boots.
Use the forever module to launch your Node.js app. It will make sure the app restarts if it ever crashes, and it will redirect console logs to a file.
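As a rough example (app.js and the log paths are placeholders for your own entry point and log locations), the launch command might look like:
forever start -o /var/log/myapp/out.log -e /var/log/myapp/err.log app.js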
Add your Node.js start script to the file you edited for port redirection, /etc/rc.local. That will run your Node.js launch script when the system starts.
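Put together, /etc/rc.local might end up looking something like the sketch below; the user name, paths and log locations are placeholders, and it assumes forever is installed globally:
#!/bin/sh -e
# Redirect incoming port 80 traffic to the Node.js app on port 3000
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
# Launch the app as an unprivileged user under forever
su - myappuser -c "forever start -o /var/log/myapp/out.log -e /var/log/myapp/err.log /home/myappuser/app/app.js"
exit 0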
This applies not only to Linode, but also to DigitalOcean, AWS EC2, and other VPS providers. However, on Red Hat-based systems /etc/rc.local is /etc/rc.d/rc.local.
Give Safe User Permission To Use Port 80
Remember, we do NOT want to run your applications as the root user, but there is a hitch: your safe user does not have permission to use the default HTTP port (80). Your goal is to be able to publish a website that visitors can use by navigating to an easy-to-use URL like http://ip/ (which implies port 80).
Unfortunately, unless you sign on as root, you'll normally have to use a URL like http://ip:port, where the port number is greater than 1024.
A lot of people get stuck here, but the solution is easy. There are a few options, but this is the one I like. Type the following commands:
sudo apt-get install libcap2-bin
sudo setcap cap_net_bind_service=+ep `readlink -f \`which node\``
Now, when you tell a Node application that you want it to run on port 80, it will not complain.
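To confirm the capability took effect, you can inspect the node binary with getcap (also shipped in libcap2-bin); the exact output format varies slightly between versions:
getcap $(readlink -f $(which node))
# expected output along the lines of:
# /usr/local/bin/node = cap_net_bind_service+ep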
Check this reference link
Drop root privileges after you bind to port 80 (or 443).
This allows port 80/443 to remain protected, while still preventing you from serving requests as root:
function drop_root() {
    // Drop the group first, then the user: once the process is no longer
    // root, it can no longer change its own group ID.
    process.setgid('nobody');
    process.setuid('nobody');
}
A full working example using the above function:
var process = require('process');
var http = require('http');

var server = http.createServer(function (req, res) {
    res.write("Success!");
    res.end();
});

server.listen(80, null, null, function () {
    // Still running as root here; that is what allowed binding to port 80.
    console.log('User ID:', process.getuid() + ', Group ID:', process.getgid());
    drop_root();
    // Now running as the unprivileged "nobody" user.
    console.log('User ID:', process.getuid() + ', Group ID:', process.getgid());
});
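Note that the process still has to be started as root (for example with sudo node server.js, where server.js is just a placeholder name) so the initial bind to port 80 succeeds; the privileges are dropped only once the socket is listening.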
See more details at this full reference.
For port 80 (which was the original question), Daniel is exactly right. I recently moved to https and had to switch from iptables to a lightweight nginx proxy managing the SSL certs. I found a useful answer along with a gist by gabrielhpugliese on how to handle that. Basically I:
Created an SSL Certificate Signing Request (CSR) via OpenSSL:
openssl genrsa 2048 > private-key.pem
openssl req -new -key private-key.pem -out csr.pem
Changed the location block in /etc/nginx/conf.d/example_ssl.conf to:
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header X-Real-IP $remote_addr;
}
Formatted the cert bundle for nginx by cat-ing the individual certs together, and linked to it in my nginx example_ssl.conf file (after uncommenting the relevant lines and dropping 'example' from the name):
ssl_certificate /etc/nginx/ssl/cert_bundle.cert;
ssl_certificate_key /etc/nginx/ssl/private-key.pem;
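For reference, the concatenation step and the resulting server block might look roughly like this; the certificate filenames and the domain are hypothetical, and the stock example_ssl.conf on your distribution will differ in its details:
cat your_domain.crt intermediate.crt > /etc/nginx/ssl/cert_bundle.cert

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/cert_bundle.cert;
    ssl_certificate_key /etc/nginx/ssl/private-key.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header X-Real-IP $remote_addr;
    }
}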
Hopefully that can save someone else some headaches. I'm sure there's a pure-node way of doing this, but nginx was quick and it worked.