Hosting multiple Express (node.js) apps on port 80

Over the last few days, I was trying to find a solution for hosting multiple Express apps on the same server (my vServer).

Starting with Apache and mod_proxy, I ended up with a plain Node solution, which I really like.

Let’s take a quick look at some of the different approaches out there:


Using Apache on port 80 as a proxy

ProxyPass /nodeurls/ http://localhost:9000/
ProxyPassReverse /nodeurls/ http://localhost:9000/

via stackoverflow

— no websockets
++ probably the easiest way to integrate with your running AMPP-stack
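For context, here is a minimal sketch of where those two directives would live (assumptions: a name-based virtual host with mod_proxy and mod_proxy_http enabled; the ServerName is a placeholder):

```apache
<VirtualHost *:80>
    ServerName node.example.com

    # Forward everything under /nodeurls/ to the Express app on port 9000
    ProxyPass /nodeurls/ http://localhost:9000/
    ProxyPassReverse /nodeurls/ http://localhost:9000/
</VirtualHost>
```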


Using a node.js app on port 80 as a wrapper for other node apps

  express()
    .use(express.vhost('hostname1.example', require('/path/to/hostname1').app)) // hostnames are placeholders
    .use(express.vhost('hostname2.example', require('/path/to/hostname2').app))
    .listen(80);

via stackoverflow

++ you can use websockets on port 80
— apps crash/restart/stop globally
— what about your Apache or the like?


Using node.js with node-http-proxy on port 80

var http = require('http'),
    httpProxy = require('http-proxy');

// hostnames and targets below are placeholders; the originals were lost
httpProxy.createServer({
  hostnameOnly: true,
  router: {
    'hostname1.example': '127.0.0.1:9001',
    'hostname2.example': '127.0.0.1:9002'
  }
}).listen(80);

++ proxy websockets to any port
— you might need to move your old web server to another port

The really cool thing about node-http-proxy is its ability to proxy websockets.
So you can have your apps running independently on different ports while serving everything to the user over port 80, and use websocket libraries on top of it.

Since I’m new to node.js and miles away from being an admin, any feedback is highly appreciated :)

14 Replies to “Hosting multiple Express (node.js) apps on port 80”

  1. Hmm, why would you want to listen on port 80? First, you shouldn’t start node as a privileged user. Instead, create a user to run the service and redirect all packets with iptables to your preferred port:

    # /sbin/iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 9000
    # /sbin/iptables -A FORWARD -p tcp --dport 9000 -j ACCEPT

    and then route using either httpProxy or a “router” app.
    Using Apache in front makes sense if you have other stuff running on the server, like Tomcat.

  2. Hi, I’m still a dummy on Node.js! :-)
    Using the 3rd approach, when I need to add a new site, will I need to restart the main server.js script? Or is there a way to add a new app/site without interrupting all the node.js apps?

  3. Thanks for your comment – the 3rd solution was the best I could come up with in terms of app independence. Of course you’ll have to restart the proxy – and you have a point there, as this affects the availability of all apps – but the other apps can keep running and keep their sessions etc.
    Plus, no other app is involved when one app crashes…
    Would a quick restart of the proxy really affect users of running apps that much? I think sockets are capable of reconnecting in such a case, aren’t they?

  4. Haven’t implemented it yet, but I can see the 3rd solution is a working proof of concept. You should be able to implement dynamic routing and config persistence on top of it, so you can modify route tables without restarting the server.

  5. Or you could just give the user running node privileges on port 80, using something like start-stop-daemon or setuid/setgid. Just because a service account has access to port 80 does not mean it needs sudoer access. For example, this is how a standard repo Apache starts up: the apache/httpd user is a non-privileged user who still has access to port 80 due to the way the server was started through upstart.

    Seems like the standard course of action, on *nix anyway, is to use a startup script that can access port 80 but do so using an unprivileged, no-login user. This is in keeping with the “default deny” security mechanism of *nix. The way I usually do it is create a user with access to NOTHING (disable shell login, ensure group membership is minimal, only set file/folder ownership over specific subtrees in the file system, etc.), then add only the required permissions (such as permission to write log files, run any cron jobs under that user account, etc.).

    Screwing around with IPTables IMHO just creates something else that can break, and will drive you crazy trying to figure out why ports aren’t answering, particularly if there’s an upstream firewall, such as the one provided by AWS in front of your servers.

    Hope that helps.

    In my case I’m writing an API with Sails, and I’m trying to wire up a node.js upload server (not written with Sails) on a different port, but redirect port-80 requests to the upload server’s port on the server side. I don’t want users hitting multiple ports, particularly ports other than 80 and 443, since many firewalls don’t allow outbound connections to arbitrary ports.

    There’s a way to do this by having the app listen for connections on the HTTP server.

    I’m trying to do something similar, but without breaking Sails standards. If someone would like to enlighten me, please do.

  6. Thank you very much, this helped me a lot with finding a solution for running node.js apps as virtual hosts.

Comments are closed.