Node.js deployment: what method should we use in an enterprise environment with Jenkins and Chef?

Let me first explain the context.
Context
We currently run a Jenkins server and use Chef Server for our configuration management. We're moving towards continuous deployment, and this is the workflow I've been working on:

Code gets checked in to Git
GitLab triggers Jenkins to run a new build
Jenkins pulls in the latest code and runs npm install
Jenkins creates an RPM using FPM
The RPM is uploaded to the RPM repository
Jenkins uploads the Chef deployment cookbook (kept in the application's Git repository) to Chef Server.
Jenkins triggers a new deployment of the application by running chef-client.
The new RPM is installed by the deployment cookbook.

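To make the npm install and FPM steps above concrete, this is roughly what the packaging boils down to (the application name, version and paths are placeholders):

    # Simplified sketch of the build/packaging step (placeholder names and paths).
    npm install --production          # install dependencies, including native builds
    fpm -s dir -t rpm \
        -n my-node-app -v 1.2.3 \
        --prefix /opt/my-node-app \
        .                             # package the build directory into an RPM
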
For (manually triggered) promotions to the staging and production environments there is no internet connection available; the RPM overcomes this problem. The cookbooks are developed using Berkshelf.
The Node.js applications deployed this way sometimes use compiled native libraries (one project has 3+ dependencies that compile native code).
I know very little about these kinds of deployment processes, but one disadvantage I've heard is that by using RPMs and compiling only once, the build environment (currently Jenkins itself) must have the same architecture as the deployment environments.
The bonus of using RPMs is that the artifact remains exactly identical across all environments; it doesn't need recompiling and doesn't pull hundreds of dependencies from everywhere.
Still, the workflow seems a bit elaborate and having to stick to the same architecture doesn't feel very flexible to me.
For our use case we need the following:

Deploy to the cloud quickly (likely Amazon)
Deploy to our own infrastructure quickly (which currently has no internet connection, but might allow some access if there is good reason to)
Application updates, albeit somewhat uncommon, should be easy to deploy and automate; continuous deployment should be possible
The software architecture is a microservices architecture, and I expect dozens of Node.js applications to be deployed across various servers (likely multiple deployments on the same server for simplicity).

For my own projects I've mostly used Heroku, which takes hardly any effort to set up. The above workflow took two weeks to create (the first time around).
Questions
The sheer effort of managing all this makes me question some of the above steps:

Is rsyncing and running npm install as bad as it sounds, pulling in all those dependencies and recompiling in every environment? Is this really as unstable as I'm led to believe? (I have a Java and PHP background; in PHP nothing was ever compiled and FTPing was the norm back in the day, whereas in Java everything is neatly packaged.)
Why use an RPM over, say, a tarball? (Up until a week ago I had never built an RPM manually; I know very little about its capabilities and what to use or not use here.)
I've been working on what I'd call a "deployment cookbook" in Chef that basically sets up the deployment directory, monit configuration, init script and (optionally) nginx proxy configuration. This deployment cookbook is versioned the same as the application itself and shipped within the application's Git repository. I've not found any best practices on this in the Chef community, even though I expected it to be very common. Is this not the way to go, or even an anti-pattern?
Is deploying multiple microservices on the same server (on different port numbers, say) really bad? Does it make sense? (I've looked briefly at Docker, but thought it would introduce too much complexity as a means to logically separate microservices; we're still struggling to set the whole thing up in the first place.)

Any experiences you might be able to share would be much appreciated!

Solutions/Answers:

Solution 1:

1) It's better to ship all the dependencies with your app and npm rebuild them on the target machine. Or, if you want to go enterprise, you can rebuild the modules on the build server and pack them into a tarball, Docker or LXC container, VM image, you name it. There is no silver bullet. Personally I prefer plain LXC containers. But the general approach is: bundle the modules with the app and rebuild the binary (native) modules on the target platform.
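
As a rough illustration of that "bundle and rebuild" approach, assuming node_modules was installed on the build server and shipped with the app (paths are placeholders):

    # Illustrative only: after unpacking the app (with its bundled node_modules)
    # on the target machine, recompile the native modules for the local platform.
    cd /opt/my-app
    npm rebuild        # re-runs the native (node-gyp) builds for compiled dependencies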

2) For simple script applications it's better to use a tarball or even git clone. Really, you don't need all the power and complexity of system package managers in this case. But if you're going to use a custom-built nginx, some kind of system-wide library, or something like that, you'd better use RPM or DEB and set up an appropriate repository for your custom packages.
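
For example, a bare-bones tarball deployment could look like this (host names and paths are placeholders, and no system package manager is involved):

    # Illustrative tarball deployment (placeholder names and paths).
    tar czf my-app-1.2.3.tar.gz .                       # on the build server
    scp my-app-1.2.3.tar.gz deploy@app01:/tmp/
    ssh deploy@app01 'mkdir -p /opt/my-app && tar xzf /tmp/my-app-1.2.3.tar.gz -C /opt/my-app'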

3) I'm not using Chef, but for any kind of big project it is better to separate the deployment scripts into a standalone repository. What I mean is that your deployment code is not your application's code. Keeping them together is like having two separate apps in one repo: possible, but not good practice.
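
For illustration, such a split could look like this (hypothetical layout, not a prescription):

    my-app/                  # application repository
      package.json
      src/
    my-app-deploy/           # separate deployment repository (e.g. a Chef cookbook)
      metadata.rb
      Berksfile
      recipes/default.rb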

4) It's pretty OK. It's fine for scaling, because you can start with just one physical machine and grow as you go (though it only sounds easy; I spent a hell of a lot of time making my current project scalable). And it's always very good for integration testing: you can spawn a whole environment, run integration tests, grab the results, and start over with new tests in a fresh environment.
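
As a toy illustration of that integration-testing flow (service names, ports and the test location are made up):

    # Illustrative only: start two services on one box, run integration tests, tear down.
    PORT=3001 node /opt/service-a/server.js & A=$!
    PORT=3002 node /opt/service-b/server.js & B=$!
    (cd /opt/integration-tests && npm test)    # hypothetical integration test suite
    kill $A $B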
