I have an AWS Ubuntu instance with GitLab CE configured. Now I want to configure GitLab CI to deploy my NodeJS app after each commit. I haven't found a proper step-by-step solution for this.
My NodeJS app runs in /var/www/mean/my-app
at http://myapp.mydomain.com
and the hosting is handled by an Apache proxy:
<VirtualHost *:80>
  ServerAdmin [email protected]
  ServerName gitlab.mydomain.com
  ServerAlias www.gitlab.mydomain.com
  ServerSignature Off

  ProxyPreserveHost On
  AllowEncodedSlashes NoDecode

  <Location />
    Require all granted
    ProxyPassReverse http://localhost:8080
    ProxyPassReverse http://gitlab.mydomain.com/
  </Location>

  RewriteEngine on
  RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f [OR]
  RewriteCond %{REQUEST_URI} ^/uploads/.*
  RewriteRule .* http://127.0.0.1:8080%{REQUEST_URI} [P,QSA,NE]

  DocumentRoot /home/git/gitlab/public

  LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" common_forwarded
  ErrorLog /var/log/apache2/gitlab_error.log
  CustomLog /var/log/apache2/gitlab_forwarded.log common_forwarded
  CustomLog /var/log/apache2/gitlab_access.log combined env=!dontlog
  CustomLog /var/log/apache2/gitlab.log combined
</VirtualHost>
And the app is bootstrapped using the forever module:
forever start app.js
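For completeness, I start and verify it from the app directory like this (forever list just confirms the process is registered):

cd /var/www/mean/my-app
forever start app.js
forever list    # confirm app.js is running under forever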
The GitLab config check
sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production
gives:
Checking GitLab Shell ...
GitLab Shell version >= 4.0.0 ? ... OK (4.0.0)
Repo base directory exists?
default... yes
Repo storage directories are symlinks?
default... no
Repo paths owned by git:git?
default... yes
Repo paths access is drwxrws---?
default... yes
hooks directories in repos are links: ...
dev / my-app ... ok
Running /home/git/gitlab-shell/bin/check
Check GitLab API access: OK
Access to /home/git/.ssh/authorized_keys: OK
Send ping to redis server: OK
gitlab-shell self-check successful
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Number of Sidekiq processes ... 1
Checking Sidekiq ... Finished
Checking Reply by email ...
Reply by email is disabled in config/gitlab.yml
Checking Reply by email ... Finished
Checking LDAP ...
LDAP is disabled in config/gitlab.yml
Checking LDAP ... Finished
Checking GitLab ...
Git configured with autocrlf=input? ... yes
Database config exists? ... yes
All migrations up? ... yes
Database contains orphaned GroupMembers? ... no
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Uploads directory setup correctly? ... yes
Init script exists? ... yes
Init script up-to-date? ... yes
projects have namespace: ...
dev / my-app ... yes
Redis version >= 2.8.0? ... yes
Ruby version >= 2.1.0 ? ... yes (2.3.1)
Your git bin path is "/usr/bin/git"
Git version >= 2.7.3 ? ... yes (2.7.4)
Active users: 1
Checking GitLab ... Finished
I log in to the instance over SSH from my system:
ssh -i API-Key.pem [email protected]
The key was created using:
ssh-keygen -t rsa
Runner config in /etc/gitlab-runner/config.toml:
concurrent = 1
check_interval = 0

[[runners]]
  name = "Production Runner"
  url = "http://gitlab.mydomain.com/ci"
  token = "xxxxxxxxxxxxxxxxxxxxxxxxxxx"
  executor = "ssh"
  [runners.ssh]
    user = "ubuntu"
    host = "ip-XXX-XX-XX-XXX"
    identity_file = "/home/ubuntu/.ssh/id_rsa"
  [runners.cache]
Code in .gitlab-ci.yml:
test_async:
  script:
    - npm install
Because of my bad configuration, the runner gives this error:
Running with gitlab-ci-multi-runner 1.7.1 (f896af7)
Using SSH executor...
ERROR: Preparation failed: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Will be retried in 3s ...
My questions are:
What should the content of the .gitlab-ci.yml file be to deploy the committed code to the application location?
And how do I configure a runner for this? If I have to use an SSH runner, what should its configuration be?
Update:
After providing the .pem file as identity_file, I get the following error:
Running with gitlab-ci-multi-runner 1.7.1 (f896af7)
Using SSH executor...
Running on ip-xxx-xx-xx-xxx via ip-xxx-xx-xx-xxx...
Cloning repository...
Cloning into 'builds/a92f1b91/0/dev/my-app'...
fatal: unable to access 'http://gitlab-ci-token:xxxxxxxx@gitlab.mydomain.com/dev/my-app.git/': The requested URL returned error: 500
ERROR: Build failed: Process exited with: 1. Reason was: ()
Now there is a problem: git clone over HTTP does not work, but SSH cloning works.
Note: both GitLab and the build environment are on the same host (the same AWS instance).
The HTTP clone issue has been reported as a bug in GitLab as well.
In your /etc/gitlab-runner/config.toml you have:
concurrent = 1
check_interval = 0

[[runners]]
  name = "Production Runner"
  url = "http://gitlab.mydomain.com/ci"
  token = "xxxxxxxxxxxxxxxxxxxxxxxxxxx"
  executor = "ssh"
  [runners.ssh]
    user = "ubuntu"
    host = "ip-XXX-XX-XX-XXX"
    identity_file = "/home/ubuntu/.ssh/id_rsa"
  [runners.cache]
You define:
host should be your build host IP (in other words, where you are going to execute your build).
user should be your user on the build host, not on the GitLab host.
You can test how your passwordless SSH works with those same settings, e.g.
ssh -i /home/ubuntu/.ssh/id_rsa ubuntu@ip-XXX-XX-XX-XXX
If that works and doesn't ask you for a password - all is good.
If that breaks - it means you didn't set up passwordless auth correctly.
The easiest way to set up passwordless public-key auth is to use the command called
ssh-copy-id
For example, I want to set up passwordless SSH auth between my GitLab host and my build host.
My build host IP is 192.168.0.42 and its hostname is build.home.
I already have id_rsa and id_rsa.pub generated under /home/ubuntu/.ssh on the GitLab host.
Now let's push our public key from the GitLab host to the build host. The first time, it will ask you for a password.
[root@gitlab ~]# ssh-copy-id -i /home/ubuntu/.ssh/id_rsa.pub ubuntu@192.168.0.42
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ubuntu@192.168.0.42's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ubuntu@192.168.0.42'"
and check to make sure that only the key(s) you wanted were added.
Note that in the above example I was pushing the public key to the remote host.
But when I now connect to this remote host, I specify my private key.
[root@gitlab ~]# ssh -i /home/ubuntu/.ssh/id_rsa ubuntu@192.168.0.42
[ubuntu@build ~]$ hostname
build.home
Try testing your public-key auth between the GitLab host and the remote host, and update your question.
Resources:
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/executors/ssh.md
P.S: I will post my working environment config a bit later today when I get home.
Edit 1:
Here is my config.
My GitLab host is called gitlab.home (192.168.0.41)
and I have another VM called sshbuild.home (192.168.0.43).
Below is how I added the SSH runner.
Step 1. Install the runner package on my gitlab.home:
yum install gitlab-ci-multi-runner
and register my remote sshbuild.home VM as an SSH runner; a sketch of the registration is shown below.
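In non-interactive mode the registration would look roughly like this (a sketch; the registration token is a placeholder taken from GitLab's admin/runners page, and the exact flag names should be verified against gitlab-ci-multi-runner register --help for your version):

# Hypothetical one-shot registration producing the config.toml shown further below
gitlab-ci-multi-runner register \
  --non-interactive \
  --url "http://gitlab.home/" \
  --registration-token "REGISTRATION_TOKEN" \
  --description "sshbuild" \
  --executor "ssh" \
  --ssh-user "root" \
  --ssh-host "192.168.0.43" \
  --ssh-port "22" \
  --ssh-identity-file "/root/.ssh/id_rsa"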
I also need to make sure that passwordless auth works between my gitlab.home and sshbuild.home, so:
[root@gitlab gitlab-runner]# ssh-copy-id 192.168.0.43
The authenticity of host '192.168.0.43 (192.168.0.43)' can't be established.
ECDSA key fingerprint is b4:6a:1b:72:d1:7d:1f:34:f7:bb:ef:ad:69:42:11:13.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.0.43's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.0.43'"
and check to make sure that only the key(s) you wanted were added.
[root@gitlab gitlab-runner]# ssh 192.168.0.43
Last login: Fri Nov 18 17:05:06 2016 from 192.168.0.101
[root@sshbuild ~]# exit
Then I disabled my other runner (a shell runner) and made the new SSH runner project-specific, just to make sure that when I commit, the job is executed on the SSH runner.
And I do commit and voila - we have our successful test, run on the sshbuild.home host.
P.S: And here is my /etc/gitlab-runner/config.toml file
[root@gitlab gitlab-runner]# cat /etc/gitlab-runner/config.toml
concurrent = 1
check_interval = 0

[[runners]]
  name = "sshbuild"
  url = "http://gitlab.home/"
  token = "2bc1825d8fbde09fd632637c60e9e7"
  executor = "ssh"
  [runners.ssh]
    user = "root"
    host = "192.168.0.43"
    port = "22"
    identity_file = "/root/.ssh/id_rsa"
  [runners.cache]
P.S: I get a similar error to yours if I disable HTTP for my repo under Settings in the web interface. However, the error is not 500 but 403.
Edit 2:
Now I will cover .gitlab-ci.yml, based on a simple HelloWorld project.
In my HelloWorld project I have a file called server.js which, when run with node, simply creates a web server listening on port 3000 that replies with "Hello World" to GET requests.
const http = require('http');

const hostname = '0.0.0.0';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World!\n');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
My goal is to be able to run a test case against it. In this case I will run a simple
curl localhost:3000 | grep "Hello World"
but I need to put it into a separate script which exits with status 0 on success and non-zero on failure:
cat simpletest.sh
#!/bin/bash

cleanup ()
{
  count=`netstat -anp|grep ":3000"|grep LISTEN|awk '{print $NF}'|cut -d\/ -f1|wc -l`
  if [ $count -ne 0 ]
  then
    pid=`netstat -anp|grep ":3000"|grep LISTEN|awk '{print $NF}'|cut -d\/ -f1`;
    echo "Need to kill PID $pid";
    kill $pid
  fi
}

echo "Running simple test"
curl localhost:3000|grep "Hello World"
if [ $? -eq 0 ]
then
  echo "Test was successful"
  echo "Clean up node.js process"
  cleanup
  exit 0
else
  echo "Test failed"
  echo "Clean up node.js process"
  cleanup
  exit 1
fi
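Run by hand, the whole flow looks like this; the script's exit status is exactly what GitLab CI uses to decide whether the job passed:

node server.js &          # start the app in the background
sleep 3                   # give it time to start, as in the CI job below
/bin/bash simpletest.sh
echo $?                   # 0 if "Hello World" was served, 1 otherwise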
Now let's cover my .gitlab-ci.yml
cat .gitlab-ci.yml
test:

  before_script:
    - echo "Before script"
    - hostname
    - /bin/bash cleanup.sh

  script:
    - echo "Main Script"
    - node server.js &
    - sleep 3
    - /bin/bash simpletest.sh
I have a single job called test.
In before_script it runs the cleanup.sh script, which simply kills the PID listening on port 3000 if one is found:
cat cleanup.sh
#!/bin/bash
count=`netstat -anp|grep ":3000"|grep LISTEN|awk '{print $NF}'|cut -d\/ -f1|wc -l`
if [ $count -ne 0 ]
then
  pid=`netstat -anp|grep ":3000"|grep LISTEN|awk '{print $NF}'|cut -d\/ -f1`;
  echo "Need to kill PID $pid";
  kill $pid
fi
exit 0
And under script: it runs node with my server.js, gives it 3 seconds to start, and then runs the test against it.
The test will also take care of killing the node PID once it is done.
So let's commit and check the status of the build.
And now let's change our server.js to output "HelloWorld" instead of "Hello World", with no space in between. I expect my test case to fail, since it expects literally "Hello World". And it does fail.
This is the most simplistic CI use case I could come up with.
Now, if based on the status of the test case you would like to deploy code to another environment, you would have to start using stages.
So your .gitlab-ci.yml would turn into something like this (a real working example):
cat -n .gitlab-ci.yml
 1  stages:
 2    - test
 3    - deploy
 4
 5  run_test_case:
 6    stage: test
 7    before_script:
 8      - echo "Before script"
 9      - hostname
10      - /bin/bash cleanup.sh
11
12    script:
13      - echo "Main Script"
14      - node server.js &
15      - sleep 3
16      - /bin/bash simpletest.sh
17
18  deploy_to_production:
19    stage: deploy
20    script:
21      - echo "Run code here to do production deployment"
22    environment:
23      name: production
This, upon git push, will succeed.
On line 21 I simply ran echo, but it could be replaced with a script that pushes to your remote staging or production environment, for example:
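Here is a minimal sketch, assuming the deploy job runs on the app host itself (as in your single-instance setup), the runner's SSH user can write to /var/www/mean/my-app, and the app is managed by forever as in your question; deploy.sh is a hypothetical helper committed next to your code:

#!/bin/bash
# deploy.sh - hypothetical sketch: sync the CI checkout into the app
# directory and restart the forever-managed process.
set -e

APP_DIR=/var/www/mean/my-app   # app location from the question

# Copy the build's working tree into place, leaving out VCS metadata
rsync -a --delete --exclude=.git ./ "$APP_DIR/"

cd "$APP_DIR"
npm install --production

# Restart the app under forever; stop may fail if the app isn't running yet
forever stop app.js || true
forever start app.js

Line 21 would then become - /bin/bash deploy.sh.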