
nginx - Unable to open primary script

I got this error message:

FastCGI sent in stderr: "Unable to open primary script: /home/messi/web/wordpress/index.php (No such file or directory)" while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: www.domain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.domain.com"

here are my configuration files:

/etc/php5/fpm/php.ini

cgi.fix_pathinfo=0
doc_root =
user_dir =
....

/etc/php5/fpm/php-fpm.conf

[global]
pid = /var/run/php5-fpm.pid
error_log = /var/log/php5-fpm.log
include=/etc/php5/fpm/pool.d/*.conf

/etc/php5/fpm/pool.d/www.conf

[www]
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
chdir = /
security.limit_extensions = .php .php3 .php4 .php5
php_flag[display_errors] = on
php_admin_value[error_log] = /var/log/fpm-php.www.log
php_admin_flag[log_errors] = on

/etc/nginx/nginx.conf

user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       /etc/nginx/mime.types;
    server_tokens off;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/sites-enabled/*;
}

/etc/nginx/sites-enabled/wordpress

server {
    listen   80;
    server_name www.domain.com;
    root /home/messi/web/wordpress;
    error_log /var/log/nginx/err.wordpress.log;
    index index.php;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    location ~ /\. {
        deny all;
    }
    location ~* /(?:uploads|files)/.*\.php$ {
        deny all;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }
}

I set up user permissions like this:

#adduser www-data messi
#chown -R www-data:www-data /home/messi/web
#chmod -R 664 /home/messi/web/wordpress

How can I resolve this? Thanks

asked Feb 28 '14 by user3145965

3 Answers

SELinux will cause this error on CentOS/RHEL 7+ by default :(

To test if SELinux is the source of your woes, do

setenforce 0 

... and see if everything works. If that fixed it, you can leave SELinux off (weak, you're better than that), or you can turn it back on with

setenforce 1 

... and then properly fix the issue.
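To check which mode SELinux is currently in before and after toggling it, getenforce (or sestatus for more detail) prints the active mode; both ship with the standard SELinux userland:

getenforce   # prints Enforcing, Permissive, or Disabled
sestatus     # loaded policy, mount point, and current mode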

If you do

tail -f /var/log/audit/audit.log 

... you'll see the SELinux issue. In my case, it was denying PHP-FPM access to web files. You can run the following commands to fix it:

setsebool -P httpd_can_network_connect_db 1
setsebool -P httpd_can_network_connect 1

This actually didn't fix it for me at first, but then restoring the SELinux context did:

restorecon -R -v /var/www 
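To verify the result, you can list the SELinux context on the files and confirm the booleans took (a quick check, assuming /var/www is still your web root):

ls -Zd /var/www    # context of the web root itself
ls -Z /var/www     # context of each entry inside it

getsebool httpd_can_network_connect httpd_can_network_connect_db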

Hope that helps.

answered by siliconrockstar

This is likely a permissions problem.

  1. Make sure that every parent directory has +x (traverse) permission for the user the nginx and/or php-fpm worker runs as.

    You can check these permissions with: namei -om /path/to/file (see the sketch after this list).

  2. If you have symlinks, make sure they point to a valid path.

  3. Make sure chroots have access to the right paths.

  4. Make sure SELinux (e.g. Fedora/CentOS), AppArmor (e.g. Ubuntu), or any other mandatory access control (MAC) system is not interfering with file access.

    For SELinux: check /var/log/audit/audit.log or /var/log/messages.

    For AppArmor: I'm not an Ubuntu user, and as far as I understand, the logging for AppArmor isn't always easy to figure out. You might check here for info: http://ubuntuforums.org/showthread.php?t=1733231
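For point 1, here is what that check might look like against the paths from the question (a sketch only; /home/messi/web/wordpress is the document root from the question, and which directories you actually need to touch depends on your own layout):

# Walk every path component and show the owner and mode of each one
namei -om /home/messi/web/wordpress/index.php

# chmod -R 664 (as in the question) also strips the execute/traverse bit from
# directories; restore it on the directories only, leaving the files at 664
find /home/messi/web/wordpress -type d -exec chmod 755 {} +

# The parent directories (/home/messi, /home/messi/web) also need +x for
# whichever user the nginx/php-fpm workers run as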

answered by ethanpil


It was SELinux in my case as well. I read some documentation found here:

https://wiki.centos.org/HowTos/SELinux
https://linux.die.net/man/1/chcon

and ended up with the command:

chcon -R -v --type=httpd_sys_content_t html/

....this changed the context of the files to the httpd type, which is what my web server (Nginx) was running as.
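To confirm the change took effect, ls -Z prints the SELinux context next to each entry (here html/ is the same site root passed to chcon above):

ls -Zd html/    # context of the directory itself
ls -Z html/     # context of the files inside it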

You can find what context your web server runs as using:

ps axZ | grep nginx

....which in my case gave me:

system_u:system_r:httpd_t:s0      6246 ?        Ss     0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
system_u:system_r:httpd_t:s0      6249 ?        S      0:00 nginx: worker process

Seeing that the context of the running service was httpd_t, I changed the context of my web site's root folder to match (recursively).

SELinux only allows a process to access files whose type the policy permits for that process's domain. Since the web server ran in the httpd_t domain, it made sense to label the site's files and folders with a type (httpd_sys_content_t) that httpd_t is allowed to read.

I'm new at this, by the way... but this seemed to be the best approach to me. It kept SELinux enabled, didn't lessen the security it provides, and matched up the context of the files with the process/service.
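One caveat: chcon changes the label directly on the files, so a later relabel (restorecon, or a full autorelabel) can undo it. If you want the labeling recorded permanently, semanage can store a file-context rule first (a sketch, assuming the semanage tool from policycoreutils is installed and using /var/www/html as a stand-in for your real document root):

# Record a persistent file-context rule for the document root (hypothetical path)
semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"

# Relabel the files on disk according to the recorded rule
restorecon -Rv /var/www/html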

answered by Zack A