 

net/http server: too many open files error

Tags:

go

I'm trying to develop a simple job queue server with some workers that query it, but I've run into a problem with my net/http server. I'm surely doing something wrong, because after ~3 minutes my server starts displaying:

http: Accept error: accept tcp [::]:4200: accept4: too many open files; retrying in 1s

For information, it receives 10 requests per second in my test case.

Here are two files to reproduce this error:

```go
// server.go
package main

import (
	"net/http"
)

func main() {
	http.HandleFunc("/get", func(rw http.ResponseWriter, r *http.Request) {
		http.Error(rw, "Try again", http.StatusInternalServerError)
	})
	http.ListenAndServe(":4200", nil)
}
```

```go
// worker.go
package main

import (
	"net/http"
	"time"
)

func main() {
	for {
		res, _ := http.Get("http://localhost:4200/get")
		defer res.Body.Close()

		if res.StatusCode == http.StatusInternalServerError {
			time.Sleep(100 * time.Millisecond)
			continue
		}

		return
	}
}
```

I've already done some searching about this error and found some interesting answers, but none of them fixed my issue.

The first answer I saw was to correctly close the Body of the http.Get response; as you can see, I did that.

The second was to raise the file descriptor ulimit of my system, but since I won't control where my app will run, I'd prefer not to use this solution. (For information, it's set to 1024 on my system.)
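For reference, the current descriptor limits can be inspected with the standard ulimit shell builtin (the actual numbers will vary per system):

```shell
# Show the soft limit on open file descriptors for the current shell
ulimit -Sn
# Show the hard limit (the ceiling a non-root user may raise the soft limit to)
ulimit -Hn
```

A non-root user may raise the soft limit up to the hard limit, but as noted above, relying on this is fragile when you don't control the deployment environment.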

Can someone explain why this problem happens and how I can fix it in my code?

Thanks a lot for your time

EDIT: As Martin pointed out in the comments, I wasn't actually closing the Body. I closed it explicitly (without defer) and that fixed the issue. Thanks Martin! I thought continue would execute my defer; I was wrong: deferred calls only run when the surrounding function returns, so each retry iteration leaked a connection.

asked May 26 '16 by Uhsac



1 Answer

I found a post explaining the root problem in a lot more detail. Nathan Smith even explains how to control timeouts at the TCP level, if needed. Below is a summary of everything I could find on this particular problem, as well as best practices to avoid it in the future.

Problem

When a response is received, the connection is kept alive until the response body stream is closed, regardless of whether you actually need the body. So, as mentioned in this thread, always close the response body, even if you do not use or read its content:

```go
func Ping(url string) bool {
	// simple GET request on the given URL
	res, err := http.Get(url)
	if err != nil {
		// if unable to GET the given URL, the ping must fail
		return false
	}

	// always close the response body, even if the content is not required
	defer res.Body.Close()

	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
```

Best Practice

As mentioned by Nathan Smith, never use http.DefaultClient in production systems. This includes calls like http.Get, which uses http.DefaultClient under the hood.

Another reason to avoid http.DefaultClient is that it is a singleton (a package-level variable), so the garbage collector will never clean it up, and any idle streams/sockets it holds stay alive.

Instead, create your own instance of http.Client and remember to always specify a sane Timeout:

```go
func Ping(url string) bool {
	// create a new http.Client with a timeout of 2 seconds
	client := http.Client{Timeout: time.Second * 2}

	// simple GET request on the given URL
	res, err := client.Get(url)
	if err != nil {
		// if unable to GET the given URL, the ping must fail
		return false
	}

	// always close the response body, even if the content is not required
	defer res.Body.Close()

	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
```

Safety Net

The safety net is for that newbie on the team who does not know the shortfalls of http.DefaultClient usage, or for that very useful, but not so active, open-source library that is still riddled with http.DefaultClient calls.

Since http.DefaultClient is a singleton, we can easily change its Timeout setting to ensure that legacy code does not leave idle connections open.

I find it best to set this in the init function of the main package:

```go
package main

import (
	"net/http"
	"time"
)

func init() {
	/*
		Safety net for the 'too many open files' issue in legacy code.
		Set a sane timeout duration on http.DefaultClient to ensure idle
		connections are terminated.
		Reference: https://stackoverflow.com/questions/37454236/net-http-server-too-many-open-files-error
	*/
	http.DefaultClient.Timeout = time.Minute * 10
}
```
answered Oct 18 '22 by justinrza