I'm trying to develop a simple job queue server with some workers that query it, but I encountered a problem with my net/http server. I'm surely doing something wrong, because after ~3 minutes my server starts displaying:
http: Accept error: accept tcp [::]:4200: accept4: too many open files; retrying in 1s
For information, it receives 10 requests per second in my test case.
Here are two files to reproduce this error:
```go
// server.go
package main

import (
	"net/http"
)

func main() {
	http.HandleFunc("/get", func(rw http.ResponseWriter, r *http.Request) {
		http.Error(rw, "Try again", http.StatusInternalServerError)
	})
	http.ListenAndServe(":4200", nil)
}
```

```go
// worker.go
package main

import (
	"net/http"
	"time"
)

func main() {
	for {
		res, _ := http.Get("http://localhost:4200/get")
		defer res.Body.Close()
		if res.StatusCode == http.StatusInternalServerError {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		return
	}
}
```
I have already done some searching about this error and found some interesting answers, but none of them fixed my issue.
The first answer I saw was to correctly close the Body of the http.Get response; as you can see, I did that.
The second answer was to change the file descriptor ulimit of my system, but since I won't control where my app will run, I'd prefer not to use this solution (for information, it's set to 1024 on my system).
Can someone explain why this problem happens and how I can fix it in my code?
Thanks a lot for your time
EDIT 2: In the comments Martin says that I'm not closing the Body. I tried closing it (without defer, then) and it fixed the issue. Thanks Martin! I thought that `continue` would execute my defer; I was wrong.
I found a post explaining the root problem in a lot more detail. Nathan Smith even explains how to control timeouts at the TCP level, if needed. Below is a summary of everything I could find on this particular problem, as well as the best practices to avoid it in the future.
When a response is received, the connection is kept alive until the response body stream is closed, regardless of whether the body content is needed. So, as mentioned in this thread, always close the response body, even if you do not need to use/read its content:
```go
func Ping(url string) bool {
	// simple GET request on given URL
	res, err := http.Get(url)
	if err != nil {
		// if unable to GET given URL, then ping must fail
		return false
	}
	// always close the response-body, even if content is not required
	defer res.Body.Close()
	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
```
As mentioned by Nathan Smith, never use `http.DefaultClient` in production systems; this includes calls like `http.Get`, as it uses `http.DefaultClient` at its base. Another reason to avoid `http.DefaultClient` is that it is a singleton (package-level variable), meaning that the garbage collector will not try to clean it up, which will leave idling subsequent streams/sockets alive. Instead, create your own instance of `http.Client` and remember to always specify a sane `Timeout`:
```go
func Ping(url string) bool {
	// create a new instance of http client struct, with a timeout of 2sec
	client := http.Client{Timeout: time.Second * 2}
	// simple GET request on given URL
	res, err := client.Get(url)
	if err != nil {
		// if unable to GET given URL, then ping must fail
		return false
	}
	// always close the response-body, even if content is not required
	defer res.Body.Close()
	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
```
The safety net is for that newbie on the team who does not know the shortfalls of `http.DefaultClient` usage, or even that very useful, but not so active, open-source library that is still riddled with `http.DefaultClient` calls. Since `http.DefaultClient` is a singleton, we can easily change the `Timeout` setting, just to ensure that legacy code does not cause idle connections to remain open.
I find it best to set this in the `package main` file, in the `init` function:
```go
package main

import (
	"net/http"
	"time"
)

func init() {
	/*
	   Safety net for 'too many open files' issue on legacy code.
	   Set a sane timeout duration for the http.DefaultClient, to ensure
	   idle connections are terminated.
	   Reference: https://stackoverflow.com/questions/37454236/net-http-server-too-many-open-files-error
	*/
	http.DefaultClient.Timeout = time.Minute * 10
}
```