When creating a TCP client using the socket API, a local port on the host is used for the connection to the TCP server. That port then appears to be unavailable for another application that wants to bind to it and act as a TCP server. Because the client's port is determined dynamically, it may be a port that my application wants to use as a server.
Is it true that a TCP client will dynamically pick a port to use and prevent other programs from acting as a server on that port?
Can a client control which port it uses, to make sure it does not occupy a port required by another program?
Yes, the port will be selected from a predefined range (the ephemeral port range, which varies from OS to OS) and is blocked for other use for as long as the connection holds it. You can select a specific port with bind() if you need this.
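For example, in Python's socket module a client can reserve a known local port by calling bind() before connect(). This is a minimal sketch; the port number 50000 is an arbitrary assumption, and the bind() will fail with EADDRINUSE if something else already holds that port.

```python
import socket

# Client socket: bind to a chosen local port *before* connecting.
# Port 50000 is a hypothetical example value.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.bind(("0.0.0.0", 50000))        # fails with EADDRINUSE if taken
print(cli.getsockname()[1])         # -> 50000
# cli.connect(("example.com", 80))  # then connect as usual
cli.close()
```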
EDIT:
The only case where you can have multiple TCP sockets bound to the same local port/IP is when you accept() new sockets from a listening socket. You can never bind() a TCP socket to an in-use port/IP. There's also some confusion about SO_REUSEADDR: this socket option does not allow port reuse, it just relaxes the rules when the only sockets bound to the port you want belong to dead connections that are timing out (e.g. sitting in the TIME_WAIT state).
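Here is a short sketch of that bind() rule over loopback: a second socket cannot bind() to the port already held by a listener, and the attempt fails with EADDRINUSE.

```python
import errno
import socket

# A listening socket on an OS-chosen free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick
srv.listen(1)
port = srv.getsockname()[1]

# A second socket can never bind() to the same in-use port/IP.
other = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    other.bind(("127.0.0.1", port))
except OSError as e:
    print(e.errno == errno.EADDRINUSE)  # -> True
finally:
    other.close()
    srv.close()
```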
Is it true that the TCP client will dynamically pick a port to use and prevent other programs from being a server on that port?
Yes, it is.
Can a client control what port it uses to make sure it does not occupy a port required by another program?
Yes, you can, but you generally should not. Use bind() to choose the port.
OK, here's the thing:
When you establish a connection to a server, the OS assigns your client socket a port number greater than 1024. The point is, it is going to be a high (ephemeral) port number.
Your server should not listen on a TCP port greater than 1024. Basically, you should keep your server running on a low port; that is what the HTTP documents tell us for well-known services.
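To see the high ephemeral port in action, you can connect over loopback and inspect the client side with getsockname(). This sketch assumes a throwaway listener just so the client has something to connect to.

```python
import socket

# Throwaway listener so the client has something to connect to.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())

# The OS picked the client's local port from its ephemeral range.
print(cli.getsockname()[1] > 1024)  # -> True
cli.close()
srv.close()
```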
You can also check whether a port is already taken, and if it is, open your server socket on another port.
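A hypothetical try-to-bind probe for that last check (the name port_is_free is my own). Note there is an inherent race: another process could grab the port between this check and your real bind(), so treat it as a best-effort hint.

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Hypothetical probe: the port is free if a bind() to it succeeds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

# Usage: fall back to another port if the preferred one is taken.
preferred, fallback = 8080, 8081    # hypothetical port numbers
port = preferred if port_is_free(preferred) else fallback
```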