I am using the HTTP POST method to send a request to an HTTP server URL.
The time difference between request and response is around 60 seconds, but according to the server team they send their response within 7 seconds of the request reaching their end.
I don't think the network is taking the remaining 53 seconds to deliver the packet to the server, so what could the issue be?
In this application we are using synchronous communication between the client and the server.
EDIT: When I used Wireshark on my network to capture the traffic, I found that the request appears in Wireshark 20 seconds after my application actually sends it. What is the reason behind this delay? What could cause a request to appear on the network 20 seconds after it was actually sent?
In regards to your edit, to help you understand: networking follows a model called Open Systems Interconnection (OSI). This model is broken down into seven distinct layers, each with its own function.
Those layers are:

1. Physical
2. Data Link
3. Network
4. Transport
5. Session
6. Presentation
7. Application
Wireshark captures packets at Layer 3, the Network layer, which is the layer a router handles. The Network Interface Card (NIC) takes the data your application hands down and turns it into packets to send across the wire.
Wireshark won't detect your request until the network stack has actually built it into a packet for a router to handle.
You see, once it is converted into a packet, it carries an IPv4 header containing the following information:

- Version / Header Length
- Type of Service
- Total Length
- Identification
- Flags / Fragment Offset
- Time To Live (TTL)
- Protocol
- Header Checksum
- Source Address
- Destination Address

Those are the key 160 bits (20 bytes) that are written when such a packet is created.
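To make that layout concrete, here is a minimal Java sketch that decodes those 20 bytes from a raw buffer. The field offsets come from the IPv4 specification; where the byte array comes from (e.g. a capture tool) is assumed:

```java
import java.net.InetAddress;
import java.nio.ByteBuffer;

/** Decodes the fixed 20-byte (160-bit) IPv4 header from a raw packet buffer. */
public class Ipv4Header {
    public static void dump(byte[] raw) throws Exception {
        ByteBuffer buf = ByteBuffer.wrap(raw);
        int versionAndIhl = buf.get() & 0xFF;
        int version = versionAndIhl >> 4;             // 4 for IPv4
        int headerLen = (versionAndIhl & 0x0F) * 4;   // 20 bytes when no options
        int tos = buf.get() & 0xFF;                   // Type of Service
        int totalLength = buf.getShort() & 0xFFFF;    // header + payload, in bytes
        int identification = buf.getShort() & 0xFFFF; // fragment reassembly id
        int flagsAndOffset = buf.getShort() & 0xFFFF; // flags + fragment offset
        int ttl = buf.get() & 0xFF;                   // hops remaining
        int protocol = buf.get() & 0xFF;              // 6 = TCP, 17 = UDP
        int checksum = buf.getShort() & 0xFFFF;       // header checksum
        byte[] src = new byte[4], dst = new byte[4];
        buf.get(src);
        buf.get(dst);
        System.out.printf("IPv%d len=%d total=%d ttl=%d proto=%d %s -> %s%n",
                version, headerLen, totalLength, ttl, protocol,
                InetAddress.getByAddress(src).getHostAddress(),
                InetAddress.getByAddress(dst).getHostAddress());
    }
}
```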
What does this mean?
Well, you know that it takes twenty seconds for Wireshark to detect your packet. So right off the bat we know it took your application twenty seconds to actually build this packet.
The server, likewise, will have to reconstruct this packet at its end before it can handle the data and send back a response.
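One way to confirm that the delay is inside your application rather than on the network is to timestamp each phase of the request yourself. A minimal sketch, assuming a plain HttpURLConnection and a hypothetical endpoint URL:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PostTiming {
    public static void main(String[] args) throws Exception {
        byte[] body = "payload".getBytes("UTF-8");

        long t0 = System.nanoTime();
        // Hypothetical endpoint; substitute your real server URL.
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/service").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setFixedLengthStreamingMode(body.length); // send as written, don't buffer

        try (OutputStream out = conn.getOutputStream()) { // TCP connect happens here
            long t1 = System.nanoTime();
            out.write(body);
            out.flush();
            long t2 = System.nanoTime();
            System.out.printf("connect %.1f ms, write %.1f ms%n",
                    (t1 - t0) / 1e6, (t2 - t1) / 1e6);
        }

        long t3 = System.nanoTime();
        int status = conn.getResponseCode(); // blocks until the response arrives
        long t4 = System.nanoTime();
        System.out.printf("wait-for-response %.1f ms (HTTP %d)%n", (t4 - t3) / 1e6, status);
    }
}
```

If most of the sixty seconds elapses before the write completes, the time is being spent in your own process (building the request, name resolution, connection setup) before the packet ever reaches the NIC, which is exactly what the twenty-second gap in Wireshark suggests.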
We also know that the Router is acting like a traffic cop, sending your data across the internet or local network.
That adds up to quite a bit of inference, though. So where do you go from here?
You have a utility called tracert.
On average it takes a route request one to two milliseconds to pass through five to six feet of cable. So if the first hop comes back in one or two milliseconds but the later hops come back in twenty to thirty milliseconds, you can use a simple formula:

hops × per-hop time, e.g. 6 × 20 = 120 ms

Based on the hop times our tracert reports, we can estimate the duration of the trip. This is a very rough approach, and dedicated tools exist for exact measurements, but the more hops there are, the longer it takes to reach the destination and the larger that product grows.
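If you'd rather drive this from code than from a console, here is a minimal sketch that shells out to tracert and echoes each hop line. The command is named tracert on Windows and traceroute on Unix-like systems, and example.com is a placeholder host:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class TraceRoute {
    public static void main(String[] args) throws Exception {
        // "tracert" on Windows; on Unix-like systems the command is "traceroute".
        ProcessBuilder pb = new ProcessBuilder("tracert", "example.com");
        pb.redirectErrorStream(true);
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line); // one hop per line, with its round-trip times
            }
        }
        p.waitFor();
    }
}
```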
What about everything in between the client and the server?
Local Area Network (LAN): The internal efficiency of a network comes from the optimization of each network protocol, piece of equipment, and physical medium. A network administrator has to balance reliability against speed, as well as all the traffic the network generates, so the equipment's throughput and the physical medium are important. You wouldn't want ten cars merging into a one-lane tunnel; that would create a bottleneck, and the same applies to a network.
Wide Area Network (WAN): This is essentially the connection to the Internet, the cloud. Think of it like this: your computer is on a LAN, and the router connects to a WAN. Your ISP then more than likely has a LAN of its own, whose WAN opens up to a larger distribution facility. It keeps working its way up until it reaches the Internet.
What can I do though?
You know what sits in between now, but what can you do?
Well, when you're building your service you obviously want to ensure your code is lean and highly efficient, since efficiency is crucial to speed. Adjusting buffer sizes, timeouts, transmission behavior, and so on can greatly improve your application, as sketched below.
Obviously, good coding practice will help.
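As an illustration of those knobs on a plain java.net.Socket (the values here are placeholders, not recommendations; sensible numbers depend on your network):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class TunedSocket {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();

        // Illustrative values only; tune for your environment.
        socket.setSendBufferSize(64 * 1024);    // hint for the OS send buffer
        socket.setReceiveBufferSize(64 * 1024); // hint for the OS receive buffer
        socket.setTcpNoDelay(true);             // disable Nagle batching of small writes
        socket.setSoTimeout(10_000);            // fail reads after 10 s instead of hanging

        // Hypothetical endpoint; bound the connect too, so a stall surfaces quickly.
        socket.connect(new InetSocketAddress("example.com", 80), 5_000);
        System.out.println("connected: " + socket.isConnected());
        socket.close();
    }
}
```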
My code is rock solid though?
If at this point you believe the problem is not your code, nor the way you host and build your service, then the cause likely lies in the environment: load on the server, congestion or packet loss along the route, inefficient routing, or aging or misconfigured hardware.
Those are a much larger set of variables. All you can try is to refactor the service and ensure your server hosts it as efficiently as possible; beyond that, you'll want to get an Information Technology team involved, as it becomes critical.
But keep in mind that your experience interfacing with this service may be better or worse than another client's.
I'm speaking under the assumption that you're deployed in one location and could be several states away from your server.
Tools:
Command line: ping, tracert / traceroute, pathping, netstat
Network and protocol analyzers: Wireshark, Fiddler
There are other utilities available to test and troubleshoot network speeds, even to other locations; just Google "network tools". Fluke makes a few.
Hopefully that explains why it may take twenty seconds for Wireshark to even display the packet on the Network.
Hope that helps.