use std::io::ErrorKind;
use std::net::TcpStream;

fn main() {
    let address = "localhost:7000";
    loop {
        match TcpStream::connect(address) {
            Err(err) => match err.kind() {
                // Nobody is listening yet: try again immediately.
                ErrorKind::ConnectionRefused => continue,
                kind => panic!("Error occurred: {:?}", kind),
            },
            Ok(_stream) => { /* do some stuff here */ },
        }
    }
}
Consider the Rust code above. What's interesting to me here is not the Ok branch, but the ErrorKind::ConnectionRefused arm combined with the loop: it's very cheap CPU-wise, consuming less than 1% CPU. That's great, and it's exactly what I want.
But I don't understand why it is cheap: comparable code in C would likely pin a core at 100%, essentially spinning on NOPs (not precisely, but close enough). Can anyone help me understand why this is so cheap?
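For comparison, this is the kind of busy loop I have in mind, as a minimal sketch with no blocking calls at all; I'd expect something like it to pin a core at 100%:

// A sketch of the busy loop I mean: no blocking calls, just work,
// so the thread never yields the CPU.
fn main() {
    let mut counter: u64 = 0;
    loop {
        counter = counter.wrapping_add(1);
        std::hint::black_box(counter); // keep the compiler from optimising the loop away
    }
}

The connect() version behaves very differently, and that's what I'd like to understand.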
It's quite likely that connect() is the culprit: in order to receive the "Connection refused" error, it first needs to look up the address (which should be cheap for localhost), then attempt the connection and wait for the refusal to come back.
While localhost is certainly much faster than a remote network service, there is still a fair amount of overhead.
ping localhost has a latency of around 0.9 ms for me. That means your loop only runs on the order of 1,000 to 10,000 iterations per second, which is not very many compared to an actual while true {} loop.
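One way to see this for yourself (a rough sketch, assuming nothing is listening on port 7000 locally, as in your question) is to time a single connect attempt:

use std::net::TcpStream;
use std::time::Instant;

fn main() {
    let start = Instant::now();
    // With nothing listening on the port, this should return ConnectionRefused.
    let result = TcpStream::connect("localhost:7000");
    println!("connect finished in {:?}: {:?}", start.elapsed(), result.err());
}

If each attempt costs even a fraction of a millisecond of waiting (name resolution plus the refused handshake), the thread spends most of its time blocked in the kernel rather than executing instructions, which is why the CPU usage stays so low even though the loop never pauses explicitly.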