What are the advantages and disadvantages of using libcurl and libsoup?
Which one is better for an HTTP operation where I have to send a request to a server and get a response back, with a quick reaction time?
Libsoup is coming along, but libcurl has much better support and stability. The libsoup devs readily admit that you should probably be using libcurl.
The dependency point is especially important: even on Linux, KDE and XFCE users will end up installing GNOME-related libraries, but it isn't nice to force them to when a platform-independent option is available.
"I found that libsoup is far slower than libcurl. It uses at least 4x the amount of CPU to transfer a high-bitrate datastream over HTTP. I attribute this to the over-reliance on heavy-weight glib/gobject constructs. Man, that stuff is slow and a pain to use!" - Matt Gruenke
I was looking at libsoup to implement the server side of an API on a hobby project (I was making my own router).
By the time I got through satisfying the GNOME dependencies, the simplicity of the callback-based server-side code didn't seem as attractive as it once did. The interface is nice enough, though; see soup_server_add_handler().
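To show what that callback style looks like, here is a minimal sketch of a libsoup 2.4 server, assuming libsoup 2.48+ for soup_server_listen_all(); the path, port, and response body are made up for the example:

```c
/* Minimal libsoup 2.4 server sketch (illustrative values only).
 * Build: gcc server.c $(pkg-config --cflags --libs libsoup-2.4) */
#include <libsoup/soup.h>

static void
hello_handler(SoupServer *server, SoupMessage *msg, const char *path,
              GHashTable *query, SoupClientContext *client, gpointer user_data)
{
    /* Reply with a fixed body; a real handler would inspect the method,
     * path, and query before building the response. */
    soup_message_set_status(msg, SOUP_STATUS_OK);
    soup_message_set_response(msg, "text/plain", SOUP_MEMORY_STATIC,
                              "hello\n", 6);
}

int
main(void)
{
    GError *error = NULL;
    SoupServer *server = soup_server_new(SOUP_SERVER_SERVER_HEADER, "demo", NULL);

    /* Route every request under /hello to the callback above. */
    soup_server_add_handler(server, "/hello", hello_handler, NULL, NULL);

    if (!soup_server_listen_all(server, 8080, 0, &error)) {
        g_printerr("listen failed: %s\n", error->message);
        return 1;
    }

    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}
```

The handler itself is pleasantly compact; the cost is everything you have to pull in (glib, gobject, gio, and friends) before that handler can run.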
If you write GNOME applications (and can therefore already count on the GNOME dependencies being there), it's okay, though it felt sluggish to me.
If you are just writing client code, or anything that has to work in the absence of GNOME, stick to curl.
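For the request/response case in the question, the libcurl "easy" interface is about as small as it gets. This is just a sketch; the URL and timeout are placeholders:

```c
/* Minimal libcurl "easy" client sketch (URL and timeout are placeholders).
 * Build: gcc client.c $(pkg-config --cflags --libs libcurl) */
#include <stdio.h>
#include <curl/curl.h>

/* libcurl hands the response body to this callback in chunks;
 * here we just dump it to stdout. */
static size_t
write_cb(char *data, size_t size, size_t nmemb, void *userp)
{
    fwrite(data, size, nmemb, stdout);
    return size * nmemb;   /* tell libcurl the whole chunk was consumed */
}

int
main(void)
{
    CURL *curl;
    CURLcode res;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/api");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 2000L);  /* bound the reaction time */

    res = curl_easy_perform(curl);   /* blocks until the response arrives */
    if (res != CURLE_OK)
        fprintf(stderr, "curl_easy_perform: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```

The only dependency is libcurl itself, which is why it travels so much better outside a GNOME environment.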