I have profiled my unit test, and the majority of the application's running time is spent in the section of code below. It is a function that converts a float to a string. How can I rewrite it for better performance?
Or am I misreading the report, and is the bottleneck somewhere else?
The profiler report states:
Total CPU % = 13.02%, Self CPU % = 0.07, Total CPU (ms) = 769, Self CPU out of 100 percent = 769 ms.
769 out of 5907 samples.
#include <iomanip>
#include <sstream>
#include <string>

std::string FloatToScientificString(float val, int width, int precision)
{
    std::stringstream buffer;
    buffer << std::scientific << std::setw(width) << std::setprecision(precision)
           << std::setfill(' ') << val;
    return buffer.str();
}
If using an external library is an option, you can go with fmtlib (the core of this library was standardized as std::format in C++20), which claims to be faster than other approaches (see their benchmarks).
#include <fmt/format.h>
std::string FloatToScientificString(float val, int width, int precision)
{
    return fmt::format("{:>{}.{}e}", val, width, precision);
}
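If the remaining cost is the std::string allocation itself, fmt can also format into a reusable in-place buffer; a minimal sketch using the same format string as above (fmt::memory_buffer keeps small results in its inline storage):
#include <fmt/format.h>
#include <iterator>
#include <string>
std::string FloatToScientificString(float val, int width, int precision)
{
    // Format into fmt::memory_buffer (inline storage for small outputs),
    // then copy out once; this avoids the stream machinery entirely.
    fmt::memory_buffer buf;
    fmt::format_to(std::back_inserter(buf), "{:>{}.{}e}", val, width, precision);
    return fmt::to_string(buf);
}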
Either version should return a string identical to your original function's output, and you don't sacrifice type safety as with the std::*printf approaches. When using Abseil instead (they claim to be notably faster than the printf family here), the function looks like this:
#include <absl/strings/str_format.h>
std::string FloatToScientificString(float val, int width, int precision)
{
    return absl::StrFormat("%*.*e", width, precision, val);
}
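As a quick spot check, both library versions should agree with the original std::stringstream implementation; the expected strings below are illustrative values from a typical platform, so adjust them if your exponent formatting differs:
#include <cassert>
int main()
{
    // Representative inputs; "%e"-style output normally uses a two-digit exponent.
    assert(FloatToScientificString(1234.5678f, 15, 4) == "     1.2346e+03");
    assert(FloatToScientificString(-0.000123f, 12, 2) == "   -1.23e-04");
}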
There is also Boost.Format, which does not allow the width and precision specifiers to be passed as separate arguments, so the format string has to be built at runtime, but this works equally well:
#include <boost/format.hpp>
#include <string>

std::string FloatToScientificString(float val, int width, int precision)
{
    const std::string fmt = "%" + std::to_string(width) + "." +
                            std::to_string(precision) + "e";
    return boost::str(boost::format(fmt) % val);
}
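If width and precision are constant across your test run, you could also avoid re-parsing the format string on every call by building the boost::format object once; a sketch, assuming single-threaded use and unchanging width/precision (the parsed object is reused after each extraction):
#include <boost/format.hpp>
#include <string>
std::string FloatToScientificString(float val, int width, int precision)
{
    // Assumes width and precision never change between calls and that this
    // function is not called concurrently: the parsed format object is cached.
    static boost::format fmt("%" + std::to_string(width) + "." +
                             std::to_string(precision) + "e");
    return boost::str(fmt % val);
}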
And finally, without any external dependencies other than the standard library (note that std::snprintf is preferable to std::sprintf because the buffer size is checked, but neither function is type safe):
#include <cstdio>
#include <string>

std::string FloatToScientificString(float val, int width, int precision)
{
    // Use a local (not static) buffer so the function stays thread safe;
    // 100 bytes is plenty unless a very large width is requested, and
    // std::snprintf guarantees null termination and never overruns bufSize.
    const int bufSize = 100;
    char buffer[bufSize];
    std::snprintf(buffer, bufSize, "%*.*e", width, precision, val);
    return std::string(buffer);
}
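If C++17 <charconv> with floating-point support is available (it landed late in some standard libraries), std::to_chars is another dependency-free route that avoids both locales and format-string parsing; a minimal sketch that pads manually, since to_chars has no width specifier:
#include <charconv>
#include <cstddef>
#include <string>
std::string FloatToScientificString(float val, int width, int precision)
{
    char buf[64];  // ample for a float in scientific notation
    const auto res = std::to_chars(buf, buf + sizeof(buf), val,
                                   std::chars_format::scientific, precision);
    std::string out(buf, res.ptr);
    // Right-align to the requested field width, as std::setw did.
    if (width > 0 && out.size() < static_cast<std::size_t>(width))
        out.insert(0, static_cast<std::size_t>(width) - out.size(), ' ');
    return out;
}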
A correct performance analysis of these options is probably a topic on its own. Any of these options should be notably faster than the original approach using std::stringstream, though, and all snippets except the std::snprintf one are type safe.
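For a rough comparison between the variants, a minimal timing loop along these lines could be used (a sketch only; a real analysis would also control for inlining, allocator behaviour and input distribution):
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <string>
int main()
{
    // Times N calls of whichever FloatToScientificString variant is linked in.
    const int N = 1000000;
    std::size_t checksum = 0;  // keeps the optimizer from discarding the calls
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        checksum += FloatToScientificString(i * 0.001f, 15, 4).size();
    const auto stop = std::chrono::steady_clock::now();
    const long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    std::printf("%d calls took %lld ms (checksum %zu)\n", N, ms, checksum);
}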
Instead of converting the incoming data from float to a character representation, you could try generating your comparison data in the incoming binary format, perhaps only once, with a tool that creates a compilable binary table.
This lets you compare binary/float against binary/float data without any need for further conversions at runtime.
You can also run your tests once, record the incoming data to some storage, and compare against that storage later. That way you compare the string representation only once, and afterwards you compare against the stored binary data. This works as long as your test cases stay untouched.
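For illustration (the names and values here are hypothetical), such a binary comparison could look like this, with the reference table generated once by a tool and compiled in:
#include <cstdint>
#include <cstring>
// Hypothetical reference data, generated once by an external tool.
static const float kRecordedReference[] = { 1.2346e+03f, -1.23e-04f };
// Exact, bitwise comparison of a measured float against a recorded one;
// no string conversion happens at test time.
bool BitwiseEqual(float measured, float recorded)
{
    std::uint32_t a = 0, b = 0;
    std::memcpy(&a, &measured, sizeof a);
    std::memcpy(&b, &recorded, sizeof b);
    return a == b;
}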