I wonder if there is a reason why the std::sto series (e.g. std::stoi, std::stol) is not a function template, like this:
template<typename T>
T sto(std::string const & str, std::size_t *pos = 0, int base = 10);
and then:
template<>
int sto<int>(std::string const & str, std::size_t *pos, int base)
{
    // do the stuff.
}
template<>
long sto<long>(std::string const & str, std::size_t *pos, int base)
{
    // do the stuff.
}
/* etc. */
To my mind, that would be a better design, because at the moment, when I have to convert a string into whatever numeric type the user wants, I have to handle each case manually.
Is there a reason not to have such a function template? Is it a deliberate design choice, or did it just end up this way?
Looking at the description of these functions on cppreference, I note the following:
... Interprets a signed integer value in the string str.
1) calls
std::strtol(str.c_str(), &ptr, base)
...
and std::strtol is a C standard function that is also available in C++.
Reading further, we see (for the C++ sto* functions):
Return value
The string converted to the specified signed integer type.
Exceptions
std::invalid_argument if no conversion could be performed
std::out_of_range if the converted value would fall out of the range of the result type or if the underlying function (std::strtol or std::strtoll) sets errno to ERANGE.
So while I have no original source for this, and indeed have never worked with these functions, I would guess that:
TL;DR: These functions are C++-ish wrappers around already existing C functions -- std::strto* -- so they resemble those functions as closely as possible.
I have to manually manage each case. Is there a reason not to have such a function template?
In the case of such questions, Eric Lippert (C#) usually says something along these lines:
"If a feature is missing, then it's missing because no one has implemented it yet. And that's because either no one wanted it earlier, or it was considered not worth the effort, or it couldn't have been finished before the current release."
Here, I guess it's the "not worth the effort" part, but I have neither asked the committee about it nor managed to find any answer in old questions and FAQs. I didn't spend much time searching, though.
I say this because I suppose that most (if not all) of these functions' functionality is already covered by the stream classes, like std::istringstream. Just like std::cin etc., it has an operator>> overloaded for all fundamental numeric types (and more).
Furthermore, stream manipulators like std::hex (std::setbase) already solve the problem of passing various type-dependent configuration parameters to the actual conversion functions. There are no problems with mixed function signatures (like those mentioned by DavidHaim in his answer); it is just a single operator>>.
So, since we already have this in streams, since we can already read numbers etc. from strings with a simple foo >> bar >> setbase(42) >> baz >> ..., I think it was not worth the effort to add more elaborate layers on top of the old C runtime functions.
No proof for that though. Just a hunch.
The problem with template specialization is that a specialization must match the signature of the primary template, so each specialization must implement the (string, pos, base) interface.
If you would like to support some other type which does not follow this interface, you are in trouble.
Suppose that, in the future, we would like to have sto<std::pair<int,int>>. We would want a pos and a base for both the first and the second stringified integer, so we would like the signature to be of the form (string, pos1, base1, pos2, base2). Since the signature of sto is already fixed, we cannot do that.
You can always wrap std::sto* in your own implementation of sto for the integral types, but you cannot do it the other way around.