My goal is to create a workaround so that I can use C++11 lambdas inside Boost Spirit Qi semantic actions while still having access to a wider set of qi placeholders, such as qi::_pass or qi::_r1, without having to manually extract them from the context object. I wish to avoid writing Phoenix lambdas for some non-trivial parsing logic, preferring the more direct C++ syntax and semantics available inside C++11 lambdas.
The code below represents my idea for a workaround: use phoenix::bind to bind the lambda and pass it the particular placeholders I need. However, I'm getting an extremely long templated compiler error (gcc 4.7.0, Boost 1.54) that I don't have the expertise to interpret. I chose what I believe to be the most relevant portion and posted it below the code.
I'd like to know if what I'm trying to do in this code is possible with Boost Spirit, and if anyone can interpret the error message for me and tell me what's going wrong.
#include <string>
#include <iostream>
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/phoenix.hpp>
namespace qi = boost::spirit::qi;
namespace spirit = boost::spirit;
namespace phoenix = boost::phoenix;
int main() {
    std::string input{"test1 test2 test3 FOO!"};
    typedef decltype(input.begin()) StringIter;
    qi::rule<StringIter, std::string()> parser =
        *(
            qi::char_
            [
                phoenix::bind(
                    [] (char value) {
                        std::cerr << value << std::endl;
                    },
                    qi::_1
                )
            ]
        );
    qi::parse(input.begin(), input.end(), parser);
}
(Note: I'm aware that the particular task performed by this code would be simpler with direct Phoenix constructs, or could even be handled by the newer Boost Spirit support for passing a one-argument C++11 lambda directly, since it only uses the parsed value (qi::_1). Nevertheless, it's a good minimal example of the kind of thing I'd like to do, and if I can get it to work, it should generalize easily.)
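(For reference, the direct Phoenix form of this particular action would presumably look something like the following sketch; phoenix::ref passes the stream lazily, and phoenixVersion is just an illustrative name:)
qi::rule<StringIter, std::string()> phoenixVersion =
    *(
        qi::char_
        [
            // Lazy Phoenix expression: stream each parsed char to std::cerr
            phoenix::ref(std::cerr) << qi::_1 << '\n'
        ]
    );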
And, a bit of the compiler error (through STLfilt):
test\testSpiritLearning.cpp:28:9: required from here
D:\programming\lib\boost\boost_1_54_0/boost/spirit/home/support/action_dispatch.hpp:178:13:
error: no match for call to '(
const boost::phoenix::actor<
boost::phoenix::composite<
boost::phoenix::detail::function_eval<1>
, boost::fusion::vector<
boost::phoenix::value<main()::<lambda(char &)> >
, boost::spirit::argument<0>, boost::fusion::void_
, boost::fusion::void_, boost::fusion::void_
, boost::fusion::void_, boost::fusion::void_
, boost::fusion::void_, boost::fusion::void_
, boost::fusion::void_
>
>
>
) (
boost::spirit::traits::pass_attribute<
boost::spirit::qi::char_class<
boost::spirit::tag::char_code<
boost::spirit::tag::char_
, boost::spirit::char_encoding::standard
>
>, char, void
>::type &
, boost::spirit::context<
boost::fusion::cons<basic_string<char> &, boost::fusion::nil>
, boost::fusion::vector0<>
> &, bool &
)'
Just tell Boost that you want bleeding edge compiler support:[1]
#define BOOST_RESULT_OF_USE_DECLTYPE
and you wish to use the V3 version of Phoenix:
#define BOOST_SPIRIT_USE_PHOENIX_V3
And it works (see it live on Coliru).
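For reference, here is a sketch of the question's program with both defines added (they have to appear before any Spirit/Phoenix headers are included):
#define BOOST_RESULT_OF_USE_DECLTYPE
#define BOOST_SPIRIT_USE_PHOENIX_V3
#include <string>
#include <iostream>
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/phoenix.hpp>
namespace qi = boost::spirit::qi;
namespace phoenix = boost::phoenix;
int main() {
    std::string input{"test1 test2 test3 FOO!"};
    typedef decltype(input.begin()) StringIter;
    qi::rule<StringIter, std::string()> parser =
        *(
            qi::char_
            [
                phoenix::bind(
                    // Plain C++11 lambda; accepted now that Phoenix can use decltype
                    [] (char value) { std::cerr << value << std::endl; },
                    qi::_1
                )
            ]
        );
    qi::parse(input.begin(), input.end(), parser);
}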
Reason:
Using function objects in Phoenix actors assumes your function object has a special nested struct template named result, or indeed a simple typedef named result_type. This is known as the RESULT_OF protocol; see here:
http://www.boost.org/doc/libs/1_55_0/libs/utility/utility.htm#result_of
This protocol is required for C++03 compatibility. However, lambdas don't have it. In fact, lambdas have unspecified types. This is precisely one of the reasons why compilers with support for lambdas will always have decltype as well, so the RESULT_OF protocol is no longer required.
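To illustrate what the protocol looks like, a hand-written function object following it might be sketched as below (print_char is just an illustrative name):
// Follows the RESULT_OF protocol via the simple result_type typedef
struct print_char {
    typedef void result_type;
    void operator()(char value) const {
        std::cerr << value << std::endl;
    }
};
// Usable in the semantic action as: phoenix::bind(print_char(), qi::_1)
The question's lambda has no such nested typedef, which is what the first define works around.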
On the second #define, you need to select Phoenix V3, because Phoenix V2 simply doesn't implement support for lambdas. By default, Spirit V2 selects Phoenix V2 for historical/compatibility reasons. In practice, Phoenix V3 is just much more mature and fixes many (many many) problems, so I recommend always running with BOOST_SPIRIT_USE_PHOENIX_V3.
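With both defines in place, the same phoenix::bind approach extends to the other placeholders the question asks about. A minimal sketch (untested against that exact gcc/Boost combination) that binds qi::_pass alongside qi::_1 so the parse stops at the first '!':
qi::rule<StringIter, std::string()> parser =
    *(
        qi::char_
        [
            phoenix::bind(
                [] (char value, bool& pass) {
                    pass = (value != '!');  // rejecting the match ends the kleene star
                    if (pass)
                        std::cerr << value << std::endl;
                },
                qi::_1, qi::_pass
            )
        ]
    );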
[1] might not be needed with very recent versions of some compilers