
How can I make LLVM optimize the following floating-point equation?

Tags:

c++

llvm

I have a small test program that uses LLVM to calculate the values of some equation. The setup is as follows: I have created a .bc file containing functions to add, multiply, divide, subtract and square double values. Now I set up linear equations with different parameters by combining the add and multiply functions. Then I transform the functions using the optimizers from the Kaleidoscope example. This works nicely - the resulting function takes x as a parameter and simply performs two floating-point operations (a multiply and an add). The code for setting up these functions is:

Function* createLinearFunction(const std::string& name, double factor, double summand, Module* module)
{
    LLVMContext& context = getGlobalContext();

    // declare (or look up) a function: double name(double)
    Function* func = cast<Function>(module->getOrInsertFunction(
        name.c_str(), Type::getDoubleTy(context), Type::getDoubleTy(context), (Type*)0));

    // single basic block computing: summand + x0 * factor
    BasicBlock* bb1 = BasicBlock::Create(context, "EntryBlock", func);
    IRBuilder<> builder(bb1);

    Argument* x0 = func->arg_begin();
    x0->setName("x0");
    Value* x1 = ConstantFP::get(context, APFloat(factor));
    Value* x2 = ConstantFP::get(context, APFloat(summand));

    // x3 = mul_d_dd(x0, factor)
    std::vector<Value*> args1;
    args1.push_back(x0);
    args1.push_back(x1);
    Value* x3 = builder.CreateCall(mul_d_dd, args1, "");

    // x4 = add_d_dd(summand, x3)
    std::vector<Value*> args2;
    args2.push_back(x2);
    args2.push_back(x3);
    Value* x4 = builder.CreateCall(add_d_dd, args2, "");

    builder.CreateRet(x4);
    return func;
}

What I want now is the following: when I generate a function with factor 1, the multiplication should be optimized away, and with summand 0 the addition should be optimized away. With factor 0, the function should just return the summand. Is there a pass that already does this? I assume LLVM does not do it by default for the reasons mentioned here: Why don't LLVM passes optimize floating point instructions?

Thank you for your help,
Tobias


Addendum: I tried adding instcombine via createInstructionCombiningPass(), but the optimized code still looks the same:

define double @Linear0xPlus0(double %x0) {
EntryBlock:
  %0 = call double @mul_d_dd(double %x0, double 0.000000e+00)
  %1 = call double @add_d_dd(double 0.000000e+00, double %0)
  ret double %1
}

I then tried adding an inlining pass using createFunctionInliningPass():

    FunctionPassManager fpm(module);
    fpm.add(new DataLayout(module->getDataLayout()));
    fpm.add(createFunctionInliningPass());
    fpm.add(createBasicAliasAnalysisPass());
    fpm.add(createInstructionCombiningPass());
    fpm.add(createReassociatePass());
    fpm.add(createGVNPass());
    fpm.add(createCFGSimplificationPass());
    fpm.add(createInstructionCombiningPass());
    fpm.doInitialization();

but this fails with: Assertion failed: !PMS.empty() && "Unable to handle Call Graph Pass"

This error is caused by the fact that inlining is not a function pass but a module-level (call graph) optimization, and must therefore be run from a module pass manager. The set-up for that now looks like this:

PassManagerBuilder pmb;
pmb.OptLevel=3;
PassManager mpm;
pmb.populateModulePassManager(mpm);
mpm.add(createFunctionInliningPass());

but even running a second function pass manager containing the instcombine pass over all functions does not do the trick:

FunctionPassManager instcombiner(module);
instcombiner.add(createInstructionCombiningPass());
instcombiner.doInitialization();
for (auto i=module->begin(); i!=module->end(); ++i)
{
    instcombiner.run(*i);
}
asked Oct 30 '13 by Tobias Langner

1 Answer

The instcombine pass should perform these sorts of optimizations, even for floating-point operations.

A simple way to try it is to run opt -instcombine on your bitcode file and inspect the output.

answered Oct 03 '22 by Oak