object is useful when we don't have more information about the data type. dynamic is useful when we need to code using reflection or dynamic languages, when working with COM objects, or when getting results out of LINQ queries whose shape isn't known at compile time.
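For instance, here is a minimal illustration of the difference (just an example, not from the original question): with object the compiler binds everything at compile time and requires casts, while with dynamic the binding is deferred to run time.

using System;

class ObjectVsDynamic
{
    static void Main()
    {
        object o = "hello";
        // With object, members of the runtime type need a cast; without it this would not compile.
        int a = ((string)o).Length;

        dynamic d = "hello";
        // With dynamic, "Length" is resolved against the runtime type (string) when this line executes.
        int b = d.Length;

        Console.WriteLine(a + b);
    }
}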
I've read that dynamic makes the compiler run again, but what does it actually do? Does it have to recompile the whole method where dynamic is used as a parameter, or only the lines with dynamic behavior/context?
Here's the deal.
For every expression in your program that is of dynamic type, the compiler emits code that generates a single "dynamic call site object" that represents the operation. So, for example, if you have:
class C
{
    void M()
    {
        dynamic d1 = whatever;
        dynamic d2 = d1.Foo();
        ...
then the compiler will generate code that is morally like this. (The actual code is quite a bit more complex; this is simplified for presentation purposes.)
class C
{
    static DynamicCallSite FooCallSite;
    void M()
    {
        object d1 = whatever;
        object d2;
        if (FooCallSite == null) FooCallSite = new DynamicCallSite();
        d2 = FooCallSite.DoInvocation("Foo", d1);
        ...
See how this works so far? We generate the call site once, no matter how many times you call M. The call site lives forever after you generate it once. The call site is an object that represents "there's going to be a dynamic call to Foo here".
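For the curious, here is a rough sketch of what the real emitted machinery looks like, using the actual DLR types (CallSite<T> and the C# runtime binder). The binder flags and argument info below are simplified assumptions for illustration, not the exact code the compiler emits:

using System;
using System.Runtime.CompilerServices;
using Microsoft.CSharp.RuntimeBinder;

class C
{
    // One static call site per dynamic operation, created lazily and reused forever.
    static CallSite<Func<CallSite, object, object>> fooCallSite;

    static object InvokeFooDynamically(object d1)
    {
        if (fooCallSite == null)
        {
            fooCallSite = CallSite<Func<CallSite, object, object>>.Create(
                Binder.InvokeMember(
                    CSharpBinderFlags.None,
                    "Foo",
                    null,               // no generic type arguments
                    typeof(C),          // context type, used for accessibility checks
                    new[] { CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null) }));
        }
        // Target is the delegate that holds the binder's cache of compiled rules.
        return fooCallSite.Target(fooCallSite, d1);
    }
}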
OK, so now that you've got the call site, how does the invocation work?
The call site is part of the Dynamic Language Runtime. The DLR says "hmm, someone is attempting to do a dynamic invocation of a method Foo on this here object. Do I know anything about that? No. Then I'd better find out."
The DLR then interrogates the object in d1 to see if it is anything special. Maybe it is a legacy COM object, or an Iron Python object, or an Iron Ruby object, or an IE DOM object. If it is not any of those then it must be an ordinary C# object.
This is the point where the compiler starts up again. There's no need for a lexer or parser, so the DLR starts up a special version of the C# compiler that just has the metadata analyzer, the semantic analyzer for expressions, and an emitter that emits Expression Trees instead of IL.
The metadata analyzer uses Reflection to determine the type of the object in d1, and then passes that to the semantic analyzer to ask what happens when such an object is invoked on method Foo. The overload resolution analyzer figures that out, and then builds an Expression Tree -- just as if you'd called Foo in an expression tree lambda -- that represents that call.
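(For clarity, an "expression tree lambda" is a lambda that the compiler turns into data describing the call rather than into IL. A minimal example, using a hypothetical Bar type purely for illustration:)

using System;
using System.Linq.Expressions;

class Bar { public int Foo() { return 42; } }   // hypothetical type, for illustration only

class ExpressionTreeLambda
{
    // The compiler compiles this lambda into an Expression tree (data describing
    // a call to Foo), not into executable code; calling .Compile() on it later
    // produces IL, which is exactly the step the DLR performs.
    static readonly Expression<Func<Bar, int>> Tree = b => b.Foo();
}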
The C# compiler then passes that expression tree back to the DLR along with a cache policy. The policy is usually "the second time you see an object of this type, you can re-use this expression tree rather than calling me back again". The DLR then calls Compile on the expression tree, which invokes the expression-tree-to-IL compiler and spits out a block of dynamically-generated IL in a delegate.
The DLR then caches this delegate in a cache associated with the call site object.
Then it invokes the delegate, and the Foo call happens.
The second time you call M, we already have a call site. The DLR interrogates the object again, and if the object is the same type as it was last time, it fetches the delegate out of the cache and invokes it. If the object is of a different type then the cache misses, and the whole process starts over again; we do semantic analysis of the call and store the result in the cache.
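To make that caching idea concrete, here is a toy sketch of a per-type cache of compiled invokers. This is not how the DLR actually stores its cache (the real call site rewrites its Target delegate), and it assumes the receiver's runtime type has a public, parameterless, value-returning Foo():

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

static class FooSiteSketch
{
    // Compiled invokers, keyed by the receiver's runtime type.
    static readonly Dictionary<Type, Func<object, object>> cache =
        new Dictionary<Type, Func<object, object>>();

    public static object InvokeFoo(object receiver)
    {
        var type = receiver.GetType();
        if (!cache.TryGetValue(type, out var invoker))
        {
            // Cache miss: do the "semantic analysis" once for this type, build an
            // expression tree for ((T)receiver).Foo(), compile it, and remember it.
            var p = Expression.Parameter(typeof(object), "receiver");
            var call = Expression.Call(Expression.Convert(p, type), type.GetMethod("Foo"));
            var body = Expression.Convert(call, typeof(object));
            invoker = Expression.Lambda<Func<object, object>>(body, p).Compile();
            cache[type] = invoker;
        }
        // Cache hit: later calls with the same runtime type skip the analysis entirely.
        return invoker(receiver);
    }
}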
This happens for every expression that involves dynamic. So for example if you have:
int x = d1.Foo() + d2;
then there are three dynamic call sites. One for the dynamic call to Foo, one for the dynamic addition, and one for the dynamic conversion from dynamic to int. Each one has its own runtime analysis and its own cache of analysis results.
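In the same simplified pseudo-code style as the DynamicCallSite example above (DoAddition and DoConversion are made-up names, following the DoInvocation placeholder), that one line conceptually becomes:

object t1 = FooCallSite.DoInvocation("Foo", d1);   // call site 1: dynamic call to Foo
object t2 = AddCallSite.DoAddition(t1, d2);        // call site 2: dynamic addition
int x = ConvertCallSite.DoConversion<int>(t2);     // call site 3: dynamic conversion to int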
Make sense?
Update: Added precompiled and lazy-compiled benchmarks
Update 2: Turns out, I'm wrong. See Eric Lippert's post for a complete and correct answer. I'm leaving this here for the sake of the benchmark numbers.
Update 3: Added IL-Emitted and Lazy IL-Emitted benchmarks, based on Mark Gravell's answer to this question.
To my knowledge, use of the dynamic keyword does not cause any extra compilation at runtime in and of itself (though I imagine it could do so under specific circumstances, depending on what type of objects are backing your dynamic variables).
Regarding performance, dynamic does inherently introduce some overhead, but not nearly as much as you might think. For example, I just ran a benchmark that looks like this:
void Main()
{
    Foo foo = new Foo();
    var args = new object[0];
    var method = typeof(Foo).GetMethod("DoSomething");
    dynamic dfoo = foo;
    var precompiled =
        Expression.Lambda<Action>(
            Expression.Call(Expression.Constant(foo), method))
        .Compile();
    var lazyCompiled = new Lazy<Action>(() =>
        Expression.Lambda<Action>(
            Expression.Call(Expression.Constant(foo), method))
        .Compile(), false);
    var wrapped = Wrap(method);
    var lazyWrapped = new Lazy<Func<object, object[], object>>(() => Wrap(method), false);
    var actions = new[]
    {
        new TimedAction("Direct", () =>
        {
            foo.DoSomething();
        }),
        new TimedAction("Dynamic", () =>
        {
            dfoo.DoSomething();
        }),
        new TimedAction("Reflection", () =>
        {
            method.Invoke(foo, args);
        }),
        new TimedAction("Precompiled", () =>
        {
            precompiled();
        }),
        new TimedAction("LazyCompiled", () =>
        {
            lazyCompiled.Value();
        }),
        new TimedAction("ILEmitted", () =>
        {
            wrapped(foo, null);
        }),
        new TimedAction("LazyILEmitted", () =>
        {
            lazyWrapped.Value(foo, null);
        }),
    };
    TimeActions(1000000, actions);
}
class Foo
{
    public void DoSomething() {}
}

static Func<object, object[], object> Wrap(MethodInfo method)
{
    var dm = new DynamicMethod(method.Name, typeof(object), new Type[] {
        typeof(object), typeof(object[])
    }, method.DeclaringType, true);
    var il = dm.GetILGenerator();
    if (!method.IsStatic)
    {
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Unbox_Any, method.DeclaringType);
    }
    var parameters = method.GetParameters();
    for (int i = 0; i < parameters.Length; i++)
    {
        il.Emit(OpCodes.Ldarg_1);
        il.Emit(OpCodes.Ldc_I4, i);
        il.Emit(OpCodes.Ldelem_Ref);
        il.Emit(OpCodes.Unbox_Any, parameters[i].ParameterType);
    }
    il.EmitCall(method.IsStatic || method.DeclaringType.IsValueType ?
        OpCodes.Call : OpCodes.Callvirt, method, null);
    if (method.ReturnType == null || method.ReturnType == typeof(void))
    {
        il.Emit(OpCodes.Ldnull);
    }
    else if (method.ReturnType.IsValueType)
    {
        il.Emit(OpCodes.Box, method.ReturnType);
    }
    il.Emit(OpCodes.Ret);
    return (Func<object, object[], object>)dm.CreateDelegate(typeof(Func<object, object[], object>));
}
As you can see from the code, I try to invoke a simple no-op method seven different ways:
- Directly
- Using dynamic
- Using Reflection
- Using an Action that got precompiled at runtime (thus excluding compilation time from the results)
- Using an Action that gets compiled the first time it is needed, via a non-thread-safe Lazy variable (thus including compilation time)
- Using an IL-emitted delegate created up front
- Using an IL-emitted delegate created lazily the first time it is needed

Each gets called 1 million times in a simple loop. Here are the timing results:
Direct: 3.4248ms
Dynamic: 45.0728ms
Reflection: 888.4011ms
Precompiled: 21.9166ms
LazyCompiled: 30.2045ms
ILEmitted: 8.4918ms
LazyILEmitted: 14.3483ms
So while using the dynamic keyword takes an order of magnitude longer than calling the method directly, it still manages to complete the operation a million times in about 50 milliseconds, making it far faster than reflection. If the method we call were trying to do something intensive, like combining a few strings together or searching a collection for a value, those operations would likely far outweigh the difference between a direct call and a dynamic call.
Performance is just one of many good reasons not to use dynamic unnecessarily, but when you're dealing with truly dynamic data, it can provide advantages that far outweigh the disadvantages.
Based on Johnbot's comment, I broke the Reflection area down into four separate tests (the methodDelegate used in the last one is a cached delegate; see the sketch after the results):
new TimedAction("Reflection, find method", () =>
{
typeof(Foo).GetMethod("DoSomething").Invoke(foo, args);
}),
new TimedAction("Reflection, predetermined method", () =>
{
method.Invoke(foo, args);
}),
new TimedAction("Reflection, create a delegate", () =>
{
((Action)method.CreateDelegate(typeof(Action), foo)).Invoke();
}),
new TimedAction("Reflection, cached delegate", () =>
{
methodDelegate.Invoke();
}),
... and here are the benchmark results:
So if you can predetermine a specific method that you'll need to call a lot, invoking a cached delegate referring to that method is about as fast as calling the method itself. However, if you need to determine which method to call just as you're about to invoke it, creating a delegate for it is very expensive.
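For completeness, the methodDelegate used in the "cached delegate" test is simply a delegate created once from the MethodInfo, outside the timed loop, along these lines (a sketch consistent with the setup in Main above):

// Created once, next to the other setup variables in Main:
Action methodDelegate = (Action)method.CreateDelegate(typeof(Action), foo);

// Each timed iteration then only pays for the delegate invocation:
methodDelegate.Invoke();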