I have read a similar question: Magic sprintf function - how to wrap it?, but my requirement is a little bit different, so I am wondering whether it's doable or not.
First, I want to explain the scenario a little bit. I currently have a trace function like
let Trace traceLevel (fs : unit -> string) =
    if traceLevel <= Config.TraceLevel then
        Trace.WriteLine <| fs()
So the function "fs" is called to generate a string only if traceLevel is less than or equal to the level specified by Config.TraceLevel. When traceLevel is greater than Config.TraceLevel, the call is a no-op and "fs" is not evaluated at all.
Although not limited to this pattern, in practice almost all use cases look like
Trace 4 (fun _ -> sprintf "%s : %i" "abc" 1)
It's pretty tedious to always write the "fun _ -> sprintf" part. Ideally, it would be nice to provide a flavor where the user can just write
Trace 4 "%s : %i" "abc" 1
and have it behave the same way.
I cannot figure out how to achieve this even after reading the answers to the original SO question.
It seems that kprintf allows a continuation function to be invoked against the formatted string. The wrapper still returns a function produced by one of the printf functions (which can itself be a function taking one or more arguments), so currying is still in play. However, in the case above, what is needed is to evaluate a condition before formatting the string, and only then pass the formatted string to Trace.WriteLine. The existing Printf module does not seem to have an API that allows injecting such a pre-condition evaluation, so it does not appear to be easily doable by wrapping the existing APIs.
Any idea on how to achieve this? (I skimmed FSharp.Core/printf.fs; it seems possible by providing a new derived PrintfEnv, but these types are internal.)
Thanks for the answers from Tomas and Lincoln. I think both approaches take a performance hit, so I did some simple measurements on my machine with fsi.
Option 1: my original approach. On the "false" path, "fs()" is not evaluated at all. The usage is not as nice, since one needs to write the "fun _ -> sprintf" part.
let trace1 lvl (fs : unit -> string) =
    if lvl <= 3 then Console.WriteLine(fs())
Option 2: format the string but throw it away on the "false" path
let trace2 lvl fmt =
    Printf.kprintf (fun s -> if lvl <= 3 then Console.WriteLine(s)) fmt
Option 3: through recursion, reflection and boxing
let rec dummyFunc (funcTy : Type) retVal =
    if FSharpType.IsFunction(funcTy) then
        let retTy = funcTy.GenericTypeArguments.[1]
        FSharpValue.MakeFunction(funcTy, (fun _ -> dummyFunc retTy retVal))
    else box retVal

let trace3 lvl (fmt : Printf.StringFormat<'t, unit>) =
    if lvl <= 3 then Printf.kprintf (fun s -> Console.WriteLine(s)) fmt
    else downcast (dummyFunc typeof<'t> ())
Now I timed all three with code like
for i in 1..1000000 do
    trace1 4 (fun _ -> sprintf "%s:%i" (i.ToString()) i)

for i in 1..1000000 do
    trace2 4 "%s:%i" (i.ToString()) i

for i in 1..1000000 do
    trace3 4 "%s:%i" (i.ToString()) i
Here is what I get:
trace1:
Real: 00:00:00.009, CPU: 00:00:00.015, GC gen0: 2, gen1: 1, gen2: 0
trace2:
Real: 00:00:00.709, CPU: 00:00:00.703, GC gen0: 54, gen1: 1, gen2: 0
trace3:
Real: 00:00:50.918, CPU: 00:00:50.906, GC gen0: 431, gen1: 5, gen2: 0
So both options 2 and 3 take a significant perf hit compared to option 1 (especially option 3). The gap grows as the format string gets more complicated. For example, if I change the format and parameters to
"%s: %i %i %i %i %i" (i.ToString()) i (i * 2) (i * 3) (i * 4) (i * 5)
I get
trace1:
Real: 00:00:00.007, CPU: 00:00:00.015, GC gen0: 3, gen1: 1, gen2: 0
trace2:
Real: 00:00:01.912, CPU: 00:00:01.921, GC gen0: 136, gen1: 0, gen2: 0
trace3:
Real: 00:02:10.683, CPU: 00:02:10.671, GC gen0: 1074, gen1: 14, gen2: 1
So far, there seems to be no satisfying solution that gets both usability and perf.
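One possible refinement of option 3 (my own untested sketch, not from the answers): the reflection cost above is paid on every call, because dummyFunc rebuilds the inner functions lazily each time the outer one is applied. If the dummy chain is built eagerly and cached per closed format type, reflection should only run once per distinct 't:

```fsharp
open System
open System.Collections.Concurrent
open Microsoft.FSharp.Reflection

// Cache of pre-built no-op functions, keyed by the closed function type.
let noopCache = ConcurrentDictionary<Type, obj>()

let rec dummyFuncEager (funcTy : Type) retVal =
    if FSharpType.IsFunction(funcTy) then
        let retTy = funcTy.GenericTypeArguments.[1]
        // Build the inner dummy once, up front, instead of on every invocation
        let inner = dummyFuncEager retTy retVal
        FSharpValue.MakeFunction(funcTy, (fun _ -> inner))
    else box retVal

let trace4 lvl (fmt : Printf.StringFormat<'t, unit>) : 't =
    if lvl <= 3 then Printf.kprintf (fun s -> Console.WriteLine(s)) fmt
    else downcast noopCache.GetOrAdd(typeof<'t>, fun ty -> dummyFuncEager ty (box ()))
```

Whether this actually closes the gap to option 1 would need measuring; it still allocates nothing per call on the "false" path apart from the dictionary lookup.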
The trick is to use the kprintf function:
let trace level fmt =
    Printf.kprintf (fun s -> if level > 3 then printfn "%s" s) fmt
trace 3 "Number %d" 10
trace 4 "Better number %d" 42
You can use it via partial application, so that all parameters required by the format string of kprintf become parameters of the function you are defining. The function then calls the continuation with the final string, and you can decide what to do with it.
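To illustrate the partial-application point (using the trace function above, where messages print when the level is greater than 3):

```fsharp
// Partially applying trace fixes the level and format; the remaining
// int parameter comes from the %d in the format string.
let traceAnswer : int -> unit = trace 4 "Better number %d"
traceAnswer 42    // prints "Better number 42"
traceAnswer 100   // prints "Better number 100"
```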
Here's a way to do it; however, the "no-op" case needs to use reflection and boxing, so it's probably much slower than simply formatting the string and throwing it away :-)
open System
open Microsoft.FSharp.Reflection
let rec dummyFunc (funcTy : Type) retVal =
    if FSharpType.IsFunction(funcTy) then
        let retTy = funcTy.GenericTypeArguments.[1]
        FSharpValue.MakeFunction(funcTy, (fun _ -> dummyFunc retTy retVal))
    else box retVal

let trace lvl (fmt : Printf.StringFormat<'t, unit>) =
    if lvl <= 3 then Printf.kprintf (fun s -> Console.WriteLine(s)) fmt
    else downcast (dummyFunc typeof<'t> ())
trace 3 "%s : %i" "abc" 1 // abc : 1
trace 4 "%s : %i" "abc" 1 // <nothing>
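To see what dummyFunc does in isolation (an illustrative snippet, not part of the original answer): it walks the curried function type and builds a chain of one-argument functions that ignore everything and finally return the boxed value:

```fsharp
// Builds a string -> int -> unit that discards both arguments.
let noop = dummyFunc typeof<string -> int -> unit> () :?> (string -> int -> unit)
noop "ignored" 42   // returns (); nothing is formatted or printed
```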
Looking at your requirements, it seems to me that the most important thing is not avoiding the tracing/logging per se, but avoiding the work that goes into formatting the string to be traced.
So for example, using System.Diagnostics.Trace instead of printf would not help you, because it is the sprintf call that is taking the time, yes?
So, there are a couple of ways to delay the formatting. One is to use a unit function, as you are doing originally. Alternatively, you can use lazy as an equivalent.
open System
let traceUnitFn lvl (fs : unit -> string) =
    if lvl <= 3 then Console.WriteLine(fs())

let traceLazy lvl (s : Lazy<string>) =
    if lvl <= 3 then Console.WriteLine(s.Force())
Timing those (on my machine) gives the following:
printfn "traceUnitFn"
#time
for i in 1..1000000 do
    traceUnitFn 4 (fun _ -> sprintf "%s:%i" (i.ToString()) i)
#time
// traceUnitFn
// Real: 00:00:00.008, CPU: 00:00:00.000, GC gen0: 7, gen1: 0, gen2: 0
printfn "traceLazy"
#time
for i in 1..1000000 do
    traceLazy 4 <| lazy (sprintf "%s:%i" (string i) i)
#time
// traceLazy
// Real: 00:00:00.053, CPU: 00:00:00.046, GC gen0: 56, gen1: 0, gen2: 0
So, OK, using lazy is a lot slower.
But is sprintf really the bottleneck here? Let's try calling it directly.
First we'll need a separate function for each number of parameters:
let trace0Param level fmt =
    if level <= 3 then printfn fmt

let trace1Param level fmt x1 =
    if level <= 3 then printfn fmt x1

let trace2Param level fmt x1 x2 =
    if level <= 3 then printfn fmt x1 x2
If we test these, we get:
printfn "trace0Param"
#time
for i in 1..1000000 do
    trace0Param 4 "hello"
#time
// trace0Param
// Real: 00:00:00.007, CPU: 00:00:00.000, GC gen0: 8, gen1: 0, gen2: 0
printfn "trace1Param"
#time
for i in 1..1000000 do
    trace1Param 4 "%i" i
#time
// trace1Param
// Real: 00:00:00.007, CPU: 00:00:00.000, GC gen0: 7, gen1: 0, gen2: 0
printfn "trace2Param with i.ToString"
#time
for i in 1..1000000 do
    trace2Param 4 "%s:%i" (i.ToString()) i
#time
// trace2Param with i.ToString
// Real: 00:00:00.123, CPU: 00:00:00.124, GC gen0: 25, gen1: 0, gen2: 0
The first two are just as fast as your original, so the problem is all in the i.ToString() call.
We can confirm this if we hard code the string parameter to be "hello" instead:
printfn "trace2Param with hello"
#time
for i in 1..1000000 do
    trace2Param 4 "%s:%i" "hello" i
#time
// trace2Param with hello
// Real: 00:00:00.007, CPU: 00:00:00.000, GC gen0: 7, gen1: 0, gen2: 0
This last one is just as fast. And notice how many fewer GCs as well. The GCs will hurt you if performance is critical.
So the question really becomes: how much work are you doing in transforming values in order to trace them?
Are you going to be doing expensive things like i.ToString() a lot? If not, then you don't really need the laziness at all.
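A related tweak worth noting (my own illustrative variation, not one of the measurements above): if the expensive part is converting an argument to a string, you can often defer that conversion by passing the raw value and letting the format specifier do the work inside printfn, so it never runs on the "false" path:

```fsharp
// Variation on the trace2Param test above: pass i itself and let %O do
// the ToString conversion inside printfn. Since trace2Param skips the
// printfn call entirely at level 4, the conversion never happens.
for i in 1..1000000 do
    trace2Param 4 "%O:%i" i i
```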
Finally, and much more importantly, all these micro-profiling measurements are absolutely useless out of context, and any decision based on them is premature.
For example, even the worst implementation is doing 8 million traces a second. Is that really a bottleneck based on profiling your real system? If not, then I wouldn't worry about any of this, and just pick the simplest implementation.