I have been attempting to optimize a JavaScript string encode function (written in C#) to improve its performance, as part of a broader effort to improve the performance of an enterprise web application. We tried the .NET HttpUtility.JavaScriptStringEncode method,
but it does not encode strings the way our data layer expects (and changing the data layer is not on the table).
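For context, this is roughly how the built-in method is called (the sample input here is illustrative, not our real data; it requires a reference to System.Web.dll on the full .NET Framework):

```csharp
using System;
using System.Web;

class EncodeDemo
{
    static void Main()
    {
        string raw = "He said \"hello\" <b>world</b>\r\n";

        // Escapes quotes, backslashes, control characters, and certain
        // HTML-sensitive characters (some as \uXXXX sequences).
        Console.WriteLine(HttpUtility.JavaScriptStringEncode(raw));

        // Overload that also wraps the result in double quotes.
        Console.WriteLine(HttpUtility.JavaScriptStringEncode(raw, true));
    }
}
```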
Using the RedGate profiler tool, I determined that, at best, our function accounts for around 8% of total page load time. When I use the .NET function (on a page that tolerates its encoding), it comes in at around 0.08% of total page load. We reflected the .NET function to see what sorcery it was working, but when I copied the reflected code into our function and ran it directly, it performed at around 10%.
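To sanity-check the profiler numbers, I also ran a rough Stopwatch micro-benchmark along these lines (CustomJavaScriptEncode is a hypothetical stand-in for our function, which I can't post; build in Release and run without a debugger attached):

```csharp
using System;
using System.Diagnostics;
using System.Web;

class EncodeBenchmark
{
    static void Main()
    {
        string input = new string('x', 1024) + "\"<>\r\n";
        const int iterations = 100000;

        // Warm up so JIT compilation doesn't skew the first timing.
        HttpUtility.JavaScriptStringEncode(input);
        CustomJavaScriptEncode(input);

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            HttpUtility.JavaScriptStringEncode(input);
        sw.Stop();
        Console.WriteLine("Built-in: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < iterations; i++)
            CustomJavaScriptEncode(input);
        sw.Stop();
        Console.WriteLine("Custom:   {0} ms", sw.ElapsedMilliseconds);
    }

    // Placeholder for the in-house encoder, which can't be posted here.
    static string CustomJavaScriptEncode(string value)
    {
        return value; // stand-in body
    }
}
```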
I'm curious as to why. How is the .NET function prepared differently so as to yield such a performance advantage?
I apologize in advance, but I cannot paste the function we are using; I don't think that should prevent the question from being answered.
Can you compare the IL produced when you paste the reflected code into your library with the IL present in the .NET assembly itself? The compiler switches used to build each assembly (e.g. /optimize) can cause differences like this.
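For example, a Debug build typically emits a DebuggableAttribute that disables JIT optimization, while the shipped .NET assemblies are fully optimized. A quick sketch to check both assemblies (assumes the full .NET Framework with a reference to System.Web):

```csharp
using System;
using System.Diagnostics;
using System.Reflection;
using System.Web;

class OptimizationCheck
{
    static void PrintOptimizationState(Assembly asm)
    {
        var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            asm, typeof(DebuggableAttribute));
        // No attribute, or one that leaves the optimizer on, means the
        // JIT is free to fully optimize this assembly's code.
        bool optimized = attr == null || !attr.IsJITOptimizerDisabled;
        Console.WriteLine("{0}: JIT optimizer {1}",
            asm.GetName().Name, optimized ? "enabled" : "DISABLED");
    }

    static void Main()
    {
        PrintOptimizationState(typeof(HttpUtility).Assembly);     // System.Web
        PrintOptimizationState(Assembly.GetExecutingAssembly()); // your code
    }
}
```

If your assembly reports the optimizer as disabled, rebuilding in Release (or with /optimize+) and re-profiling would be the first thing to try.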