How much performance (if any) does a Windows service gain from a release build over a debug build, and why?
For managed code, unless you have a lot of code conditionally compiled in for DEBUG builds, there should be little difference: the IL should be pretty much the same. The JIT compiler generates different machine code depending on whether the process is running under a debugger, but the compilation to IL itself isn't affected much.
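For example, anything guarded by the DEBUG symbol, or calls to methods marked [Conditional("DEBUG")] such as Debug.Assert, is removed from release builds entirely; that is usually where the large differences come from when they do occur. A minimal sketch (the Worker class and its logging are made up for illustration):

    using System;
    using System.Diagnostics;

    class Worker
    {
        public void Process(int[] items)
        {
    #if DEBUG
            // Compiled only when the DEBUG symbol is defined (Debug builds).
            Console.WriteLine("Processing {0} items", items.Length);
    #endif
            // Debug.Assert is marked [Conditional("DEBUG")], so in Release builds
            // the call - including evaluation of its arguments - is omitted.
            Debug.Assert(items != null, "items must not be null");

            foreach (var item in items)
            {
                // ... real work ...
            }
        }
    }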
There are some things the /optimize flag does when compiling to IL, but they aren't particularly aggressive. Some of those IL optimizations would probably be handled by the jitter's own optimizations anyway, even if they weren't applied in the IL (such as the removal of nops).
See Eric Lippert's article http://blogs.msdn.com/ericlippert/archive/2009/06/11/what-does-the-optimize-switch-do.aspx for details:
The /optimize flag does not change a huge amount of our emitting and generation logic. We try to always generate straightforward, verifiable code and then rely upon the jitter to do the heavy lifting of optimizations when it generates the real machine code. But we will do some simple optimizations with that flag set.
Read Eric's article for the details of what /optimize does differently in IL generation.
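For reference, the difference between the two configurations mostly comes down to the switches the build passes to the C# compiler; a typical Debug configuration compiles with something roughly like the first line below, while Release uses something like the second (exact defaults vary by Visual Studio version):

    csc /debug:full /optimize- /define:DEBUG;TRACE Program.cs      (Debug-style build)
    csc /debug:pdbonly /optimize+ /define:TRACE Program.cs         (Release-style build)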
Well, though the question is a duplicate, I feel that some of the better answers on the original question are at the very bottom. Personally, I have seen situations where there is an appreciable difference between debug and release mode (for example, property access, where there was a 2x difference between debug and release). Whether this difference would show up in actual software (rather than a benchmark-like program) is debatable, but I have seen it happen in one product I worked on.
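As a rough illustration of the kind of micro-benchmark where this shows up (the numbers depend on your machine and runtime, and the 2x figure above came from one specific case, not a general rule), assuming a trivial auto-property:

    using System;
    using System.Diagnostics;

    class Point
    {
        public int X { get; set; }   // trivial auto-property
    }

    class Program
    {
        static void Main()
        {
            var p = new Point();
            var sw = Stopwatch.StartNew();

            long sum = 0;
            for (int i = 0; i < 100000000; i++)
            {
                p.X = i;
                sum += p.X;   // in an optimized (Release) build the JIT typically inlines these accessors
            }

            sw.Stop();
            Console.WriteLine("sum={0}, elapsed={1} ms", sum, sw.ElapsedMilliseconds);
        }
    }

Run it in Release mode without a debugger attached (Ctrl+F5) to see the optimized behaviour.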
From Neil's answer on the original question, via MSDN Social:
It is not well documented; here's what I know. The compiler emits an instance of System.Diagnostics.DebuggableAttribute. In a debug build, its IsJITOptimizerDisabled property is True; in a release build it is False. You can see this attribute in the assembly manifest with ildasm.exe.
The JIT compiler uses this attribute to disable optimizations that would make debugging difficult, notably the ones that move code around, such as loop-invariant hoisting. In selected cases this can make a big difference in performance, though not usually.
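If you'd rather not dig through ildasm.exe output, here is a small sketch that reads the attribute back via reflection at runtime:

    using System;
    using System.Diagnostics;
    using System.Reflection;

    class Program
    {
        static void Main()
        {
            var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
                Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));

            if (attr == null)
                Console.WriteLine("No DebuggableAttribute emitted (fully optimized build).");
            else
                Console.WriteLine("JIT optimizer disabled: " + attr.IsJITOptimizerDisabled);
        }
    }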
Mapping breakpoints to execution addresses is the job of the debugger. It uses the .pdb file and information generated by the JIT compiler that maps IL instructions to native code addresses. If you were writing your own debugger, you'd use ICorDebugCode::GetILToNativeMapping().