(This was meant to be a general hypothetical question, me whining that .NET was a pig and begging for reasons. It was not really meant to be a question about my specific app.)
Currently I am rewriting some old C++ code in C#. We are porting over all legacy applications. I have C++ applications that take at most 3% CPU; mostly they use none. I then take the code, copy and paste it, reformat it to C# syntax and .NET libraries, and BAM! 50% CPU. What's the reason for this? I thought at first it was JIT compilation, but even after each code path has been exercised and the whole thing has been JITted, the issue remains.
I have also noticed huge memory increases. Apps that took 9 MB under full load now start at 10 MB and run at 50 MB. I realize hardware is cheap, but I want to understand what causes this. Is it a cause for alarm, or is .NET just that much of a pig?
Update 1: Answer to Skeet
I am familiar with C#. I change things to LINQ, and so on. I typically take code and reduce the number of lines, and so on. Could you give some more examples of what a C++ person might do wrong in .NET?
Update 2
This was meant to be a general question, but the specific app that has the issue is as follows.
It has a thread that uses an ODBC driver to get data from a Paradox DB. It then uses LINQ to transform this to a SQL DB and post it. I have run it through the ANTS profiler, and it seems the dataset filling takes the most time, followed by the LINQ posting. I know reflection usage is one of my problem areas, but I don't see how to do what I need without it. I plan to change my strings to StringBuilders. Is there any difference between these two?
(int)datarow["Index"]
and
Convert.ToInt32(datarow["Index"])
I changed all string concatenation to format strings. That didn't reduce overhead. Does anyone know the difference between a DataReader versus a DataAdapter with DataSets?
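For the planned StringBuilder change, here is a minimal sketch of the pattern (the loop and separator are made up for illustration): `+=` in a loop allocates a new string every iteration, while StringBuilder appends into one growable buffer.

```csharp
using System;
using System.Text;

// string += in a loop copies the whole string each iteration;
// StringBuilder appends into a single growable buffer instead.
var sb = new StringBuilder();
for (int i = 0; i < 5; i++)
{
    if (sb.Length > 0) sb.Append(',');
    sb.Append(i);
}
string joined = sb.ToString();
Console.WriteLine(joined); // 0,1,2,3,4
```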
How familiar are you with C# and .NET? If you're just porting over the legacy code keeping C++ idioms, I'm not at all surprised that it's being a hog. Porting applications verbatim from one platform to another is almost never a good idea. (Of course, you haven't said that you've definitely done that.) Also, if you're expert C++ developers but novice .NET developers, you should expect your code to perform as if you're novices on the platform.
We can't really tell where the performance is going without knowing more about the app - although I wouldn't be surprised to hear that string concatenation was the culprit. How many processors do you have on the box? If it's 2, then the app is basically taking up everything it can for a single thread...
.NET is generally going to be heavier in terms of memory than a C++ app, but should be at least comparable in terms of speed for most tasks. Taking 50MB instead of 9MB sounds like more than I'd expect, but I wouldn't immediately be too worried.
Both the memory and the CPU performance should be investigated with the use of a good profiler. I can recommend JetBrains dotTrace Profiler, but there are plenty of others out there.
AFAIK there is little difference between (int)datarow["Index"] and Convert.ToInt32(datarow["Index"]).
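To make the small difference concrete, here is a self-contained sketch: the cast is an unbox and requires the exact runtime type, while Convert.ToInt32 goes through IConvertible and so also handles, say, a SMALLINT column boxed as short (the values here are invented for illustration).

```csharp
using System;

object boxedInt = 42;                 // what datarow["Index"] returns for an int column
int viaCast = (int)boxedInt;          // unbox: OK because the runtime type is exactly int
int viaConvert = Convert.ToInt32(boxedInt);

object boxedShort = (short)42;        // e.g. a SMALLINT column
bool castThrew = false;
try { int bad = (int)boxedShort; }    // unboxing a short as int fails
catch (InvalidCastException) { castThrew = true; }
int ok = Convert.ToInt32(boxedShort); // Convert performs the widening

Console.WriteLine(viaCast == viaConvert); // True
Console.WriteLine(castThrew);             // True
Console.WriteLine(ok);                    // 42
```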
However, there is a big difference if you use stream-mode data readers:
int orderIndex = <order of Index column in projection list>;
using (OdbcDataReader rdr = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
    while (rdr.Read())
    {
        int index = rdr.GetInt32(orderIndex);
    }
}
The SequentialAccess command behavior is the fastest way to process SQL results, because it eliminates the extra caching needed for random access.
A second note: it seems you're using DataSets. DataSets are easy to use, but they are very, very far from what anyone could call 'fast'. With DataSets you are basically running an in-memory storage engine (I think it's based on Rushmore). If you want to squeeze out every CPU cycle and every bit of RAM, then you'll have to use leaner components (e.g. raw arrays of structs instead of DataSets and DataTables).
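A hedged sketch of the contrast, with no database involved (the column names and values are made up): a DataTable stores every value as a boxed object plus change-tracking machinery, while an array of value tuples keeps the ints inline.

```csharp
using System;
using System.Data;

// DataTable path: values live as boxed objects inside DataRows,
// with versioning and change tracking layered on top.
var table = new DataTable();
table.Columns.Add("Index", typeof(int));
table.Columns.Add("Name", typeof(string));
table.Rows.Add(1, "first");
int fromTable = (int)table.Rows[0]["Index"]; // an unbox on every access

// Lean path: values stored inline in an array of value tuples,
// no boxing, no tracking machinery.
(int Index, string Name)[] rows = { (1, "first") };
int fromArray = rows[0].Index;

Console.WriteLine(fromTable == fromArray); // True
```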
When you compare apples to apples, the CLR can hold its ground against native code. IL code can be compiled to native images at deployment time with NGEN. Typical CLR overheads like bounds checks can be avoided. The GC pre-emption 'pause' happens only if you're careless with your allocations (just because you have a GC doesn't mean you should allocate left and right). And the CLR actually has some aces up its sleeve when it comes to memory layout, since it can rearrange objects in memory to fit access patterns and improve TLB and L2 locality.
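One small illustration of the 'careless allocation' point (buffer size and loop count are arbitrary): hoisting a buffer out of a hot loop removes the per-iteration garbage that drives GC pauses.

```csharp
using System;

const int Iterations = 1000;

// Allocation-heavy pattern: a fresh buffer every iteration becomes
// garbage immediately and keeps the collector busy.
for (int i = 0; i < Iterations; i++)
{
    var temp = new byte[4096];
    temp[0] = (byte)i;
}

// Allocation-light pattern: one buffer, reused; no garbage per iteration.
var buffer = new byte[4096];
for (int i = 0; i < Iterations; i++)
{
    buffer[0] = (byte)i;
}
Console.WriteLine(buffer[0]); // 231, i.e. (byte)999
```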
BTW, if you think the 'C++ can run circles around C#' debate is something new: I remember a time when C could run circles around C++ ('virtual calls are impossibly slow', they were saying), and I hear there was a time when assembly ran circles around C.