I understand pointers and the rare need to use them in C# code. My question is: what is the reasoning behind having to explicitly state "unsafe" in a block of code? Additionally, why must a compiler option be changed to allow "unsafe" code?
Bottom Line: What in the CLR (or language specs) makes it so we can't just use pointers whenever we want (much like C and C++) without having to type "unsafe" and change the compiler option?
For clarification: I know what "unsafe" and "safe" code is. It's just a question of why must we do all the extra work (ok, not THAT much extra) just to be able to use these features.
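For concreteness, here is a minimal sketch of the kind of code I mean (the names are just illustrative):

```csharp
using System;

class Program
{
    static void Main()
    {
        int value = 42;
        // This block, plus the /unsafe compiler option, is the "extra work" I'm asking about.
        unsafe
        {
            int* p = &value;   // taking an address requires an unsafe context
            *p = 43;           // writing through the pointer bypasses the usual safety checks
        }
        Console.WriteLine(value);   // prints 43
    }
}
```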
There's an interview with C# creator Anders Hejlsberg that touches on the subject here. Basically, it's exactly what @Marc Gravell said: type safety first, unsafety by explicit declaration.
So to answer your question: nothing in the CLR prevents it; it's a language design decision. The idiom lets you work with safety gloves on when dealing with types. You can take the gloves off if you want, but you have to make that choice actively.
Edit:
To address the clarification added to the question ("why must we do all the extra work just to be able to use these features"):
As mentioned in the interview I linked, it was an explicit design decision. C# is essentially an evolution of Java, and Java has no pointers at all. The designers wanted to allow pointers, but because C# would typically be attracting Java developers, they felt the default behavior should match Java's, i.e. no pointers, while still permitting pointer use by explicit declaration.
So the "extra work" is deliberate: it forces you to think about what you are doing before you do it. Being explicit makes you at least consider: "Why am I doing this? Do I really need a pointer when a reference type would suffice?"
It is largely about being verifiable. By stating unsafe, the gloves are off: the system can no longer guarantee that your code won't run amok. In most cases it is highly desirable to stay in the safe zone.
This gets more noticeable with partial trust (add-ins etc.), but is still valuable in regular code.
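As a rough illustration (assuming the .NET Framework SDK tools csc.exe and PEVerify.exe are on your PATH; locations vary by SDK version), you can observe the loss of verifiability directly:

```
:: Sketch only: compile pointer code, then ask the verifier about it.
csc /unsafe Program.cs
:: Without /unsafe, the compile instead fails with error CS0227.
PEVerify Program.exe
:: PEVerify flags the unverifiable instructions introduced by the unsafe block.
```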
Actually the CLR makes no requirements at all about an /unsafe switch or keyword. In fact, C++/CLI (the C++ language that runs under the CLR) has no such /unsafe switch, and pointers can be used freely on the CLR.
So I would rephrase your question as "Why does C# require the /unsafe switch before pointers can be used?" And the answer is the one given in the other answers here: to make you take a conscious decision to give up the ability to run in anything less than Full Trust mode on the CLR. C++/CLI code virtually always requires Full Trust, and C# code requires it whenever you call code that requires Full Trust, or whenever you use pointers yourself.
When you use an unsafe block, the compiled code becomes unverifiable. Unverifiable code requires certain permissions to execute, and you might not want to allow it in your output (especially in a shared-source environment), so there is a compiler switch that disallows it by default.
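For reference, in modern SDK-style projects the same opt-in is a project-file property rather than a raw csc switch; a sketch of the relevant .csproj fragment:

```xml
<!-- Without this property, any `unsafe` block fails to compile (error CS0227). -->
<PropertyGroup>
  <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
</PropertyGroup>
```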