
C#: Benefit of explicitly stating "unsafe" / compiler option

I understand pointers and the rare need to use them in C# code. My question is: what is the reasoning behind having to explicitly state "unsafe" in a block of code? And why must a compiler option be changed to allow "unsafe" code?

Bottom line: what in the CLR (or the language spec) makes it so we can't just use pointers whenever we want (much as in C and C++) without having to type "unsafe" and change the compiler option?

For clarification: I know what "unsafe" and "safe" code is. It's just a question of why we must do all the extra work (OK, not that much extra) just to be able to use these features.
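To make the question concrete, here is a minimal sketch (my own illustration, not from the original post) of what the language demands: the pointer operations themselves must sit inside an unsafe block, and the code only compiles if the unsafe compiler option is turned on.

    // Sketch: pointer use in C# requires BOTH an unsafe context and the
    // compiler's unsafe option (csc /unsafe, or AllowUnsafeBlocks in the project).
    class PointerDemo
    {
        static void Main()
        {
            int value = 42;

            unsafe
            {
                int* p = &value;                 // taking an address needs an unsafe context
                *p += 1;                         // write through the pointer
                System.Console.WriteLine(*p);    // prints 43
            }
        }
    }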

— asked Mar 03 '09 by Inisheer

4 Answers

There's an interview with C# creator Anders Hejlsberg that touches on the subject here. Basically, exactly what @Marc Gravell said: type safety first, unsafety by explicit declaration.

So to answer your question: nothing in the CLR prevents it; it's a language design choice meant to keep the safety gloves on when you're dealing with types. If you want to take the gloves off, that's your choice, but you have to make that choice actively.

Edit:

For clarification: I know what "unsafe" and "safe" code is. It's just a question of why we must do all the extra work (OK, not that much extra) just to be able to use these features.

As mentioned in the interview I linked, it was an explicit design decision. C# is essentially an evolution of Java, and Java has no pointers at all. The designers wanted to allow pointers; however, because C# would typically be bringing in Java developers, they felt the default behavior should be similar to Java's, i.e. no pointers, while still allowing the use of pointers by explicit declaration.

So the "extra work" is deliberate to force you to think about what you are doing before you do it. By being explicit, it forces you to at least consider: "Why am I doing this? Do I really need a pointer when a reference type will suffice?"

— answered by Randolpho

It is largely about being verifiable. By stating unsafe, the gloves are off - the system can no longer guarantee that your code won't run amok. In most cases it is highly desirable to stay in the safe zone.

This gets more noticeable with partial trust (add-ins, etc.), but it is still valuable in regular code.

— answered by Marc Gravell

Actually, the CLR imposes no requirement at all for an /unsafe switch or keyword. In fact, C++/CLI (the C++ language that runs on the CLR) has no such switch, and pointers can be used freely on the CLR.

So I would rephrase your question as "Why does C# require the /unsafe switch before pointers can be used?" And the answer is as stated in the other answers here: to help you make a conscious decision to give up the ability to run in anything less than Full Trust mode on the CLR. C++ code virtually always requires Full Trust on the CLR, whereas C# code requires it only when you call code that requires Full Trust or when you use pointers.

— answered by Andrew Arnott

When you use an unsafe block, it makes the code unverifiable. Unverifiable code requires certain permissions to execute, and you might not want to allow it in your output (especially in a shared-source environment), so there is a switch in the compiler to disallow it.
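For reference, a sketch of the switch being described: on the command-line compiler it is the /unsafe option, and in MSBuild project files the same setting is exposed as the AllowUnsafeBlocks property.

    csc /unsafe Program.cs

    <!-- or, in the .csproj file -->
    <PropertyGroup>
      <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
    </PropertyGroup>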

— answered by casperOne