Why are C# "using aliases" not used by default? [closed]

Consider the following code.

using System.ComponentModel.DataAnnotations;

namespace Foo
{
    public class Bar
    {
        [Required, MaxLength(250)]
        public virtual string Name { get; set; }
    }
}

Unless you have a fancy IDE (that is doing all sorts of lookups & static analysis behind the scenes), it's pretty ambiguous where "Required" & "MaxLength" actually come from, especially when several namespaces with similar purposes might be imported.

As a relative newbie to C#, I always find myself having a hard time figuring out where certain things come from, especially when looking at code snippets in places like Stack Overflow. Now consider the same class written with a namespace alias:

using DataAnnotations = System.ComponentModel.DataAnnotations;

namespace Foo
{
    public class Bar
    {
        [DataAnnotations.Required, DataAnnotations.MaxLength(250)]
        public virtual string Name { get; set; }
    }
}

Now it's very obvious where "Required" & "MaxLength" come from. You could take it another step and do something like:

using Required = System.ComponentModel.DataAnnotations.RequiredAttribute;
using MaxLength = System.ComponentModel.DataAnnotations.MaxLengthAttribute;

namespace Foo
{
    public class Bar
    {
        [Required, MaxLength(250)]
        public virtual string Name { get; set; }
    }
}

This is now very similar to how imports work in both PHP & JavaScript (ES6).

I'm curious as to why this isn't the default for C#, and why pretty much every other C# dev I have spoken with considers aliases bad practice. Is there some underlying performance reason, perhaps?

asked Apr 13 '17 by Brad

1 Answer

Why does it matter where types/definitions come from?

If you're truly that concerned about what namespace a type lives in, Visual Studio gives you several ways to find out; my two favourites are below:

  • Hover the type / declaration. This often shows you the full type name. (Hovering a new SomeType() expression shows you the constructor being called, and the same goes for attributes, which are constructor calls as well.)
  • Press F12 / Go To Definition. Even if you don't have the source for the definition, pressing F12 or using Right Click -> Go To Definition will take you to a metadata file that shows you all the public members of the type. This doesn't work on keywords (out, ref, return, null, etc.), but it works on the basic aliased types (int, string, etc.) and traditional types (enum, interface, class, struct, etc.). This includes the namespace, type name, and all public API members. If there are XML docs, they are included as well. If you F12 an extension method, it takes you to the class metadata for that extension method, which is extremely helpful for identifying where a method came from if you feel it was injected by something it shouldn't have been.

So it's not really that difficult to determine what namespace a type came from. So what about using aliases, then: when do we actually need them?

Real-life scenario: I have been working on a Windows Forms model for the XNA Framework. The XNA Framework has a Color type, and my framework has a Color type. I often use both those namespaces together but only need one of the Color types natively. Oftentimes I have a list of using statements that includes something like:

using XnaColor = Microsoft.Xna.Framework.Color;
using Color = Evbpc.Framework.Drawing.Color;

So this resolves an ambiguity issue.
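
As a quick sketch of what that buys you (the Evbpc.Framework.Drawing.Color type is from my own framework, and the conversion helper below is hypothetical), both Color types can sit side by side without fully qualifying either one:

using XnaColor = Microsoft.Xna.Framework.Color;
using Color = Evbpc.Framework.Drawing.Color;

// Hypothetical helper: XNA's Color exposes byte R/G/B/A channels, and here I
// assume my own Color can be constructed from the same four channels.
public static Color ToFrameworkColor(XnaColor source)
{
    return new Color(source.R, source.G, source.B, source.A);
}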

Why aren't using aliases a default?

Probably because they're almost never necessary. We don't really need them. If you're concerned about what namespace a type comes from, it's far easier to do a quick lookup than it is to alias everything and force a namespace to be defined. At that rate you may as well bar using statements altogether and fully-qualify everything.

The biggest two use-cases I've ever had for a using alias are:

  1. Resolve ambiguity between types. See example above.
  2. Resolve ambiguity between namespaces. Same idea as above, but I alias a whole namespace if a lot of types are duplicated; a short usage sketch follows.

    using XnaF = Microsoft.Xna.Framework;
    using Evbpc.Framework.Drawing;
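
A quick usage sketch of this second case (again assuming my Evbpc.Framework.Drawing namespace; the properties are hypothetical): a plain Color resolves to the imported namespace, while the duplicated XNA type stays reachable through the namespace alias.

// Color on its own resolves to Evbpc.Framework.Drawing.Color;
// the XNA one is reached through the namespace alias.
public Color Tint { get; set; }
public XnaF.Color BackgroundTint { get; set; }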
    

If you're generating code with Visual Studio, importing types, etc., it's not going to use an alias. Visual Studio will, instead, fully-qualify the name as necessary. Ever right-click a squiggle and get A.B.Type as the only option instead of using A.B? Well that's usually a good spot for an alias.
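
For example (reusing the attribute from your question), the generated or quick-fixed code typically looks like this, with the name fully qualified inline rather than aliased:

// Fully-qualified inline: no extra using directive and no alias.
[System.ComponentModel.DataAnnotations.MaxLength(250)]
public virtual string Name { get; set; }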

I'll warn you though: using aliases seem to increase the maintenance burden. (I don't have numbers to back this up, but I won't lie - in the project where I have several aliases, I frequently forget how/what I named an alias.)

Generally, in my experience, if you have to use a using alias, you are probably breaking a rule somewhere.

Why don't we even use them on a regular basis?

Because they suck. They make code harder to read (take your example of DataAnnotations.MaxLength: why do I need to read that? I don't care that MaxLength is in System.ComponentModel.DataAnnotations, I only care that it's set properly), they disorganize code (now I am forced to remember that an attribute is in System.ComponentModel.DataAnnotations rather than System.ComponentModel.DataAnnotations.Schema), and they are just generally clunky.

Take your previous example: I have an Entity Framework project with attributes something like the following on a class:

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

[Key, Column(Order = 2)]
[MaxLength(128)]
public string UserId { get; set; }

[ForeignKey(nameof(UserId))]
public virtual ApplicationUser User { get; set; }

Now with your example I would have one of the following:

using DataAnnotations = System.ComponentModel.DataAnnotations;

[DataAnnotations.Key, DataAnnotations.Schema.Column(Order = 2)]
[DataAnnotations.MaxLength(128)]
public string UserId { get; set; }

[DataAnnotations.Schema.ForeignKey(nameof(UserId))]
public virtual ApplicationUser User { get; set; }

Or:

using DataAnnotations = System.ComponentModel.DataAnnotations;
using Schema = System.ComponentModel.DataAnnotations.Schema;

[DataAnnotations.Key, Schema.Column(Order = 2)]
[DataAnnotations.MaxLength(128)]
public string UserId { get; set; }

[Schema.ForeignKey(nameof(UserId))]
public virtual ApplicationUser User { get; set; }

Or worse yet:

using KeyAttribute = System.ComponentModel.DataAnnotations.KeyAttribute;
using MaxLengthAttribute = System.ComponentModel.DataAnnotations.MaxLengthAttribute;
using ColumnAttribute = System.ComponentModel.DataAnnotations.Schema.ColumnAttribute;
using ForeignKeyAttribute = System.ComponentModel.DataAnnotations.Schema.ForeignKeyAttribute;

[Key, Column(Order = 2)]
[MaxLength(128)]
public string UserId { get; set; }

[ForeignKey(nameof(UserId))]
public virtual ApplicationUser User { get; set; }

I'm sorry, but those are just terrible. This is why 'every' dev you talk to avoids them and thinks this is a bad idea. I'll just stick to intelligently[1] importing namespaces and deal with the very small chance that types clash; when they do, I'll use an alias.

If you really cannot find what namespace a type is in (say you pull code from Stack Overflow), then hit up MSDN, go to the Library and search for the type. (E.g., search for KeyAttribute or MaxLengthAttribute and the first links are the API references.)

[1]: By intelligently I mean doing so with responsibility and care. Don't just blindly import / use namespaces; try to limit them as much as possible. SRP and polymorphism usually allow us to keep the using list pretty small in each file.

answered Sep 21 '22 by Der Kommissar