 

Why is extending native objects a bad practice?

People also ask

Why is extending built in JavaScript objects not a good idea?

Extending the JavaScript built-in objects is not a good idea, because if the browser/JS engine decides to provide the same method that you have added, your method will be overridden and the JS implementation (which may differ from yours) will take over.

Why is it a bad idea to modify prototypes?

The problem is that a prototype can be modified in several places. For example, one library will add a map method to Array's prototype, and your own code will add a method with the same name but a different purpose. So one of the two implementations will be broken.
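
A contrived sketch of that kind of collision (the library names are hypothetical, and flatten is used instead of map to keep the example short):

// Hypothetical illustration: two libraries patching the same prototype.
// "libA" defines flatten() to flatten one level deep...
Array.prototype.flatten = function () {
  return [].concat(...this);
};

// ..."libB", loaded later, silently replaces it with a recursive version.
Array.prototype.flatten = function () {
  return this.reduce(
    (acc, x) => acc.concat(Array.isArray(x) ? x.flatten() : x),
    []
  );
};

// Code written against libA's one-level flatten now behaves differently:
[[1, [2]], [3]].flatten(); // libA: [1, [2], 3] / libB: [1, 2, 3]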


When you extend an object, you change its behaviour.

Changing the behaviour of an object that will only be used by your own code is fine. But when you change the behaviour of something that is also used by other code there is a risk you will break that other code.

When it comes to adding methods to the Object and Array classes in JavaScript, the risk of breaking something is very high, due to how JavaScript works. Long years of experience have taught me that this kind of thing causes all kinds of terrible bugs in JavaScript.

If you need custom behaviour, it is far better to define your own class (perhaps a subclass) instead of changing a native one. That way you will not break anything at all.
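
For instance, a minimal sketch of the subclass approach (the class and method names are made up):

// The custom behaviour lives on a subclass,
// so Array.prototype itself is never touched.
class MyArray extends Array {
  last() {
    return this[this.length - 1];
  }
}

const xs = MyArray.of(1, 2, 3);
xs.last(); // 3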

The ability to change how a class works without subclassing it is an important feature of any good programming language, but it is one that must be used rarely and with caution.


There's no measurable drawback, like a performance hit; at least nobody has mentioned one. So this is a question of personal preference and experience.

The main pro argument: It looks better and is more intuitive: syntax sugar. It is a type/instance specific function, so it should be specifically bound to that type/instance.

The main contra argument: Code can interfere. If lib A adds a function, it could overwrite lib B's function. This can break code very easily.

Both have a point. When you rely on two libraries that directly change your types, you will most likely end up with broken code as the expected functionality is probably not the same. I totally agree on that. Macro-libraries must not manipulate the native types. Otherwise you as a developer won't ever know what is really going on behind the scenes.

And that is the reason I dislike libs like jQuery, underscore, etc. Don't get me wrong; they are absolutely well-programmed and they work like a charm, but they are big. You use only 10% of them, and understand about 1%.

That's why I prefer an atomistic approach, where you only require what you really need. This way, you always know what happens. The micro-libraries only do what you want them to do, so they won't interfere. In a context where the end user knows which features are added, extending native types can be considered safe.

TL;DR When in doubt, don't extend native types. Only extend a native type if you're 100% sure that the end user will know about and want that behavior. In no case manipulate a native type's existing functions, as doing so would break the existing interface.

If you decide to extend the type, use Object.defineProperty(obj, prop, desc); if you can't, use the type's prototype.
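
A minimal sketch of that recommendation (the method name is invented for illustration):

// Object.defineProperty creates a non-enumerable property by default,
// so the method stays out of for...in loops and Object.keys().
Object.defineProperty(String.prototype, 'myappCapitalize', {
  value: function () {
    return this.charAt(0).toUpperCase() + this.slice(1);
  },
  writable: true,
  configurable: true,
});

"hello".myappCapitalize(); // "Hello"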


I originally came up with this question because I wanted Errors to be sendable via JSON, so I needed a way to stringify them. error.stringify() felt way better than errorlib.stringify(error), as the second construct suggests I'm operating on errorlib and not on error itself.
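
For context on why a helper is needed at all: an Error's own properties (message, stack) are non-enumerable, so a plain JSON.stringify(new Error("boom")) yields "{}". A standalone sketch (the function name is made up):

function stringifyError(error) {
  // Copy the non-enumerable fields into a plain object first.
  return JSON.stringify({
    name: error.name,
    message: error.message,
    stack: error.stack,
  });
}

stringifyError(new Error("boom"));
// '{"name":"Error","message":"boom","stack":"Error: boom..."}'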


In my opinion, it's a bad practice. The major reason is integration. Quoting should.js docs:

OMG IT EXTENDS OBJECT???!?!@ Yes, yes it does, with a single getter should, and no it won't break your code

Well, how can the author know? What if my mocking framework does the same? What if my promises lib does the same?

If you're doing it in your own project, that's fine. But for a library, it's bad design. Underscore.js is an example of the thing done the right way:

var arr = [];
_(arr).flatten()
// or: _.flatten(arr)
// NOT: arr.flatten()

If you look at it on a case by case basis, perhaps some implementations are acceptable.

String.prototype.slice = function slice(me) {
  return me;
}; // Definite risk.

Overwriting already-created methods creates more issues than it solves, which is why avoiding this practice is commonly advised in many programming languages. How are devs to know that the function has been changed?

String.prototype.capitalize = function capitalize(){
  return this.charAt(0).toUpperCase() + this.slice(1);
}; // A little less risk.

In this case we are not overwriting any known core JS method, but we are extending String. One argument in this post asked how a new dev is supposed to know whether this method is part of core JS, or where to find its docs. And what would happen if the core JS String object were to get a method named capitalize?

What if instead of adding names that may collide with other libraries, you used a company/app specific modifier that all the devs could understand?

String.prototype.weCapitalize = function weCapitalize(){
  return this.charAt(0).toUpperCase() + this.slice(1);
}; // Marginal risk.

var myString = "hello to you.";
myString.weCapitalize();
// => Hello to you.

If you continued to extend other objects, all devs would encounter them in the wild with (in this case) the we prefix, which would tell them it was a company/app-specific extension.

This does not eliminate name collisions, but it does reduce the possibility. If you determine that extending core JS objects is right for you and/or your team, perhaps this approach is too.


Extending prototypes of built-ins is indeed a bad idea. However, ES2015 introduced a new technique that can be utilized to obtain the desired behavior:

Utilizing WeakMaps to associate types with built-in prototypes

The following implementation extends the Number and Array prototypes without touching them at all:

// new types

const AddMonoid = {
  empty: () => 0,
  concat: (x, y) => x + y,
};

const ArrayMonoid = {
  empty: () => [],
  concat: (acc, x) => acc.concat(x),
};

const ArrayFold = {
  reduce: xs => xs.reduce(
    type(xs[0]).monoid.concat,
    type(xs[0]).monoid.empty()
  )
};


// the WeakMap that associates types with prototypes

const types = new WeakMap();

types.set(Number.prototype, {
  monoid: AddMonoid
});

types.set(Array.prototype, {
  monoid: ArrayMonoid,
  fold: ArrayFold
});


// auxiliary helpers to apply functions of the extended prototypes

const genericType = map => o => map.get(o.constructor.prototype);
const type = genericType(types);


// mock data

const xs = [1,2,3,4,5];
const ys = [[1],[2],[3],[4],[5]];


// and run

console.log("reducing an Array of Numbers:", ArrayFold.reduce(xs) );
console.log("reducing an Array of Arrays:", ArrayFold.reduce(ys) );
console.log("built-ins are unmodified:", Array.prototype.empty);

As you can see, even primitive prototypes can be extended by this technique. It uses a map structure and object identity to associate types with built-in prototypes.

My example enables a reduce function that expects only an Array as its single argument, because it can extract from the Array's first element the information of how to create an empty accumulator and how to concatenate elements with this accumulator.

Please note that I could have used the normal Map type, since weak references don't make sense when they merely represent built-in prototypes, which are never garbage collected. However, a WeakMap isn't iterable and can't be inspected unless you have the right key. This is a desired feature, since I want to avoid any form of type reflection.


One more reason why you should not extend native objects:

We use Magento, which uses prototype.js and extends a lot of stuff on native objects. This works fine until you decide to bring in new features, and that's where big troubles start.

We introduced Web Components on one of our pages, and webcomponents-lite.js decided to replace the whole (native) Event object in IE (why?). This of course breaks prototype.js, which in turn breaks Magento. (Until you find the problem, you may invest a lot of hours tracing it back.)

If you like trouble, keep doing it!


I can see three reasons not to do this (from within an application, at least), only two of which are addressed in existing answers here:

  1. If you do it wrong, you'll accidentally add an enumerable property to all objects of the extended type. Easily worked around using Object.defineProperty, which creates non-enumerable properties by default (see the sketch after this list).
  2. You might cause a conflict with a library that you're using. Can be avoided with diligence; just check what methods the libraries you use define before adding something to a prototype, check release notes when upgrading, and test your application.
  3. You might cause a conflict with a future version of the native JavaScript environment.
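
A quick demonstration of point 1 (the method names here are invented):

// Plain assignment creates an enumerable property on the prototype...
Array.prototype.myappFirst = function () { return this[0]; };

for (const key in [1, 2]) console.log(key); // "0", "1", "myappFirst"

// ...whereas Object.defineProperty is non-enumerable by default.
Object.defineProperty(Array.prototype, 'myappLast', {
  value: function () { return this[this.length - 1]; },
});

for (const key in [1, 2]) console.log(key); // still "0", "1", "myappFirst"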

Point 3 is arguably the most important one. You can make sure, through testing, that your prototype extensions don't cause any conflicts with the libraries you use, because you decide what libraries you use. The same is not true of native objects, assuming that your code runs in a browser. If you define Array.prototype.swizzle(foo, bar) today, and tomorrow Google adds Array.prototype.swizzle(bar, foo) to Chrome, you're liable to end up with some confused colleagues who wonder why .swizzle's behaviour doesn't seem to match what's documented on MDN.

(See also the story of how MooTools' fiddling with prototypes they didn't own forced the ES6 method Array.prototype.contains to be renamed to includes to avoid breaking the web.)

This is avoidable by using an application-specific prefix for methods added to native objects (e.g. defining Array.prototype.myappSwizzle instead of Array.prototype.swizzle), but that's kind of ugly; the problem is just as well solved by using standalone utility functions instead of augmenting prototypes.
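
For instance (a sketch; swizzle's behaviour is invented here purely for illustration):

// A standalone utility avoids touching native prototypes entirely.
function swizzle(arr, i, j) {
  const copy = arr.slice();
  [copy[i], copy[j]] = [copy[j], copy[i]]; // swap two elements
  return copy;
}

swizzle([1, 2, 3], 0, 2); // [3, 2, 1], with no change to Array.prototype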