Polyfills have a PR problem

A recent Twitter discussion with an old colleague pointed to a problem the Extensible Web may have in gaining traction: the bad reputation that polyfills have with some experienced front-end developers. They are seen as memory and CPU hogs, applied without much thought as an easy way around a problem that could be better solved by a more sophisticated approach. Most of us will have experienced a site that applies polyfills too liberally: performance, particularly in older versions of IE, slows to a crawl, with long loading times, stuttering scrolling, and freezes.

I think the roots of this problem are twofold. Firstly, some developers do tend to use too many polyfills without enough consideration. It’s easy to understand how this occurs. Polyfills can seem like a silver bullet for achieving cross-browser compatibility, and to developers who are less experienced in front-end work, or who are working to a tight deadline, they must seem a godsend. After all, we’re told that a Not Invented Here attitude is an anti-pattern, right? Good developers reuse code wherever they can, and don’t waste time developing components outside their core competencies. In my experience of server-side development in languages such as Java and C#/.NET, projects can easily have dozens of dependencies on third-party libraries and components, and this approach is perhaps carried over by many developers who cross-skill.

The second problem is historical. Most of the earliest polyfills were aimed squarely at working around missing features in IE6, 7 and 8, such as support for SVG, gradients, canvas, and alpha-channel PNGs. Unfortunately, to fix these problems they often relied on heavy scripting and proprietary features like VML, element behaviours and CSS filters that were very slow in those versions of IE, which predate JITed JavaScript and hardware acceleration becoming the norm in web browsers. While their emulation of new features was impressive from a technical standpoint, it usually came at too high a cost in performance.

Where the kind of polyfills/protofills envisaged by the Extensible Web Manifesto differ from those of the past, in my opinion, is that they will take full advantage of the enhanced performance and capabilities now provided by browsers, and they will build on new low-level APIs that let them be implemented efficiently. Old browsers weren’t powerful or flexible enough to build sophisticated new features entirely in script, but modern ones are, or are at least getting there. I think advocacy around the extensible web should focus on two things:

1. These aren’t your grandfather’s polyfills

An old polyfill would use any and every trick in the book to emulate a specified API, whereas new ones are about providing a performant, usable library that also helps to explore and refine an unfinished standard. A modern polyfill/protofill should be seen as just another library, like jQuery, Angular or Underscore: one that provides a useful API which also happens to coincide with a specification.
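To make the distinction concrete, here is a minimal sketch of the guarded-polyfill pattern, using requestAnimationFrame as the example feature: detect native support first, fall back to vendor-prefixed implementations, and only emulate with a timer as a last resort. On browsers that already support the feature, the polyfill costs nothing beyond a single check.

```js
(function (global) {
  // Native support: nothing to patch, so bail out immediately.
  if (global.requestAnimationFrame) { return; }

  global.requestAnimationFrame =
    // Prefer vendor-prefixed native implementations where they exist.
    global.webkitRequestAnimationFrame ||
    global.mozRequestAnimationFrame ||
    // Last resort: a crude timer aiming at ~60fps. The native API passes a
    // high-resolution timestamp; Date.now() is only an approximation.
    function (callback) {
      return global.setTimeout(function () {
        callback(Date.now());
      }, 1000 / 60);
    };
}(window));
```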

2. You should measure performance, just like with any library

Any dependency you take on a third-party component is a trade-off between the benefits it brings and the costs it imposes on loading and rendering performance. Developers shouldn’t blindly use every poly/protofill and library available, but nor should they be afraid to use those that offer real benefits. You should measure performance using the profiling tools available and decide whether the trade-off is worth it for the platforms you wish to target.
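As a rough illustration, even a crude timing loop can tell you whether a polyfilled API is cheap enough for your targets before you reach for the full profiler. Here polyfilledFeature() is a hypothetical stand-in for whatever API the polyfill provides:

```js
// Average the cost of repeated calls to a function under test.
// performance.now() is available from IE10 onwards; substitute Date.now()
// if you need to run this on older browsers.
function measure(label, fn, iterations) {
  var start = performance.now();
  for (var i = 0; i < iterations; i++) {
    fn();
  }
  var elapsed = performance.now() - start;
  console.log(label + ': ' + (elapsed / iterations).toFixed(4) + ' ms per call');
}

measure('polyfilled feature', function () {
  polyfilledFeature(); // hypothetical: the API the polyfill provides
}, 1000);
```

Micro-benchmarks like this are noisy, so treat the numbers as a first filter and confirm anything surprising in the browser’s profiler and timeline tools.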

