Into the Wookies

I’m currently reading Into the Woods by John Yorke, a treatise on storytelling and story structure by an author with extensive experience writing for British TV. It’s a well-written and interesting book, one which gives you plenty of food for thought, even if you find yourself questioning the author’s theories. I encountered one such moment early on, and it irked me enough to write this post. It comes when Yorke turns his attention to the Star Wars films while discussing the concept of “resolution” in classical story structure, and he isn’t particularly complimentary in his analysis (emphasis mine):

“The resolution is the final judgement after the battle. If the heroes have overcome their demons, they are rewarded. ‘Hugh Grant’ learns to be assertive, James Bond saves the world — both get the girl. Often the story ends in some kind of sexual fulfilment — although even in mainstream cinema there are some interesting anomalies. In Star Wars Luke should really end up with Princess Leia, but she turns out to be his sister — his reward for vanquishing evil is fame instead. Such a subversion may go some way to explaining the film’s phenomenal success: its very sexlessness makes it digestible to children of every age — but perhaps its placing of fame on a pedestal above love says something too about the values of the society that both spawned it and still continues to nourish its success.”

Now, I’m a geek, but not an ardent fan of Star Wars. I think the prequel movies are uniformly terrible, have no interest in any of the fandom or expanded universe stuff, and of the original trilogy, I think The Empire Strikes Back is the only unreservedly great film. That said, I think Yorke’s analysis here is inaccurate and unfair. He doesn’t seem familiar with the plot of Star Wars, and is using a half-remembered version of it as a stick with which to beat cultural traits he dislikes, namely infantilism and glorification of fame. It’s a shame because, whatever its flaws, Star Wars does contain some interesting subversions of typical story beats that actually work against those traits.

The most obvious error in the above quote is the assertion that Luke’s reward for vanquishing evil is to be famous. This is, frankly, a bizarre claim. By the end of the first film, Luke has achieved a certain notoriety within the rebellion by blowing up the Death Star (he gets a medal and everything), but he’s no celebrity, just a successful pilot. At this point, we might expect him to continue on this trajectory, becoming a famous war hero and leading the rebellion, but that isn’t what happens. Instead, his story in films two and three takes a turn toward the smaller, more personal and spiritual. He leaves his comrades in the rebel alliance, journeys to Dagobah to meet Yoda and learn some Jedi stuff, and eventually discovers that the enemy he is fighting is in fact his own father.

In the third film, Luke’s arc again has nothing to do with fame, nor does he achieve any. He plays no meaningful part in the battle to blow up the second Death Star, except to jeopardise it when Darth Vader senses he is on the rebel shuttle. His story is a personal one. He overcomes the temptation of the “dark side”, of giving in to evil, and in the process succeeds in his aim of redeeming his father. To claim this arc is inspired by or rewarded with fame is untrue; it is the exact opposite. Luke sacrifices almost everything out of filial love for a man he has every reason to despise. At the trilogy’s conclusion, he watches, alone, as his father’s body burns on a funeral pyre, while elsewhere his friends celebrate their victory. He returns to the party, an almost unnoticed figure on its sidelines, and even there he is distracted by (literal) ghosts. If this is a depiction of fame, it’s a gloomy one, to say the least.

There is subversion here, but in a less vacuous direction than Yorke suggests. Luke, the character who begins the story as the apparent hero, becomes a darker and more ambiguous figure, preoccupied with his personal struggle, and is mostly incidental to how the world is eventually saved. Instead, his heroic and romantic role is gradually supplanted by the character of Han Solo, who himself changes from a selfish pirate to a responsible soldier of the rebellion over the course of the films. The romantic arc between Han and Leia that informs much of the second film also undercuts the idea that the Star Wars films are sexless. It figures less in the third film, but not as a result of juvenility. Even if elements of Return of the Jedi, such as the Ewoks, are aimed at children, the story of the protagonists by this point in the trilogy is about maturity.

When the protagonists are all reunited in the third film, the feelings of selfishness, desire, jealousy and animosity that have bubbled up throughout the previous movies are sublimated into the jobs they have to do for the rebellion, in particular its final battle to destroy the second Death Star. Only Luke retains a personal agenda. Again, any idea that the Star Wars films glorify fame is undercut by how they all participate in this battle. They are clearly important lieutenants within the rebellion’s forces, but they are not commanding its fleet or giving heroic speeches; if anything, their overall importance appears to have lessened since the end of the first film. Leia is just a soldier here, not a princess giving out medals. They are all following orders within plans laid out by others, assisted by many comrades. And their reward for vanquishing evil is simply the chance to enjoy their lives in peacetime, free from tyranny. Plus Han Solo gets the girl, of course.

It can be difficult to separate Star Wars, the cultural and commercial phenomenon, from the Star Wars trilogy of films, particularly when the former is so overpowering. Nonetheless, I think it does a disservice to those films, and to the people who made them, to judge them in light of what they became and what they inspired, rather than objectively. They are messy, flawed works of art, but they deserve to be judged on their own merits, not those of the society in which they were made.

Innovation vs execution, and the myth of the big idea

Google’s recent acquisition of Nest for $3.2bn has raised some eyebrows, with some people questioning whether the valuation is sensible, and others how innovative Nest’s products really are. One comment I read claimed they did nothing that hadn’t been thought up years ago. In terms of the valuation, it is a high figure, and perhaps there was some amount of competition between Google and other suitors to drive it that high. In terms of Nest’s innovation, I’m not really familiar enough with their products to judge how groundbreaking they are, but I think the general complaint touches on something I’ve thought about for a while: how the importance of “innovation” in technology coverage and comment is over-emphasised, how the importance of execution is under-emphasised, and how where innovation really matters in making a product is misunderstood.

The problem is this: There aren’t many good ideas that someone, somewhere won’t already have thought about and tried. The world is full of clever and motivated people looking for success. But ideas are intangible, and while intellectual property law gives them legal status as something you can buy, own and sell, you can’t make a world-conquering business from them without turning them into a real product, and that’s where the difficulty starts. Because building a successful product involves doing thousands of different things, in different areas, and doing them all as well as possible.

This is the execution of an idea, the act of turning it from an intangible thing into a reality, and getting it right is far, far harder than coming up with the original idea. Thomas Edison reputedly said that genius is 1% inspiration and 99% perspiration, which seems to sum up the problem. Edison’s approach certainly embodied this, as he would conduct a huge amount of experimentation to, for example, find the right material for light bulb filaments. Now, Nikola Tesla, who worked for Edison for a time, criticised this approach, claiming that Edison would waste huge effort on experimentally discovering things he could have determined through simple (to Tesla’s intellect) calculation. But even here we can see that Tesla is criticising Edison’s execution of his ideas, his wasteful perspiration, rather than the ideas themselves. He thought he could have executed on the idea better.

In coverage of technology, the press frequently plays up the original innovation, the “big idea”, behind a product. Nowhere is this more apparent than in the posthumous lionisation of Steve Jobs, who is referred to as the “genius” who invented the iPhone and the iPad. It’s a simple, heroic narrative, and therefore appeals to the news media, whose job it is to turn complicated, messy reality into easily digestible stories that we can read during lunch. But the problem is, it’s wrong. The iPhone and iPad weren’t inventions; they were highly skilled executions of ideas that had long existed. Jobs’ success wasn’t coming up with the idea for a tablet computer; others, such as Microsoft, had done that long before Apple. His success was that he, with a large team of others, managed to execute on that idea in such a way as to make it a huge success.

Apple don’t exactly discourage the press narrative, of course. Their products are marketed as seamless, gleaming and indivisible units. Apple may talk a little about the fancy materials they employ, or a particular new feature or facet, but you aren’t encouraged to think of them as being a composite of parts, or the end result of a messy engineering process involving hundreds of people, or compromised by technical and economic trade-offs. They’re perfect and complete, supposedly, at least until the new model arrives. It’s easy to think of such products as springing whole into the imagination of a genius inventor, who needs only to put his vision down on paper and leave it to lesser mortals to build it.

Such geniuses may have occasionally cropped up throughout history, but they’re exceptional, in every sense of the word, and technology has long surpassed the ability of a single person, no matter their intelligence, to conceive of every part of a product like the iPhone. Instead, teams of very clever people work for a long time, doing lots of different things, and at the end, perhaps, a good product emerges. Or it doesn’t. And that’s the other source of faulty thinking about innovation: the one that led to the iPad being dismissed by many people before it launched, because the idea had never worked in the past, so the idea itself was determined to be bad.

In fact, only the execution of the idea had been bad. Whether it was mistakes by those building them, bad marketing, limitations on the technology available, or simple bad luck, these earlier products failed. And many people attributed that failure to the inherent badness of the idea itself, and insisted that it would never be a success. In retrospect, we can see they were wrong, but that doesn’t help us unless we learn how to avoid making the same mistake in future.

To properly judge an idea, we need to differentiate between its inherent qualities and those that have become attached to it via cultural association. We shouldn’t forget or ignore the latter; they can still be very useful, but we should recognise that they can also change. Microsoft wanted to make tablet computers, but their idea of a tablet computer contained preconceptions formed by their experience with PCs and Windows that they couldn’t, or wouldn’t, jettison, and that stopped them from creating a successful product.

Similarly, we need to recognise which parts of a product are a necessary consequence of its core idea, and which are the result of a particular execution. The overwhelming majority will be of the latter type, which should prompt us to ask whether doing one or many of them differently would have made the difference between a runaway success and a miserable failure.

The matter of making particular decisions differently, or changing certain things, brings us to the final point about innovation in building technology. The process of executing on an idea itself involves coming up with a huge number of further ideas. These can include choosing a name and a colour palette, finding new solutions and workarounds for tricky technical problems, or clever ways to trade off competing constraints. All of these ideas are examples of innovation, of a less flashy but more important type than coming up with the big idea. It is products with a lot of this type of innovation that are truly groundbreaking, but it is the type of innovation that gets less coverage in the press and less recognition in the wider culture. It tends to get put in the box labelled “engineering” and forgotten about.

The Oculus Rift is, I suspect, the next product that may vividly illustrate all of the above. Virtual reality was considered a winning idea, but the problems and failures of its actual manifestation in products eventually soiled it by association. Apart from a few die-hards, most people wrote it off as a joke. Now it looks like those same die-hards may get the last laugh, by building a product that does for VR what the iPad did for tablet computing, at least to some extent.

What impresses me most about Oculus is that they seem to really understand the importance of execution, sometimes to the frustration of people eager to get their hands on a headset. Palmer Luckey could have easily spun Kickstarter cash into a straightforward, commercialised version of his original prototype. It would have made money and been reviewed as the best attempt at VR that anyone had made. But it wouldn’t have been a world beater. It would have been low resolution, and made people sick, and had poor software support. It wouldn’t have changed people’s opinion that VR was a niche idea, with no prospect for broader success.

What Oculus have actually done is expend the time, effort and (investor) money to try and get everything right. They’ve hired a large team, with expertise in hardware, software and business. They’ve kept their promises to deliver dev kits, but resisted the urge to rush out a commercial product. Instead they’ve iterated on their technology and built up their infrastructure. They’re making considered trade-offs to ensure that no one aspect of the product, such as weight, resolution, field of view, latency or price, dominates to the detriment of the others. The results, by all accounts, are spectacular already, and they still say they have a long way to go.

Because Oculus have taken the time to get the execution right, it looks like the final product will be something very special indeed. It’s a telling contrast with many of the supposed Oculus competitors that get hyped up in the tech press from time to time. They tend to be small teams working on a prototype with a single unique selling point, whether it’s resolution or field of view, that is supposedly a hundred times better than the Rift’s. The problem is, what sacrifices are they making in other areas of their products? And are they investing in the other things, like business organisation and infrastructure, that are key to success despite not being an actual part of the product? It doesn’t appear that any of them are.

If Oculus succeed, it will be because of their ability to apply the lessons of other successful products and execute well on an idea, not because of the inherent brilliance of the idea or the genius of their founder. And the same lesson is important for anyone building anything in technology: focus less on coming up with the world-beating idea, and more on collaborating with others to build something world-beating a single step at a time.

JavaScript subclassing using Object.create

In my previous post, I talked about how Microsoft’s TypeScript was able to build simple class-based inheritance on top of JavaScript’s prototypal inheritance. To recap, the compiler includes a short helper function named __extends that handles rejigging the prototype chain between the sub-class and the super-class to achieve the desired inheritance.

var __extends = this.__extends || function (d, b) {
    for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
    function __() { this.constructor = d; }
    __.prototype = b.prototype;
    d.prototype = new __();
};
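
To see what this helper achieves, here is a hand-written example of calling it directly (my own illustration with invented names, not actual TypeScript compiler output):

function Animal() {}
Animal.prototype.speak = function () { return "..."; };

function Dog() {
    Animal.call(this); // run the "super" constructor on the new instance
}
__extends(Dog, Animal);

var rex = new Dog();
rex.speak();           // "...", found via the prototype chain
rex instanceof Animal; // true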

The trickiness of this pattern can help us understand the impetus for one of JavaScript’s newer features, Object.create. When you first encounter this method, you might wonder why JavaScript needs another way to create objects when it already has object literal syntax and constructor functions. Where Object.create differs from those options is that it lets you provide, as the first argument to the method, an object that will become the new object’s prototype.

Remember that there is a difference between a constructor function’s public prototype property and an object’s internal [[Prototype]] property. When JavaScript is looking up properties on an object, it uses the latter, but traditionally the only standardised way to control it for a new object has been the pattern applied by __extends: you create a function with a public prototype property, then apply the new operator to that function to create a new object. When the new operator is used with a function, the runtime sets the [[Prototype]] property of the new object to the object referenced by the function’s public prototype property.
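
To make that concrete, here is a minimal sketch of my own (the Widget name is invented for illustration):

function Widget() {}
Widget.prototype.size = 10;

var w = new Widget();

// The new operator set w's internal [[Prototype]] to the object referenced
// by Widget.prototype, so the lookup of size falls through to it
w.size; // 10
Widget.prototype.isPrototypeOf(w); // true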

While this approach to controlling the [[Prototype]] works, it is a little opaque and wasteful, requiring the declaration of a new function simply for the purpose of controlling this internal property. With Object.create, the extra function is no longer required, as the [[Prototype]] can be controlled directly. A dead simple example:

var animal = {
        legs: 4
    },
    dog;

dog = Object.create(animal);

dog.legs === 4; // true

The end result couldn’t be simpler: an object dog whose [[Prototype]] is animal.

We can extend this to reproduce the prototype-chain wiring of __extends without the faff of an additional function.

function SuperClass() {}
function SubClass() {}

// Give SubClass.prototype a [[Prototype]] of SuperClass.prototype, then
// point the constructor property back at SubClass (it would otherwise
// resolve to SuperClass through the chain)
SubClass.prototype = Object.create(SuperClass.prototype);
SubClass.prototype.constructor = SubClass;
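
As a quick sanity check of my own (not from the original post), instances created with the pattern behave as you would expect:

var instance = new SubClass();

instance instanceof SubClass;      // true
instance instanceof SuperClass;    // true
instance.constructor === SubClass; // true, thanks to the constructor fix-up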

I think you’ll agree that this is a much friendlier pattern than what __extends does, and in fact only today I found it recommended in feedback from the W3C TAG to the Web Audio working group, referred to as the “subclassing pattern”. So why didn’t Microsoft use it? Unfortunately, Object.create isn’t supported in Internet Explorer 8 and below, meaning TypeScript has to use the older pattern in order to maximise compatibility. Since __extends is compiler-generated JavaScript, its readability hardly matters anyway, as TypeScript developers will only see the class syntax of that language.

__proto__

I said above that there was traditionally no standardised way to directly access or change an existing object’s [[Prototype]]. However, some browsers have long supported a way of doing exactly that, through the __proto__ property. Although not part of any official specification, this property became a de facto standard, and gained support in all the major browsers except IE. It seems this property was controversial, as it was considered an abstraction error, and mutable prototypes were argued to cause implementation problems. There was talk of deprecating and eventually removing __proto__, while standardising equivalent capability, first through the introduction of Object.create and Object.getPrototypeOf in ECMAScript 5, and now Object.setPrototypeOf in ECMAScript 6 [1]. But __proto__ has not gone away yet, and in fact it appears from pre-release builds that Internet Explorer 11 will support it, so who knows if it will ever really die.
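
For completeness, here is a short sketch of my own showing the difference in practice (bearing in mind that __proto__ is non-standard, and Object.setPrototypeOf has yet to appear in browsers):

var animal = { legs: 4 };
var cat = {};

// Non-standard, but a de facto standard everywhere except IE:
// mutate an existing object's [[Prototype]]
cat.__proto__ = animal;
cat.legs; // 4

// The standardised ES5 equivalents set the [[Prototype]] at creation
// time, or read it, but cannot change it afterwards
var horse = Object.create(animal);
Object.getPrototypeOf(horse) === animal; // true

// The proposed ES6 way to change it after creation would be:
// Object.setPrototypeOf(cat, animal);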


  1. setPrototypeOf seems locked in for ES6, despite Brendan Eich saying in 2011 that it wasn’t going to happen, although it appears no browser has actually implemented it yet.