Facebookulus Rift

I was surprised by Facebook’s acquisition of Oculus, and so was everybody else, it seems. That somebody bought them isn’t a surprise, especially with Sony stepping up its competitive consumer VR efforts, but Facebook probably wasn’t anybody’s most expected suitor. They weren’t anybody’s favoured suitor either, judging by the apoplectic reaction to the announcement amongst many (most?) gamers and VR enthusiasts. Myself, I’m ambivalent. I can understand concerns about Facebook’s policies towards its users and third-party developers, but I also think that, if you put those aside, Oculus stands to gain a lot from the deal (beyond the immediate pay day for the founders and investors, obviously). I also think consumer VR in general stands to gain, regardless of what happens to Oculus and the Rift in the long term.

It’s easy to imagine a nightmare “Facebooked” VR where you have to sign in to Facebook to use the Rift, your experience is repeatedly interrupted by inducements to ‘like’ and ‘share’ your current activity, and you get invitations to play Candy Crush VR with a guy you met once at a work conference eight years ago. In fact, there’s probably an amateur film-maker or two working on viral videos that portray this ‘Facebook VR’ doomsday scenario right now. Similarly, restrictions on the Oculus platform could place onerous demands on third-party developers to use and integrate Facebook’s services, hampering their ability to innovate or build truly independent products.

However, as cynical as Facebook’s policies towards their users and their platform are, I don’t think their management is so stupid as to immediately ruin a nascent and potentially highly lucrative new platform. Remember, Facebook beat Myspace because, when it first appeared, it was a far, far better experience, with a lot less bullshit. Likewise, Twitter didn’t include promoted tweets until it got big, and YouTube was ad-free for years. The management of these companies know that intrusive branding, advertisements and tie-ins are toxic to the initial experience. Once the platform has achieved critical mass, then they start to introduce these elements gradually, but not before.

In the short to medium term, I don’t think Facebook will mess too much with the Rift. They’ll be smart enough to remain mostly hands-off until consumer VR really takes off, but during that time Oculus will get some huge benefits that would not have been available to them had they remained independent. Most importantly, they will have a basically infinite supply of cash with which to hire talent and build custom hardware components. The latter is of particular importance. The new wave of VR was bootstrapped as much by the mobile phone industry as it was by crowdfunded cash, but relying on the vagaries of a separate, fast-moving industry is a dangerous situation to be in. Mobile phone screen sizes and technologies are constantly changing, and not necessarily in a direction that fits with the needs of VR. The same goes for components such as cameras and accelerometers.

Oculus now have the scope to order their own components to their own specifications, such as ultra-high-resolution screens. That’s good for them, but also for other, smaller VR companies. Because once there is an established manufacturing base, other VR startups may be able to piggy-back on Oculus’ suppliers, in the same way that they piggy-backed on those of the mobile manufacturers. If consumer VR becomes a success, specialist manufacturing will appear eventually anyway, but this way it could happen far sooner than it otherwise would have.

The second benefit is that Facebook’s infrastructure also presents a great opportunity for Oculus to experiment with large-scale VR experiences. John Carmack has long talked about his dreams of a “Metaverse” – an immersive, massively-multi-user alternative world. VR hardware can provide the immersion, but you will also need some serious computing power to host the servers themselves and make them accessible worldwide, and Facebook’s experience with data-centres and large-scale availability could be very beneficial there.

Finally, Facebook’s deep pockets should allow Oculus to price their initial products more aggressively, in order to quickly build a mass market. The sooner consumer VR is a proven technology, the sooner more companies will get serious about competing, and the faster the industry will progress. The eventual winners out of all of this may be Facebook/Oculus, Sony, Microsoft, Google, Apple, or someone else entirely. The first movers are rarely the ones who dominate an industry, but they’re necessary to get things started, and the more capable their initial product, the better, because it sets the standard that their competitors have to meet.

Innovation vs execution, and the myth of the big idea

Google’s recent acquisition of Nest for $3.2bn has raised some eyebrows, with some people questioning whether the valuation is sensible, and others how innovative Nest’s products really are. One comment I read claimed they did nothing that hadn’t been thought up years ago. On the valuation, it is a high figure, and perhaps some amount of competition between Google and other suitors drove it that high. On Nest’s innovation, I’m not really familiar enough with their products to judge how groundbreaking they are, but I think the general complaint touches on something I’ve thought about for a while: how the importance of “innovation” in technology coverage and comment is over-emphasised, the importance of execution is under-emphasised, and where innovation really matters in making a product is misunderstood.

The problem is this: There aren’t many good ideas that someone, somewhere won’t already have thought about and tried. The world is full of clever and motivated people looking for success. But ideas are intangible, and while intellectual property law gives them legal status as something you can buy, own and sell, you can’t make a world-conquering business from them without turning them into a real product, and that’s where the difficulty starts. Because building a successful product involves doing thousands of different things, in different areas, and doing them all as well as possible.

This is the execution of an idea, the act of turning it from an intangible thing into a reality, and getting it right is far, far harder than coming up with the original idea. Thomas Edison reputedly said that genius is 1% inspiration and 99% perspiration, which seems to sum up the problem. Edison’s approach certainly embodied this, as he would conduct a huge amount of experimentation to, for example, find the right material for light bulb filaments. Now, Nikola Tesla, who worked for Edison for a time, criticised this approach, claiming that Edison would waste huge effort on experimentally discovering things he could have determined through simple (to Tesla’s intellect) calculation. But even here we can see that Tesla is criticising Edison’s execution of his ideas, his wasteful perspiration, rather than the ideas themselves. He thought he could have executed on the idea better.

In coverage of technology, the press frequently plays up the original innovation, the “big idea”, behind a product. Nowhere is this more apparent than in the posthumous lionisation of Steve Jobs, who is referred to as the “genius” who invented the iPhone and the iPad. It’s a simple, heroic narrative, and therefore appeals to the news media, whose job it is to turn complicated, messy reality into easily digestible stories that we can read during lunch. But the problem is, it’s wrong. The iPhone and iPad weren’t inventions; they were highly skilled executions of ideas that had long existed. Jobs’ success wasn’t coming up with the idea for a tablet computer; others, such as Microsoft, had done that long before Apple. His success was that he, with a large team of others, managed to execute on that idea in such a way as to make it a huge success.

Apple don’t exactly discourage the press narrative, of course. Their products are marketed as seamless, gleaming and indivisible units. Apple may talk a little about the fancy materials they employ, or a particular new feature or facet, but you aren’t encouraged to think of them as being a composite of parts, or the end result of a messy engineering process involving hundreds of people, or compromised by technical and economic trade-offs. They’re perfect and complete, supposedly, at least until the new model arrives. It’s easy to think of such products as springing whole into the imagination of a genius inventor, who needs only to put his vision down on paper and leave it to lesser mortals to build it.

Such geniuses may have occasionally cropped up throughout history, but they’re exceptional, in every sense of the word, and technology has long surpassed the ability of a single person, no matter their intelligence, to conceive of every part of a product like the iPhone. Instead, teams of very clever people work for a long time, doing lots of different things, and at the end, perhaps, a good product emerges. Or it doesn’t. And that’s the other source of faulty thinking about innovation. The one that led to the iPad being dismissed by many people before it launched, because the idea had never worked in the past, so the idea itself was determined to be bad.

In fact, only the execution of the idea had been bad. Whether it was mistakes by those building them, bad marketing, limitations of the technology available, or simple bad luck, these earlier products failed. And many people attributed that failure to the inherent badness of the idea itself, and insisted that it would never be a success. In retrospect, we can see they were wrong, but that doesn’t help us unless we learn how to avoid making the same mistake in future.

To properly judge an idea, we need to differentiate between its inherent qualities and those that have become attached to it via cultural association. We shouldn’t forget or ignore the latter, as they can still be very useful, but we should recognise that they can also change. Microsoft wanted to make tablet computers, but their idea of a tablet computer contained preconceptions formed by their experience with PCs and Windows that they couldn’t, or wouldn’t, jettison, and that stopped them from creating a successful product.

Similarly, we also need to properly recognise those parts of a product that are a necessary consequence of its core idea, and those that are a result of a particular execution. The overwhelming majority will be of the latter type, which prompts us to question whether doing one or many of them differently would have resulted in a product being a runaway success or a miserable failure.

The matter of making particular decisions differently, or changing certain things, brings us to the final point about innovation in building technology. The process of executing on an idea itself involves coming up with a huge number of further ideas. These can include choosing a name and a colour palette, finding new solutions and workarounds for tricky technical problems, or clever ways to trade off competing constraints. All of these ideas are examples of innovation, of a less flashy but more important type than coming up with the big idea. Products with a lot of this type of innovation are the truly groundbreaking ones, but it is the type of innovation that gets less coverage in the press and less recognition in the wider culture. It tends to get put in the box labelled “engineering” and forgotten about.

The Oculus Rift is, I suspect, the next product that may vividly illustrate all of the above. Virtual reality was considered a winning idea, but the problems and failures of its actual manifestation in products eventually tainted it by association. Apart from a few die-hards, most people wrote it off as a joke. Now it looks like those same die-hards may get the last laugh, by building a product that does for VR what the iPad did for tablet computing, at least to some extent.

What impresses me most about Oculus is that they seem to really understand the importance of execution, sometimes to the frustration of people eager to get their hands on a headset. Palmer Luckey could have easily spun Kickstarter cash into a straightforward, commercialised version of his original prototype. It would have made money and been reviewed as the best attempt at VR that anyone had made. But it wouldn’t have been a world beater. It would have been low resolution, and made people sick, and had poor software support. It wouldn’t have changed people’s opinion that VR was a niche idea, with no prospect for broader success.

What Oculus have actually done is to expend the time, effort and (investor) money to try and get everything right. They’ve hired a large team, with expertise in hardware, software and business. They kept their promises to deliver dev kits, but they’ve resisted the urge to rush out a commercial product. Instead they’ve iterated on their technology and built up their infrastructure. They’re making considered trade-offs to ensure that no one aspect of the product, such as weight, resolution, field of view, latency, or price dominates to the detriment of the others. The results, by all accounts, are spectacular already, and they still say they have a long way to go.

Because Oculus have taken the time to get the execution right, it looks like the final product will be something very special indeed. It’s a telling contrast with many of the supposed Oculus competitors who get hyped up in the tech press occasionally. They tend to be small teams working on a prototype with a single unique selling point, whether it’s resolution or FOV, that is supposedly 100x better than the Rift. The problem is, what sacrifices are they making in other areas of their products? And are they investing in the other things, like business organisation and infrastructure, that are key to success despite not being an actual part of the product? It doesn’t appear that any of them are.

If Oculus succeed, it will be because of their ability to apply the lessons of other successful products and execute well on an idea, not because of the inherent brilliance of the idea or the genius of their founder. And the same lesson applies to anyone building anything in technology: focus less on coming up with the world-beating idea, and more on collaborating with others to build something world-beating a single step at a time.

Technology Prognosticism 2014 Edition

Early January is the time when, like everyone else, I like to imagine that the western cultural method of dividing time into years based on an arbitrary marker is meaningful enough to talk about them as discrete units, wherein trends and events can be seen as belonging to a particular year, giving it a particular character different to others that precede or follow it, rather than just happening to happen at least partly within it.

All of which is a needlessly complicated way of saying the new year is a good excuse to talk about what will be hot in 2014. For geeks, that usually means ruminating about specific new technologies that we’re looking forward to playing with, and also engaging in starry eyed, nonspecific burbling about technological trends that we’re sure will disrupt established business models, hasten the singularity, and generate lots more cash for Silicon Valley entrepreneurs.

Forecasting is a notoriously unreliable pastime, especially the further forward you go, the more variables you have to consider, and the more abstract the field. Accurate long-term prediction about technology, medicine, politics, economics, and sport is almost impossible, but that doesn’t stop us trying. Partly because the alternative, just accepting that the universe is a chaotic, random sea of noise, is kinda scary. But also – and I suspect this is actually the main reason – because it’s just fun.


I have recently finished my brief stint as a contractor and taken a full-time position as a UI developer at Logical Glue Ltd, a start-up at the forefront of applied machine learning and predictive analytics. I’ve never worked for a startup before, so it promises to be challenging, exciting, educational, and all the other CV buzzwords. It also marks my first job in almost a decade of web development where I am working solely on the front-end. After many years of wrangling C#/ASP.NET code, it’s odd to be working purely on UI design, HTML/CSS and JavaScript, but not unpleasant. My career has definitely been leaning in this direction.

I expect to be working for Logical Glue for the foreseeable future, and hopefully we’ll be launching the beta version of our predictive modelling tool very soon, which will include a free tier that allows developers to experiment with the platform at no cost. You can sign up for the beta programme now, and while the product descriptions may seem a little abstract right now, I assure you the real thing is very tangible. It is a powerful, fully-featured web application, built on top of AngularJS and using the latest web platform technologies. I’m very proud of what we’ve achieved so far, and I predict we’ll take things much further in 2014.

Web Components

2013 saw a lot of talk and promotion around Web Components, and having been a fan of them since the early days, that’s very pleasing. Google’s Polymer project to polyfill the specs has been instrumental in changing them from a far-future pipedream to something that can be used today. In 2014, I expect to see interesting Polymer-based projects popping up, and for Chrome and Firefox to complete their implementation of the core specs.

If Chrome and Firefox both support components natively, that will leave Safari and IE with no support at all. Right now, I would predict Apple are the most likely candidates to begin implementing web components, if only because there has recently been considerable mailing-list feedback on the specs from Apple’s WebKit engineers. Not all of it positive, by any means, but they’re clearly interested. Apple have also started implementing the Web Animations framework within WebKit.

For their part, Microsoft have been pretty quiet on the subject of web components, as far as I can tell. It seems likely that the Internet Explorer team is in a bit of a transitional phase at the moment, with IE11 having shipped, long-time leader Dean Hachamovitch having moved on, and all the other upheavals happening at Microsoft. Hopefully IE development doesn’t enter another fallow period, but if it does, we can at least be glad that IE11 is a very, very good browser. In rendering and DOM performance in particular, IE just blows away the competition, in my experience.


While Microsoft is usually silent on the future direction of Internet Explorer, the partially open nature of Chrome’s development process makes it a lot easier to augur its future from all the chatter. What’s clear is that Google are focusing very much on performance, in particular mobile performance. They’re betting on the shift towards mobile computing continuing unabated, with desktop PCs likely being consigned to a (largish) niche over time.

This focus on performance comes down to how to make the web’s rendering model fast on mobile devices, which means making it fast on their GPU architecture. We saw quite a bit of advice from Google’s developer advocates on this subject in 2013, such as which CSS properties are safe to animate, but in 2014 we’re likely to see a number of new techniques, such as new CSS properties, that can be used to build more performant UIs.
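The gist of that advice: properties such as `transform` and `opacity` can be animated on the compositor without forcing the browser to recalculate layout each frame, while properties like `left` and `top` cannot. A minimal sketch of the difference (the helper and element names here are mine, not any browser API):

```javascript
// Animating a layout property like 'left' forces the browser to
// re-run layout on every frame. Animating 'transform' (or 'opacity')
// can instead be composited on the GPU, keeping the main thread free.
// This helper just builds a compositor-friendly transform string:
function slideTransform(xPx, yPx) {
  return 'translate3d(' + xPx + 'px, ' + yPx + 'px, 0)';
}

// In a browser you would apply it per animation frame, e.g.:
//   box.style.transform = slideTransform(120, 0);  // cheap, composited
//   box.style.left = '120px';                      // triggers layout, avoid
```

The `translate3d` form is the commonly recommended way to hint that the element should get its own compositing layer.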

It also appears that Chrome will become an increasingly core part of Android. I’m not too familiar with Android as a development platform, but it appears Google has some concerns about its current Java/Dalvik app model exposing it to continuing IP problems from Oracle and others. As such, they want to give installable, packaged Chrome apps parity with regular ones. In that sense, Android will be adopting a partially Firefox OS-like approach. All of this is good news for web developers, in theory, although it remains to be seen how much Google will try to use proprietary JS APIs exposing features of Chrome, Android and Google Play Services to lock packaged apps in to Android.

I expect a big push on packaged Chrome apps for Android at some point in 2014, most likely at Google I/O.

Web Frameworks

While Web Components got a lot of attention during 2013, the big story in day-to-day web development was probably the rise of AngularJS. I’ll admit I’m biased in this respect, as I’ve been using Angular in anger for the past six months, but there has obviously been a huge upswing in interest in and usage of the framework over the last year, with it blowing past competitors such as Ember, Backbone, Knockout, etc. It’s not at a jQuery level of ubiquity yet, but it’s increasingly the default choice for MVC/MVVM development.

I don’t see that changing in 2014, though in the long term this year may be seen as the high water mark of Angular’s ascendancy. By 2015, Angular will likely be nearing its 2.0 release, which will see it shift from providing a component model of its own to adding value on top of Web Components. In 2014, though, I expect it will dominate a great deal of commercial web application development on its own.
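For readers who haven’t used it, the heart of Angular’s appeal is binding between model and view: change the model, and the view updates. The snippet below is not Angular’s API, just a hand-rolled sketch of the observable-scope pattern that frameworks like Angular automate (all names are mine):

```javascript
// A minimal observable model: watcher callbacks re-run whenever a
// property changes, which is roughly what Angular's digest cycle
// automates. Names here are illustrative, not Angular APIs.
function Scope() {
  this.values = {};
  this.watchers = {};
}

Scope.prototype.watch = function (key, fn) {
  (this.watchers[key] = this.watchers[key] || []).push(fn);
};

Scope.prototype.set = function (key, value) {
  this.values[key] = value;
  (this.watchers[key] || []).forEach(function (fn) { fn(value); });
};

// Usage: bind a "view" (here just a string) to the model.
var scope = new Scope();
var rendered = '';
scope.watch('name', function (v) { rendered = 'Hello, ' + v + '!'; });
scope.set('name', 'world'); // rendered is now 'Hello, world!'
```

What Angular adds on top of this pattern is doing the watching and re-rendering declaratively, from expressions in your HTML, without the wiring code.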

Other Web Platform Technologies

There are a myriad of other new features coming to the web platform. There always are. But as usual, their actual impact will be tempered by their unavailability on different browsers, particularly legacy versions of IE. Hopefully, the end of support for Windows XP may also hasten the decline of IE 8.

It’s possible we might end 2014 with the majority of deployed browsers supporting the final version of Flexbox, and with a good number of them supporting Grid Layout and Regions. ECMAScript 6 support will also be an interesting area to watch; hopefully IE and Safari will move to match the support already appearing in Firefox and Chrome. Either way, the ES6 spec should at least be finished sometime in 2014.
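To give a flavour of what ES6 support means in practice, here is a short sketch using a few of the draft features: arrow functions, default parameters, template strings, destructuring, and classes. The spec was still in flux at the time, so details were subject to change:

```javascript
// Arrow function with a default parameter and a template string:
let greet = (name = 'world') => `Hello, ${name}!`;

// Destructuring pulls values out of arrays (and objects):
let [first, ...rest] = [1, 2, 3, 4];

// Classes are syntactic sugar over the existing prototype model:
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  norm() { return Math.sqrt(this.x * this.x + this.y * this.y); }
}
```

None of this gives you abilities that ES5 lacks, but it removes a lot of boilerplate, which is why transpilers were already being used to get at these features early.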

Oculus Rift

If someone had told me a few years ago that one of the most anticipated pieces of technology in 2014 would be a virtual reality headset, I would have been skeptical, to say the least. VR seemed to be one of those technologies consigned to the dustbin of history, but, like the tablet, it seems like it was just waiting for someone to do it right, and by all accounts, Oculus have done that.

The story of the Rift has been a fascinating one, with so many interesting and dramatic beats you could hardly have written it better: A plucky young founder builds a prototype device in his basement that is apparently better than those being sold for thousands by major tech companies. With the help of a crowd-funding campaign, he launches a startup to commercialise it, with the support of an industry legend, who later quits his equally legendary company to go and work for the startup.

For a while in 2013, after Oculus had sent out its developer kits, but before John Carmack announced he was joining as CTO, a backlash was building in comment threads and forums. It was clear that the developer kits, while impressive, had limitations in resolution, tracking and latency that made them too nauseating for mainstream use. Some claimed these problems were insoluble, and pointed to the fact that Carmack had gone quiet about the Rift, particularly during the 2013 QuakeCon keynote, as proof that he knew Oculus was bound to fail.

That argument was undermined somewhat when Carmack announced he was becoming CTO of Oculus. And since then, all the news has sounded very positive. By all accounts, the current prototypes of the planned consumer version of the Rift have improved massively over the developer kits in every respect. The company has also scored a huge round of further investment.

What’s impressed me most about Oculus as a company is that they clearly aren’t content with just building a cool piece of hardware. 2013 was full of news stories about supposed Oculus competitors, who were building their own headsets that supposedly blew the Rift away in some particular aspect. Yet when you dug into the story a little, you quickly realised that these were nowhere near commercialisation, and often made huge trade-offs in other areas to get the headline performance in one. None of these competitors seem to have the all-encompassing strategy that Oculus do, of building hardware that optimises for all aspects of the VR experience, and building a software platform that complements the hardware and lets people easily build great experiences for it. Nobody else seems as determined to build as large and experienced a team as Oculus have, with expertise in every area.

If, as seems very likely, the first commercial version of the Oculus Rift launches in 2014, it will likely relaunch VR into the popular imagination, and I’m certain that, for particular industries like hardcore gaming, it will be a very important product. What’s less clear is what its broader appeal will be. Will VR turn out to be another product, like smartphones and tablets, that was considered a niche proposition at best, but which turned out to be incredibly popular? Certainly the investors who have put so much cash into Oculus feel that it might, and it may turn out to be a very smart bet.

Personally, I think it will be very successful. In a lot of ways, VR represents the complete opposite of current hardware trends. Mobile computing is about non-immersive experiences, about giving you access to music, games, and information while engaged in other activities, or when you have a few minutes to spare. It is far less of a context shift from other activities than going to a desk, sitting down and booting up a PC and starting to browse the web or play a game.

Nobody is likely to be sitting on the bus with a VR headset on just yet (although given enough time, perhaps). Nevertheless, I think the experience of a truly immersive VR display, with the ability to put you into another time and place will be enthralling enough to make it a cultural phenomenon, at least for a while, just as movement and rhythm based games were a few years ago. Whether it will last longer in the mainstream than those is unclear. In a couple of years there may be a lot of Oculus Rifts lying on the shelf alongside Wiis, gathering dust.

I suspect the medium-term success of VR will depend on three things. Firstly, how Oculus and others are able to evolve the tech, to make it less bulky, less conspicuous, and to improve resolution and latency to make it more immersive. Secondly, the hardware that is built around it, such as input and feedback devices, and how they can add to the immersion. Thirdly, the software experiences built for it. If they are too shallow or clumsy, the technology will fail to embed itself in the mainstream and will retreat to its niche applications in gaming and industry, where there will always be money and interest.

Still, in the long term, I think VR is guaranteed to seep into the everyday, until it is no longer remarked upon. It will simply be another type of display technology. In some ways, I see VR as being a more exciting prospect for when I retire, hopefully some day in the distant future. Right now, mainstream VR is in its infancy, but with another forty or fifty years of development, the possibilities for completely immersive, networked experiences are mind-boggling. Imagine spending your last years not in a care home watching daytime TV, but wired into a VR experience, letting you visit different worlds and interact with others all over the globe, not limited by your declining physical state.

Some people might think that sounds a little frightening, even dystopian, and perhaps it is a little, but it’s also thrilling to imagine the possibilities.


Many of the “disruptive” effects of new technology come down to the automation of a process that previously involved human labour. Since the Second World War, most automation has been driven by the rise of digital computers, and has involved the automation of somewhat abstract processes: calculation, accounting, written communication, etc. It’s easy to forget that the modern era of automation began not with computers but with the dawn of the industrial revolution, and for a long time it mostly affected mechanical processes. Mills and factories were the first places to be truly automated, with very disruptive effects on traditional businesses and workers in those industries.

With computers having largely saturated many areas of modern life, there is some evidence that the tide of automation may begin to roll over more mechanical processes again. Driverless cars are the most visible area of research in this new era. Since driving a car is usually thought of, and advertised, as an individual, personal, even pleasurable experience, it may not seem intuitive to think of it as a mechanical process akin to working in a factory, but it is.

When you drive a car, you are a worker operating a machine, one amongst millions, that forms part of a complex transportation system designed to move people and cargo around with mechanical efficiency. Eliminating the workers from this system was never historically a high priority, because they are unpaid, but doing so will still provide the same efficiency improvements that automating a production line in a factory can, and so it was only a matter of time until it happened. Happily, the change will also mean millions fewer hours of unpaid labour across the world, fewer accidents, and more free time to do things that don’t involve driving.

Google has also been in the news recently for acquiring a number of robotics startups, most notably Boston Dynamics, whose videos of their Big Dog robot have been viral hits on YouTube for the past few years. There has been much snarking about Google moving into the field of military robots, but while Google will presumably honour Boston Dynamics’ existing contracts with the US military, which appear to be more about robots for transportation and disaster recovery than combat, it seems unlikely that they’re moving into robotics out of a strong desire to be a military supplier.

It seems more likely that Google are reckoning on robotic automation being one of the major technology trends of the 21st century, and they want to be there first. I suspect their well-documented legal difficulties with patents in the mobile and telephony industries may have encouraged them to find new industries where they can dominate from the start. If and when driverless cars and household robots become commonplace, it may or may not be Google who builds and sells them, but it seems likely they will play a central role as a hardware and software supplier and owner of valuable patents and other IP.

Of course, this shift to greater mechanical automation is unlikely to have a great effect in 2014. Much of the technology is still too primitive for commercialisation. But it’s becoming possible to see where the general trends are heading, and they’re certainly interesting. This kind of technology tends to move forward at a more measured pace than the kind of exponential increases in storage and calculation capacity that happen in computing. Nevertheless, like a glacier carving out a valley, it can produce very significant changes over periods of decades.

The political and economic ramifications of ever greater automation may prove to be the most important result. Relatively cheap, general-purpose robotic labour could replace huge swathes of human workers in unskilled jobs, just as computers continue to squeeze out semi-skilled and skilled jobs. The result may be mass unemployment as the century continues. Some people talk about this process as a painful transition, but the greater problem is: a transition to what?

The modern globalized economic system relies on the traditional split of working, middle and upper class, with all three being large enough, and having sufficient income, to form a mass market that ensures there is money to buy cars, smartphones, computers, robots, clothes, etc. The system is often attacked for its gross inequalities: there is a huge gap between the relative income and standard of living of the three classes. But it has at least provided employment for the majority, and caused a rising tide of technological progress that has raised living standards and life expectancy throughout the world.

The problem is, what happens when automation erases the jobs not only of the working class who are involved in manufacturing, but also the middle class who do administrative and service work? The upper classes' focus on ever greater wealth extraction guarantees that these jobs will be automated if possible, but if the result is the shrinking and collapse of the working and middle classes, then the mass market itself will shrink. With nobody left to buy the cars and smartphones, the companies owned by the upper class will founder, and the entire system will grind to a halt.

What comes next isn’t obvious, but I foresee three possibilities: The first is that technology progresses fast enough that we enter a sort of quasi-utopian age before things can get too bad. If the cost of producing the energy, food and other goods necessary to sustain society drops quickly enough, it would be possible to transition to an economy where a basic standard of living is provided to all people, without a concomitant requirement to have a job or try to get one. Work would become semi-voluntary: it would still be needed to get the nicest stuff, but it would no longer be compulsory, and would generally only involve so-called “knowledge economy” jobs and management.

Admittedly, this may sound somewhat insane politically, given the current trend in developed countries is the exact opposite. Benefit systems are being dismantled, and welfare slashed. But this seems to have more to do with political ideology spurred on by economic expedience: The economic crisis was caused by reckless deregulation, but the huge deficits it created are too good an excuse to waste for those who blindly follow the free market mantra. But perhaps if the economy really began to grind to a halt as I’ve described above, reality would trump ideology and governments would reverse course. It may be a vain hope.

If that is the utopian option, then there is a more depressing, dystopian one, where the economy does collapse, at least partly. The geopolitical system will fragment, with many areas regressing to a more primitive technological state, while only small, self-sufficient islands of modernity remain, like gated communities. Manual labour, farming, trade and barter all still work; they’ve just been rendered redundant by the modern system. Remove that system, and it’s likely they would reappear soon enough.

This world would probably be a very strange, often dangerous place. There would still be a great deal of technology around; people wouldn’t instantly revert to living in huts. But technology would not form the bedrock of social infrastructure as it does today; instead, its effects would be more chaotically distributed. Even so, technological progress would probably continue to inch forward in those islands of modernity, and they might eventually build something more akin to the utopian ideal, where technology is gradually used to lift the whole world out of poverty.

The final option I foresee is that states will grasp the nettle and simply not allow widespread automation that seriously jeopardises the stability of the global market. Entrenched industries already work to protect themselves by lobbying for legislation that protects their competitive advantage. It’s possible that governments could pass laws requiring a certain amount of human labour in particular industries, and preventing certain tasks from being automated.

Such an approach hardly seems likely in the west, but the Chinese model, where markets are still somewhat subservient to the state, often being partly owned or controlled by it, seems like it might evolve into such a system. There would doubtless still be a gradual slide towards greater automation, but such deliberate attempts to retard it might at least make the transition somewhat less painful.

This all sounds rather depressing, but on the whole I am optimistic that the eventual result, if not quite the post-scarcity utopia dreamt of in science fiction, will still be a better world; it may just take us longer to get there, depending on the route we choose.

Machine Intelligence

On the flip-side of automation is the idea of producing intelligent machines. So-called strong AI has been a pipe-dream for half a century, with little obvious progress towards achieving it for much of that time. In recent years, the exponential growth of computing power has seen some progress in performing tasks like speech recognition, predictive analytics, and automated translation using machine learning. But powerful, generalized machine intelligence still seems a long way off.

However, over Christmas I read On Intelligence by Jeff Hawkins, a book published in 2004 that outlines his theory on how intelligence arises in the brain through the memory and predictive efforts of the neocortex. It’s a compelling read, and the only book on intelligence and the brain that I’ve read that really tries to tackle the idea of intelligence in a practical, systemic way. Whether it is correct or not is another matter, and the signs right now are mixed. Hawkins has founded a company called Numenta that aims to develop his ideas further, and to produce a software-based implementation of the cortical algorithms he believes underlie human and animal intelligence.
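The core memory-prediction idea can be illustrated in an extremely simplified form: a model that memorises which elements tend to follow which contexts, then predicts continuations and registers surprise at novel input. To be clear, this toy sketch is my own illustration of the general concept, not Numenta's actual cortical learning algorithm:

```python
from collections import Counter, defaultdict


class SequenceMemory:
    """Toy memory-prediction model: remembers which element tends to
    follow each short context, then predicts the most common continuation."""

    def __init__(self, context_len=2):
        self.context_len = context_len
        self.transitions = defaultdict(Counter)

    def learn(self, sequence):
        # Record, for each sliding context window, what came next.
        n = self.context_len
        for i in range(len(sequence) - n):
            context = tuple(sequence[i:i + n])
            self.transitions[context][sequence[i + n]] += 1

    def predict(self, context):
        # Predict the most frequently observed continuation, or signal
        # "surprise" (None) for a never-before-seen context.
        counts = self.transitions.get(tuple(context))
        if not counts:
            return None
        return counts.most_common(1)[0][0]


memory = SequenceMemory()
memory.learn("the cat sat on the mat".split())
print(memory.predict(["the", "cat"]))   # prints "sat"
print(memory.predict(["the", "dog"]))   # prints "None": novel input
```

Hawkins' claim is roughly that something like this, layered hierarchically over time-varying sensory patterns and operating on vastly richer representations, is what the neocortex is doing; the toy version only captures the "remember and predict" skeleton.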

It’s heady stuff, mainly because of the possibilities presented if we had the ability to produce intelligence in a predictable, repeatable and scalable manner. If we accept that the main limit to human intelligence is the evolutionary need to minimise brain size and energy consumption, then strong AI, not facing such limitations, could be built with an exponentially greater capacity for reasoning. The usefulness of such intelligences in helping us to understand the world and further human progress would be extraordinary.

The big problem is deciphering intelligence in the first place. Hawkins’ theory remains just that, and like any early attempt to synthesise a scientific framework, it likely has mistakes and misconceptions, even if it is half right to begin with. If it isn’t, and generalized intelligence proves to be beyond our ability to understand and reproduce, then AI may never progress much beyond what it can already do.

Machine intelligence is the wildcard in the pack of technologies that will affect us during the 21st century. It could prove to be almost inconsequential, just some useful algorithms for automating certain nebulous or learning-heavy tasks. Or it could be the defining advance of our age.