Early January is the time when, like everyone else, I like to imagine that the western cultural method of dividing time into years based on an arbitrary marker is meaningful enough to talk about them as discrete units, wherein trends and events can be seen as belonging to a particular year, giving it a particular character different to others that precede or follow it, rather than just happening to happen at least partly within it.
All of which is a needlessly complicated way of saying the new year is a good excuse to talk about what will be hot in 2014. For geeks, that usually means ruminating about specific new technologies that we’re looking forward to playing with, and also engaging in starry eyed, nonspecific burbling about technological trends that we’re sure will disrupt established business models, hasten the singularity, and generate lots more cash for Silicon Valley entrepreneurs.
Forecasting is a notoriously unreliable pastime, especially the further forward you go, the more variables you have to consider, and the more abstract the field. Accurate long term prediction about technology, medicine, politics, economics, and sport is almost impossible, but that doesn’t stop us trying. Partly because the alternative, just accepting the universe to be a chaotic, random sea of noise, is kinda scary. But also – and I suspect this is actually the main reason – because it’s just fun.
I expect to be working for Logical Glue for the foreseeable future, and hopefully we’ll be launching the beta version of our predictive modelling tool very soon, which will include a free option that allows developers to experiment with the platform. You can sign up for the beta programme now, and while the product descriptions may seem a little abstract right now, I assure you the real thing is very tangible. It is a powerful, fully-featured web application, built on top of AngularJS and using the latest web platform technologies. I’m very proud of what we’ve achieved so far, and I predict we’ll take things much further in 2014.
2013 saw a lot of talk and promotion around Web Components, and having been a fan of them since the early days, that’s very pleasing. Google’s Polymer project to polyfill the specs has been instrumental in changing them from a far-future pipedream to something that can be used today. In 2014, I expect to see interesting Polymer-based projects popping up, and for Chrome and Firefox to complete their implementation of the core specs.
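To make this concrete, here is a rough sketch of what an early Polymer custom element looked like at the time. The `x-greeting` element and everything inside it are invented for illustration, using the early (pre-1.0) declarative syntax:

```html
<!-- A hypothetical Polymer element: template, data binding and registration -->
<polymer-element name="x-greeting" attributes="name">
  <template>
    <!-- Markup in the template is encapsulated via Shadow DOM -->
    <p>Hello, {{name}}!</p>
  </template>
  <script>
    Polymer('x-greeting', {
      name: 'world'
    });
  </script>
</polymer-element>

<!-- Once registered, it's used like any built-in tag -->
<x-greeting name="web components"></x-greeting>
```

The appeal is obvious: components become ordinary tags, with their internals hidden behind Shadow DOM rather than leaking into the page.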
If Chrome and Firefox both support components natively, that will leave Safari and IE with no support at all. Right now, I would predict Apple are the most likely candidates to begin implementing web components, if only because there has recently been considerable mailing-list feedback on the specs from Apple’s Webkit engineers. Not all of it positive, by any means, but they’re clearly interested. Apple have also started implementing the web animations framework within Webkit.
For their part, Microsoft have been pretty quiet on the subject of web components, as far as I can tell. It seems likely that the Internet Explorer team is in a bit of a transitional phase at the moment, with IE11 having shipped, long-time leader Dean Hachamovitch having moved on, and all the other upheavals happening at Microsoft. Hopefully IE development doesn’t enter another fallow period, but if it does, we can at least be glad that IE11 is a very, very good browser. In rendering and DOM performance in particular, IE just blows away the competition, in my experience.
While Microsoft is usually silent on the future direction of Internet Explorer, the partially open nature of Chrome’s development process makes it a lot easier to augur its future from all the chatter. What’s clear is that Google are focusing very much on performance, in particular mobile performance. They’re betting on the shift towards mobile computing continuing unabated, with desktop PCs likely being consigned to a (largish) niche over time.
This focus on performance comes down to making the web’s rendering model fast on mobile devices, which means making it fast on their GPU architectures. We saw quite a bit of advice from Google’s developer advocates on this subject in 2013, such as which CSS properties are safe to animate, but in 2014 we’re likely to see a number of new techniques, such as new CSS properties, that can be used to build more performant UIs.
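The gist of that 2013 advice, roughly speaking, was to animate compositor-friendly properties like `transform` and `opacity` rather than layout-triggering ones. A sketch (the class names here are invented):

```css
/* Animating `left` forces the browser to re-run layout and paint
   on every frame of the transition */
.panel-slow {
  transition: left 300ms ease-out;
}

/* `transform` and `opacity` can usually be handled entirely by the
   GPU compositor, skipping layout and paint */
.panel-fast {
  transition: transform 300ms ease-out, opacity 300ms ease-out;
}
.panel-fast.hidden {
  transform: translateX(-100%);
  opacity: 0;
}
```

Both achieve a visually similar slide-out, but the second stays smooth on mobile GPUs because nothing needs to be re-laid-out or repainted per frame.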
It also appears that Chrome will become an increasingly core part of Android. I’m not too familiar with Android as a development platform, but it appears Google has some concerns about its current Java/Dalvik app model exposing it to continuing IP problems from Oracle and others. As such, they want to give installable, packaged Chrome apps parity with regular ones. In that sense, Android will be adopting a partial Firefox OS-like approach. All of this is good news for web developers, in theory, although it remains to be seen how much Google will try to use proprietary JS APIs exposing features of Chrome, Android and Google Play Services to lock packaged apps into Android.
I expect a big push on packaged Chrome apps for Android at some point in 2014, most likely at Google I/O.
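For reference, a packaged Chrome app on the desktop is declared with a small JSON manifest along these lines (a minimal sketch with invented names; how the Android packaging will work isn’t public yet):

```json
{
  "name": "Hello Packaged App",
  "version": "0.1",
  "manifest_version": 2,
  "app": {
    "background": {
      "scripts": ["background.js"]
    }
  }
}
```

The referenced `background.js` then listens for the `chrome.app.runtime.onLaunched` event and opens its UI with `chrome.app.window.create`. Presumably something close to this model is what would be given parity on Android.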
While Web Components got a lot of attention during 2013, the big story in day-to-day web development was probably the rise of AngularJS. I’ll admit I’m biased in this respect, as I’ve been using Angular in anger for the past six months, but there was obviously a huge upswing in interest in and usage of the framework last year, with it blowing past competitors such as Ember, Backbone, Knockout, etc. It’s not at a jQuery level of ubiquity yet, but it’s increasingly the default choice for MVC/MVVM development.
I don’t see that changing in 2014, and in the long term this year may be seen as the high water mark of Angular’s ascendancy. By 2015, Angular will likely be nearing its 2.0 release, which will see it shift from providing a component model of its own, to adding value on top of web components. In 2014 though, I expect it will dominate a great deal of commercial web application development on its own.
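For anyone who hasn’t tried it, much of Angular’s appeal comes down to declarative two-way binding. A minimal, hypothetical example using the 1.2.x API (the module and controller names are invented):

```html
<!doctype html>
<html ng-app="demo">
  <head>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.9/angular.min.js"></script>
    <script>
      // One module, one controller; $scope is the view-model
      angular.module('demo', []).controller('GreetCtrl', function ($scope) {
        $scope.name = 'world';
      });
    </script>
  </head>
  <body ng-controller="GreetCtrl">
    <!-- ng-model binds the input to $scope.name; the expression below
         updates automatically as you type -->
    <input ng-model="name">
    <p>Hello, {{name}}!</p>
  </body>
</html>
```

No manual DOM manipulation, no explicit event wiring: the template and the model stay in sync on their own, which is exactly the kind of thing Web Components will eventually provide primitives for.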
Other Web Platform Technologies
There are a myriad of other new features coming to the web platform. There always are. But as usual, their actual impact will be tempered by their unavailability on different browsers, particularly legacy versions of IE. Hopefully, the end of support for Windows XP may also hasten the decline of IE 8.
It’s possible we might end 2014 with the majority of deployed browsers supporting the final version of Flexbox, and with a good number of them supporting Grid Layout and Regions. ECMAScript 6 support will also be an interesting area to watch. Hopefully IE and Safari will move to match the support already appearing in Firefox and Chrome. Either way, the ES6 spec will at least be finished sometime in 2014.
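To give a flavour of what that ES6 support means in practice, here are a few of the headline features from the draft spec (the names below are invented for illustration):

```javascript
// Arrow functions: shorter function syntax with lexical `this`
const square = x => x * x;

// Template literals: string interpolation without concatenation
const greeting = name => `Hello, ${name}!`;

// Classes: declarative syntax over the existing prototype model
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  toString() {
    return `(${this.x}, ${this.y})`;
  }
}

// Destructuring: pull values out of arrays (or objects) in one step
const [first, second] = [10, 20];

console.log(square(4));               // 16
console.log(greeting('web'));         // Hello, web!
console.log(String(new Point(1, 2))); // (1, 2)
console.log(first + second);          // 30
```

None of this changes what JavaScript can do, but it removes a lot of the boilerplate that libraries currently paper over.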
If someone had told me a few years ago that one of the most anticipated pieces of technology in 2014 would be a virtual reality headset, I would have been skeptical, to say the least. VR seemed to be one of those technologies consigned to the dustbin of history, but, like the tablet, it seems like it was just waiting for someone to do it right, and by all accounts, Oculus have done that.
The story of the Rift has been a fascinating one, with so many interesting and dramatic beats you could hardly have written it better: A plucky young founder builds a prototype device in his basement that is apparently better than those being sold for thousands by major tech companies. With the help of a crowd-funding campaign, he launches a startup to commercialise it, with the support of an industry legend, who later quits his equally legendary company to go and work for the startup.
For a while in 2013, after Oculus had sent out its developer kits, but before John Carmack announced he was joining as CTO, a backlash was building in comment threads and forums. It was clear that the developer kits, while impressive, had limitations in resolution, tracking and latency that made them too nauseating for mainstream use. Some claimed these problems were insoluble, and pointed to the fact that Carmack had gone quiet about the Rift, particularly during the 2013 Quakecon keynote, as proof that he knew Oculus was bound to fail.
That argument was undermined somewhat when Carmack announced he was becoming CTO of Oculus. And since then, all the news has sounded very positive. By all accounts, the current prototypes of the planned consumer version of the Rift have improved massively over the developer kits in every respect. The company has also scored a huge round of further investment.
What’s impressed me most about Oculus as a company is that they clearly aren’t content with just building a cool piece of hardware. 2013 was full of news stories about supposed Oculus competitors, who were building their own headsets that supposedly blew the Rift away in some particular aspect. Yet when you dug into the story a little, you quickly realised that these were nowhere near commercialisation, and often made huge trade-offs in other areas to get the headline performance in one. None of these competitors seem to have the all-encompassing strategy that Oculus do, of building hardware that optimises for all aspects of the VR experience, and building a software platform that complements the hardware and lets people easily build great experiences for it. Nobody else seems as determined to build as large and experienced a team as Oculus have, with expertise in every area.
If, as seems very likely, the first commercial version of the Oculus Rift launches in 2014, it will likely relaunch VR into the popular imagination, and I’m certain that, for particular industries like hardcore gaming, it will be a very important product. What’s less clear is its broader appeal. Will VR, like smartphones and tablets, turn out to be a product that was considered a niche at best, but which proves incredibly popular? Certainly the investors who have put so much cash into Oculus feel that it might, and it may turn out to be a very smart bet.
Personally, I think it will be very successful. In a lot of ways, VR represents the complete opposite of current hardware trends. Mobile computing is about non-immersive experiences, about giving you access to music, games, and information while engaged in other activities, or when you have a few minutes to spare. It is far less of a context shift from other activities than going to a desk, sitting down and booting up a PC and starting to browse the web or play a game.
Nobody is likely to be sitting on the bus with a VR headset on just yet (although given enough time, perhaps). Nevertheless, I think the experience of a truly immersive VR display, with the ability to put you into another time and place will be enthralling enough to make it a cultural phenomenon, at least for a while, just as movement and rhythm based games were a few years ago. Whether it will last longer in the mainstream than those is unclear. In a couple of years there may be a lot of Oculus Rifts lying on the shelf alongside Wiis, gathering dust.
I suspect the medium-term success of VR will depend on three things. Firstly, how Oculus and others are able to evolve the tech, making it less bulky and less conspicuous, and improving resolution and latency to make it more immersive. Secondly, the hardware that is built around it, such as input and feedback devices, and how they can add to the immersion. Thirdly, the software experiences built for it. If they are too shallow or clumsy, the technology will fail to embed in the mainstream and will retreat to its niche applications in gaming and industry, where there will always be money and interest.
Still, in the long term, I think VR is guaranteed to seep into the everyday, until it is no longer remarked upon. It will simply be another type of display technology. In some ways, I see VR as being a more exciting prospect for when I retire, hopefully some day in the distant future. Right now, mainstream VR is in its infancy, but with another forty or fifty years of development, the possibilities for completely immersive, networked experiences are mindboggling. Imagine spending your last years, not in a care home watching daytime TV, but wired into a VR experience, letting you visit different worlds and interact with others all over the world, not limited by your declining physical state.
Some people might think that sounds a little frightening, even dystopian, and perhaps it is a little, but it’s also thrilling to imagine the possibilities.
Many of the “disruptive” effects of new technology come down to the automation of a process that previously involved human labor. Since the Second World War, most automation has been driven by the rise of digital computers, and has involved the automation of somewhat abstract processes: calculation, accounting, written communication, and so on. It’s easy to forget that the modern era of automation began not with computers, but with the dawn of the industrial revolution, and for a long time it mostly affected mechanical processes. Mills and factories were the first places to be truly automated, with very disruptive effects on traditional businesses and workers in those industries.
With computers having largely saturated many areas of modern life, there is some evidence that the tide of automation may begin to roll over more mechanical processes again. Driverless cars are the most visible area of research in this new era. Since driving a car is usually thought of and advertised as an individual, personal, even pleasurable experience, it may not seem intuitive to think of it as a mechanical process, akin to working in a factory, but it is.
When you drive a car, you are a worker operating a machine, one amongst millions, that forms part of a complex transportation system designed to move people and cargo around with mechanical efficiency. Eliminating the workers from this system was historically never a high priority, because they are unpaid, but doing so will still provide the same efficiency improvements that automating a production line in a factory can, and so it was only a matter of time until it happened. Happily, the change will also result in millions of hours less unpaid labor across the world, fewer accidents, and more free time to do things that don’t involve driving.
Google has also been in the news recently for acquiring a number of robotics startups, most notably Boston Dynamics, whose videos of their Big Dog robot have been viral hits on YouTube for the past few years. There has been much snarking about Google moving into the field of military robots, but while Google will presumably honour Boston Dynamics’ existing contracts with the US military, which appear to be more about making robots for transportation and disaster recovery than combat, it seems unlikely that they’re moving into robotics out of a strong desire to be a military supplier.
It seems more likely that Google are reckoning on robotic automation being one of the major technology trends of the 21st century, and they want to be there first. I suspect their well-documented legal difficulties with patents in the mobile and telephony industries may have encouraged them to find new industries where they can dominate from the start. If and when driverless cars and household robots become commonplace, it may or may not be Google who builds and sells them, but it seems likely they will play a central role as a hardware and software supplier and owner of valuable patents and other IP.
Of course, this shift to greater mechanical automation is unlikely to have a great effect in 2014. Much of the technology is still too primitive for commercialisation. But it’s becoming possible to see where the general trends are heading, and they’re certainly interesting. This kind of technology tends to move forward at a more measured pace than the kind of exponential increases in storage and calculation capacity that happen in computing. Nevertheless, like a glacier carving out a valley, it can result in very significant changes over a period of decades.
The political and economic ramifications of ever greater automation may prove to be the most important result. Relatively cheap, general-purpose robotic labor could replace huge swathes of human workers in unskilled jobs, just as computers continue to squeeze out semi-skilled and skilled jobs. The result may be mass unemployment as the century continues. Some people talk about this process being a painful transition, but the greater problem is: a transition to what?
The modern globalized economic system relies on the traditional split of working, middle and upper class, with all three being large enough, and having sufficient income, to form a mass market that ensures there is money to buy cars, smartphones, computers, robots, clothes, etc. The system is often attacked for its gross inequalities: there is a huge gap between the relative income and standard of living of the three classes. But it has at least provided employment for the majority, and driven a rising tide of technological progress that has raised living standards and life expectancy throughout the world.
The problem is, what happens when automation erases the jobs not only of the working class who are involved in manufacturing, but also the middle class who do administrative and service work? The upper classes’ focus on ever greater wealth extraction guarantees that these jobs will be automated if possible, but if the result is the shrink and collapse of the working and middle classes, then the mass market itself will shrink. With nobody left to buy the cars and smartphones, the companies owned by the upper class will founder, and the entire system will grind to a halt.
What comes next isn’t obvious, but I foresee three possibilities: The first is that technology progresses fast enough that we enter a sort of quasi-utopian age before things can get too bad. If the cost of producing the energy, food and other goods necessary to sustain society drops quickly enough, it would be possible to transition to an economy where a basic standard of living is provided to all people, without a concomitant requirement to have a job or try to get one. Work would become semi-voluntary, in that it would still be needed to get the nicest stuff, but it would no longer be compulsory, and would generally only involve so-called “knowledge economy” jobs and management.
Admittedly, this may sound somewhat insane politically, given that the current trend in developed countries is the exact opposite. Benefit systems are being dismantled, and welfare slashed. But this seems to have more to do with political ideology spurred on by economic expedience: the economic crisis was caused by reckless deregulation, but the huge deficits it created are too good an excuse to waste for those who blindly follow the free market mantra. But perhaps if the economy really began to grind to a halt as I’ve described above, reality would trump ideology and governments would reverse course. It may be a vain hope.
If that is the utopian option, then there is a more depressing, dystopian one, where the economy does collapse, at least partly. The geopolitical system would fragment, with many areas regressing to a more primitive technological state, while only small, self-sufficient islands of modernity remain, like gated communities. Manual labor, farming, trade and barter all still work; they’ve just been rendered redundant by the modern system. Remove that system, and it’s likely they would reappear soon enough.
This world would probably be a very strange, often dangerous place. There would still be a great deal of technology around, and people wouldn’t instantly revert to living in huts, but it would not form the bedrock of social infrastructure, as it does today; instead, its effects would be more chaotically distributed. Even so, technological progress would probably continue to inch forward in those islands of modernity, and they might eventually build something more akin to the utopian ideal, where technology is gradually used to lift the whole world out of poverty.
The final option I foresee is that states will grasp the nettle and simply not allow widespread automation that seriously jeopardises the stability of the global market. Entrenched industries already work to protect themselves by lobbying for legislation that protects their competitive advantage. It’s possible that governments could pass laws requiring a certain amount of human labor in particular industries, and preventing certain tasks from being automated.
Such an approach hardly seems likely in the west, but the Chinese model, where markets are still somewhat subservient to the state, often being partly owned or controlled by them, seems like it might evolve into such a system. There would doubtless still be a gradual slide towards greater automation, but such deliberate attempts to retard it might at least make the transition somewhat less painful.
This all sounds rather depressing, but on the whole I am optimistic that the eventual result, if not quite the post-scarcity utopia dreamt of in science fiction, will still be a better world; it may just take us longer to get there, depending on the route we choose.
On the flip-side of automation is the idea of producing intelligent machines. So-called strong AI has been a pipe-dream for half a century, with little obvious progress towards achieving it for much of that time. In recent years, the exponential growth of computing power has seen some progress in performing tasks like speech recognition, predictive analytics, and automated translation using machine learning. But powerful, generalized machine intelligence still seems a long way off.
However, over Christmas I read On Intelligence by Jeff Hawkins, a book published in 2005 that outlines his theory of how intelligence arises in the brain through the memory and predictive efforts of the neocortex. It’s a compelling read, and the only book on intelligence and the brain that I’ve read that really tries to tackle the idea of intelligence in a practical, systemic way. Whether it is correct or not is another matter, and the signs right now are mixed. Hawkins has founded a company called Numenta that aims to develop his ideas further, and to produce a software-based implementation of the cortical algorithms he believes underlie human and animal intelligence.
It’s heady stuff, mainly because of the possibilities presented if we had the ability to produce intelligence in a predictable, repeatable and scalable manner. If we accept that the main limit to human intelligence is the evolutionary need to minimise brain size and energy consumption, then strong AI, not facing such limitations, could be built with an exponentially greater capacity for reasoning. The usefulness of such intelligences in helping us to understand the world and further human progress would be extraordinary.
The big problem is deciphering intelligence in the first place. Hawkins’ theory remains just that, and like any early attempt to synthesise a scientific framework, it likely has mistakes and misconceptions, even if it is half right to begin with. If it isn’t, and generalized intelligence proves to be beyond our ability to understand and reproduce, then AI may never progress much beyond what it can already do.
Machine intelligence is the wildcard in the pack of technologies that will affect us during the 21st century. It could prove to be almost inconsequential, just some useful algorithms for automating some nebulous or learning-heavy tasks. Or it could be the defining advance of our age.