Yesterday Google announced a big update to its Google Now product, which will add contextualized information to every app on your phone. For example, if Spotify is playing a song, simply asking your phone what the artist's real name is will produce an accurate result. No need to flip between apps and type things into a web search.
It's the latest upgrade to Google Now, the company's voice search product that has been light years ahead of Apple's cute-sounding yet not very functional voice assistant, Siri.
The reason Google is so far ahead is that the computing horsepower behind Google Now is vastly superior to Apple's Siri, both in its ability to handle complex tasks and in how, exactly, the software interprets what is being said.
Google's product is a learning, near-living piece of software that relies on complex 'neural network' algorithms to try to understand what a user is saying and in what context. Siri, by contrast, is a primitive list of basic commands mapped to actions.
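To make that distinction concrete, here's a minimal sketch of the two approaches, in Python with made-up names (neither company's actual code): a fixed command table breaks on any phrasing it hasn't seen, while a system that scores intents can handle paraphrases. Simple keyword overlap stands in for a trained neural network below.

```python
# A minimal sketch of the difference, not either company's actual code.

# Siri-style: a fixed table of phrases mapped to actions. Anything
# not in the table falls through to a generic web search.
COMMANDS = {
    "set a timer for five minutes": "start_timer",
    "call mom": "dial_contact",
}

def rigid_assistant(utterance):
    return COMMANDS.get(utterance.lower().strip(), "fallback_web_search")

# Learned-style: score every known intent against the utterance and
# pick the best match, so unseen paraphrases can still resolve. A real
# system would use a trained neural network; simple keyword overlap
# stands in for that here, purely for illustration.
INTENT_KEYWORDS = {
    "start_timer": {"timer", "countdown", "minutes"},
    "dial_contact": {"call", "phone", "ring"},
}

def learned_assistant(utterance):
    words = set(utterance.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback_web_search"

print(rigid_assistant("ring my mother"))    # -> fallback_web_search
print(learned_assistant("ring my mother"))  # -> dial_contact
```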
The latest addition to Google's Now product comes thanks to its $500 million acquisition of artificial intelligence specialist DeepMind. The new version of Now will understand the context you're speaking in when you ask your phone to do something, rather than requiring you to explicitly tell your phone where you are. While Google Now knows, Apple's Siri just does what it's told, with no real thinking going on behind the scenes.
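Here's a hedged sketch of what that context awareness amounts to, using the Spotify example above; the `context` structure and its field names are invented purely for illustration:

```python
# Hypothetical sketch of context injection; the `context` structure
# and field names are invented for illustration.
def resolve_query(query, context):
    """Rewrite vague references in the query using on-screen context."""
    playing = context.get("now_playing", {})
    if "the artist" in query and "artist" in playing:
        return query.replace("the artist", playing["artist"])
    return query  # no usable context: fall back to the literal query

context = {"app": "Spotify",
           "now_playing": {"artist": "Lady Gaga", "track": "Poker Face"}}
print(resolve_query("what is the artist's real name", context))
# -> "what is Lady Gaga's real name"
```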
Apple clearly feels threatened by this development, enough so that it was compelled to ‘leak’ a story to insider blog 9to5Mac that it, too, is developing a competitor to Google Now.
The system is codenamed “Proactive,” and will leverage the company's suite of iOS apps, including Siri, Contacts, Calendar and Passbook, as well as other apps, to provide timely information based on how users use their devices.
Google Now has had similar features for nearly three years, highlighting that Apple lacks the technical skills to put together a world-beating artificial intelligence project on its iDevices. Apple is, after all, a design company, not a tech company.
Worryingly, the new app is supposed to include an augmented reality component, where users can take a picture of something in real life and have the phone provide information about it, tying into Apple's lackluster Maps application.
Sounds good in theory, but augmented reality is hard. Just ask Color, the once-hyped app that was supposed to do just that but instead fizzled into oblivion due to the immense technical challenge of figuring out exactly where you are and what you're looking at. The software works fine for obvious places, say the Statue of Liberty, but is ineffective if you were, say, looking at a relatively unknown piece of art and wanted to know about the artist. There are just too many items in the world to catalog, even with the immense computing power of today's cutting-edge data centers.
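One way to picture the cataloging problem is nearest-neighbor lookup: recognition only works when something close to your photo is already in the index. A toy sketch, with two-dimensional points standing in for real image embeddings:

```python
import math

# Toy sketch of catalog-based recognition. Real systems embed images
# with a neural network; 2-D points stand in for those embeddings here.
CATALOG = {
    "Statue of Liberty": (0.9, 0.1),
    "Eiffel Tower": (0.1, 0.9),
}

def recognize(embedding, threshold=0.5):
    """Return the closest catalog entry, or admit defeat."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    name, ref = min(CATALOG.items(), key=lambda kv: dist(embedding, kv[1]))
    return name if dist(embedding, ref) < threshold else "unknown"

print(recognize((0.85, 0.15)))  # near an indexed landmark -> match
print(recognize((0.50, 0.50)))  # an uncataloged artwork -> "unknown"
```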
Apple's messaging in the leak says the new app will react to a person's app usage: if a person opens Facebook when they wake up each morning, Proactive will note that pattern and surface a widget in the morning so that user can get into Facebook quickly.
The Google engineers are no doubt laughing about this in Mountain View. Such a ‘proactive’ feature amounts to machine learning 101.
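For the skeptical, here's roughly what such a feature reduces to: counting which app gets opened at which hour and suggesting the most frequent one. A minimal sketch, with all names illustrative:

```python
from collections import Counter, defaultdict

# A deliberately basic usage predictor of the kind the leak describes.
# All names are illustrative; nothing here is taken from Apple's code.
launches = defaultdict(Counter)  # hour of day -> app launch counts

def record_launch(hour, app):
    launches[hour][app] += 1

def suggest_app(hour):
    """Suggest the app most often opened at this hour, if any."""
    counts = launches.get(hour)
    return counts.most_common(1)[0][0] if counts else None

# A week of someone opening Facebook at 7 a.m. is all the "training"
# this takes.
for _ in range(7):
    record_launch(7, "Facebook")
record_launch(7, "Mail")

print(suggest_app(7))  # -> Facebook
```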
If Proactive turns out to be the marquee feature of iOS 9, due to be unveiled next month, Apple users and investors should be scared.
Such simple artificial intelligence, built on top of the already terrible Apple Maps product, means that Apple's software is three to four generations behind Google's.
In short, Apple has a huge software problem on its hands.