On October 4th, roughly one year after the introduction of its branded line of hardware products, Google unveiled a second iteration of “Made by Google” hardware. This was a major product launch, but more than that: the presenters repeatedly hammered home Google’s “AI first” mantra, backing it up with a second-generation branded product line built around AI and machine learning.
The company’s hardware strategy is clear. Google believes it is uniquely positioned to blend AI+Software+Hardware to deliver innovative products that will win in the marketplace, even if they are late to market. This second generation of Google hardware provides abundant proof that the company can bring uniquely differentiated features to existing product categories, and maybe even create some new ones.
In this post I offer some observations on Google’s product strategy and review some of the key technologies and product announcements coming out of the hardware event.
The Magic Intersection

According to Rick Osterloh, Google’s hardware chief, smartphones have matured into near-commodity status, and annual “big leaps” based solely on hardware are no longer possible. Osterloh said that innovation in smartphones and other devices will come from advanced engineering at the intersection of AI, hardware, and software. I call this the “magic intersection.” Google believes its unique competence in engineering products at this intersection will allow it to differentiate and win in the hardware space, even when it is late to market (e.g., smart speakers) or entering an established product category (e.g., smartphones).
AutoML

Neural nets are fiendishly difficult to design, and skilled practitioners are in short supply. To partially address this shortage, Google has developed a meta-machine-learning technology that uses reinforcement learning to design neural nets. Dubbed “AutoML,” the technology now produces models that are more accurate and resource-efficient than human-designed ones in image classification and object detection.
One aspect glossed over by Google is the issue of transparency. Neural nets are astonishingly successful at pattern recognition, but they are largely opaque; no one knows precisely how or why they succeed. This problem is likely to be magnified when we have models designed by other models.
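To make the idea concrete, here is a deliberately toy sketch of the controller-style search loop behind neural architecture search: sample an architecture, score it, and reinforce the choices that scored well. Google has not published AutoML’s internals, so the search space, the scoring function, and the update rule below are all invented for illustration.

```python
import random

# Hypothetical search space: an "architecture" is just a choice of
# layer count and width. The real AutoML space is vastly richer.
SEARCH_SPACE = {"layers": [2, 4, 8], "width": [32, 64, 128]}

def sample_architecture(prefs):
    """Sample an architecture, biased by the controller's learned preferences."""
    return {k: random.choices(opts, weights=prefs[k])[0]
            for k, opts in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for 'train the net, return validation accuracy' (pure fiction)."""
    return (0.5 + 0.05 * SEARCH_SPACE["layers"].index(arch["layers"])
                + 0.03 * SEARCH_SPACE["width"].index(arch["width"]))

def search(steps=200, lr=0.5):
    # Controller state: one preference weight per option, reinforced by reward.
    prefs = {k: [1.0] * len(v) for k, v in SEARCH_SPACE.items()}
    best, best_score = None, -1.0
    for _ in range(steps):
        arch = sample_architecture(prefs)
        reward = evaluate(arch)
        # REINFORCE-flavored update: boost whatever was sampled,
        # in proportion to the reward it earned.
        for k in prefs:
            prefs[k][SEARCH_SPACE[k].index(arch[k])] += lr * reward
        if reward > best_score:
            best, best_score = arch, reward
    return best

print(search())
```

The opacity problem mentioned above is visible even in this toy: the loop tells you *which* architecture won, but nothing about *why* it won.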
ML Everywhere

Machine learning and AI are most obviously embodied in the Google Assistant. But the technology is far more pervasive than that, working “under the hood” to solve problems throughout the Google product line:
–Automatically adjusting picture quality in the Pixel camera
–Reducing WiFi congestion in Google’s router
–Improving speech recognition in Google Home through “neural beamforming”
–Automatically adjusting sound quality in the Google Home Max speaker (“Smart Sound”) to account for placement and surroundings
On-device ML

Moving machine learning onto the device is critically important as cloud-based platforms confront consumer concerns over the privacy, security, and control of their data. We are still early in the infusion of machine learning into all things digital, but Google is already working preemptively to defuse this growing backlash by running models on-device. Doing so also helps it counter the Apple narrative, in which Apple is the protector of consumer privacy while Google is the privacy invader, indiscriminately vacuuming up user data for its own ends.
Two products introduced at the Oct. 4 event already feature on-device machine learning:
–The Pixel phone has a “Now Playing” feature that uses on-device machine learning to identify songs.
–Google Clips uses on-device ML to capture spontaneous, candid video snippets.
On-device machine learning is a trend that will accelerate, and I expect Google to integrate purpose-built silicon into its consumer hardware to optimize locally running machine-learning models.
Voice Match

The Google Assistant has been trained on over 50 million samples from all kinds of speakers and ambient environments. Google claims this enormous data set has helped it develop the world’s best speech recognition. With Voice Match, that best-in-class speech recognition gets personalized. Based on what the company called a breakthrough from earlier in the year, Voice Match lets users train Google Home to recognize their voices, allowing the device to tailor answers to specific people.
Neither Alexa nor Siri has this feature, so for now it’s a differentiator for Google Assistant and Google Home.
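Speaker-identification systems of this kind typically map an utterance to a fixed-length voice embedding and compare it against enrolled profiles. Google hasn’t published Voice Match’s internals, so the following is a generic sketch: the `VoiceMatcher` class, the threshold, and the hand-crafted embeddings are all assumptions for illustration (a real system would get embeddings from a speaker-encoder network).

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

class VoiceMatcher:
    """Toy speaker identification: enroll one embedding per user, then
    match a new utterance to the closest enrolled user above a threshold."""
    def __init__(self, threshold=0.8):
        self.profiles = {}          # user -> enrolled voice embedding
        self.threshold = threshold

    def enroll(self, user, embedding):
        self.profiles[user] = embedding

    def identify(self, embedding):
        best_user, best_sim = None, self.threshold
        for user, profile in self.profiles.items():
            sim = cosine(embedding, profile)
            if sim > best_sim:
                best_user, best_sim = user, sim
        return best_user            # None if nobody clears the threshold

# Hand-crafted "embeddings" standing in for speaker-encoder output.
m = VoiceMatcher()
m.enroll("alice", [0.9, 0.1, 0.0])
m.enroll("bob",   [0.0, 0.2, 0.9])
print(m.identify([0.85, 0.15, 0.05]))  # close to alice's profile
```

The threshold is what keeps the device from tailoring answers to a stranger: an unenrolled voice should match no profile and fall back to generic responses.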
Google Clips

This was the one truly novel product unveiled. Google Clips is a hands-free camera that uses on-device ML to capture spontaneous, candid video snippets. It “captures the moment, so you can be in the moment.”
The initial version is aimed at presumably benign use cases for parents and pet owners. Does that mean it ships out of the box with fixed machine-learning models tuned to recognize children and various kinds of pets? We’ll need to wait for more details from Google. For now, the use cases remain murky. Clips has the feel of “throwing something against the wall” rather than functioning as an integral piece of the product lineup. Still, the street finds its own uses for things, and although the $249 price point isn’t inexpensive, it’s probably low enough to encourage at least some adoption and experimentation. Google has learned from the Glass fiasco: the device deliberately looks like a camera and carries a visible indicator light, so people know, to some extent at least, what it is and don’t feel they’re being surreptitiously photographed.
Active Edge

When it comes to UI innovations we think of Apple, but with Active Edge Google has come up with a genuinely interesting solution for summoning the Google Assistant on Pixel smartphones: users simply squeeze the phone. Machine learning, once again under the hood, distinguishes intentional squeezes from unintentional ones.
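Google hasn’t detailed the Active Edge model, but the problem’s shape is clear: classify a short window of readings from the side-mounted pressure sensors as a deliberate squeeze or incidental contact. A deliberately simple, heuristic stand-in for that learned classifier, with every threshold invented:

```python
def is_intentional_squeeze(samples, min_peak=0.6, max_duration=15, symmetry=0.3):
    """Classify a window of normalized pressure samples (0..1).

    Heuristic stand-in for the learned model: an intentional squeeze is a
    short, sharp, roughly symmetric pulse, while a pocket or ordinary grip
    produces long, lopsided, or weak pressure. All thresholds are invented.
    """
    peak = max(samples)
    if peak < min_peak:                       # too weak: incidental contact
        return False
    active = [s for s in samples if s > peak / 2]
    if len(active) > max_duration:            # too long: sustained grip
        return False
    rise = samples.index(peak)                # samples before the peak
    fall = len(samples) - 1 - rise            # samples after the peak
    if abs(rise - fall) > symmetry * len(samples):   # lopsided pulse
        return False
    return True

squeeze = [0.0, 0.2, 0.5, 0.9, 0.5, 0.2, 0.0]        # sharp, symmetric pulse
pocket  = [0.4, 0.45, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]  # weak, sustained pressure
print(is_intentional_squeeze(squeeze), is_intentional_squeeze(pocket))
```

The real value of a learned model over hand-tuned rules like these is exactly the ambiguous middle ground: grips that look almost like squeezes, across thousands of hand sizes and cases.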
“Lens It”

Google Lens, announced at Google I/O back in May, will be available as a “preview” on Pixel phones. It works by combining image classification, object detection, and Google’s knowledge graph. When users see an object of interest, they can “lens it” by simply pointing the smartphone camera at it and tapping the Lens icon, triggering contextually relevant information and actions. Deeper integration with Google Assistant is promised in the near future.
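The detect-then-look-up pipeline just described can be sketched in a few lines. Everything below is hypothetical: the detector is a stub, and the dictionary is a toy stand-in for the knowledge graph, purely to show how detection results become contextual information and actions.

```python
# Toy stand-in for Google's knowledge graph: entity -> info + actions.
KNOWLEDGE_BASE = {
    "landmark:eiffel_tower": {"info": "Wrought-iron tower in Paris",
                              "actions": ["get directions", "opening hours"]},
    "product:book":          {"info": "A printed work",
                              "actions": ["search reviews", "buy"]},
}

def detect_objects(image):
    """Stub object detector: returns (entity_label, bounding_box) pairs.
    A real system would run classification/detection models here."""
    return [("landmark:eiffel_tower", (10, 10, 200, 300))]

def lens(image):
    """Detect objects, then enrich each with knowledge-base info and actions."""
    results = []
    for label, box in detect_objects(image):
        entry = KNOWLEDGE_BASE.get(label)
        if entry:
            results.append({"label": label, "box": box, **entry})
    return results

print(lens("vacation_photo.jpg"))
```

The interesting engineering is in the last step: the knowledge graph is what turns “this is the Eiffel Tower” into something actionable like directions or opening hours.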
Pixel Buds

Earbuds ($159) that provide, among other things, real-time translation across 40 languages. Go here for an impressive demo.
Why is Google Doing Hardware?
Google’s core business is search advertising, so why is it in hardware? Margins are thin, competition is fierce, and Google’s track record in consumer hardware is less than stellar. Furthermore, Google has a global ecosystem of hardware partners around Android. Why compete with your own ecosystem?
The answer is, I think, clearly evident in Sundar Pichai’s recognition that AI is a major inflection point in computing. AI changes search and therefore poses a challenge to Google’s conventional search-oriented business model. The ultimate contours of that change are unknown, but glimmers are appearing. At a minimum, we can see quasi-intelligent, ML-based agents animating more and more devices at home, in the car, and at work. As they advance, they will simply answer our questions directly, or even before we ask them. Traditional web search will gradually diminish in importance.
Google can’t count on an ecosystem in this emerging landscape. The Android hardware ecosystem is fragmented and only loosely aligned with Google’s business objectives. The biggest Android OEM, Samsung, is frankly unaligned and in competition with Google. The others operate on razor-thin margins and lack the financial and technical resources to innovate AI-first products. They can’t even be counted on to push essential Android OS updates to their users. More importantly, what is the win/win business model for the ecosystem and Google in an AI-first world where search advertising is no longer a reliable and ubiquitous revenue driver?
In this light, we can see why Google is building its own line of branded hardware: it is acting now to intercept the transition to a post-web-search world. How it will monetize products in that world is uncertain, and it’s doubtful the company will be able to replicate the market-share dominance it achieved in search. But it’s obvious that Apple, Amazon, and others are moving fast to define and capture markets in this new AI-first world. Google has to be there. And it needs to control its own destiny by building its own products at the magic intersection of AI+Software+Hardware.