The news was the big surprise saved for the end of a two-hour keynote at the search giant’s annual Google I/O event in the heart of Silicon Valley.
“We have started building tensor processing units…TPUs are an order of magnitude higher performance per Watt than commercial FPGAs and GPUs, they powered the AlphaGo system,” said Sundar Pichai, Google’s chief executive, citing the Google computer that beat a human Go champion.
The accelerators have been running in Google’s data centers for more than a year, according to a blog by Norm Jouppi, a distinguished hardware engineer at Google. “TPUs already power many applications at Google, including RankBrain, used to improve the relevancy of search results and Street View, to improve the accuracy and quality of our maps and navigation,” he said.
The chips ride a module that plugs into a hard drive slot on server racks. Engineers had them running just 22 days after they tested first silicon, said Jouppi, who previously helped design servers and processors at Hewlett-Packard and Digital Equipment.
Given the nature of AI algorithms, “the chip [can] be more tolerant of reduced computational precision, which means it requires fewer transistors per operation…[and thus] can squeeze [in] more operations per second,” he said. The project started several years ago. Google has been hiring engineers with semiconductor expertise for some time. However, it managed to keep secret what they were working on, despite the fact that the chips are already running in systems.
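The precision trade-off Jouppi describes can be sketched in a few lines. The snippet below is an illustration, not Google’s code: it quantizes a layer’s float32 weights down to 8-bit integers (one common reduced-precision scheme) and shows that a dot product computed from the narrow weights lands close to the full-precision answer. All names and the quantization scheme are illustrative assumptions.

```python
# Illustrative sketch: why inference tolerates reduced precision.
# Quantize float32 weights to int8, compute, and compare to full precision.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256).astype(np.float32)  # activations
w = rng.standard_normal(256).astype(np.float32)  # weights

# Symmetric 8-bit quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(w).max() / 127.0
w_q = np.round(w / scale).astype(np.int8)        # 8 bits per weight, not 32

full = float(x @ w)                               # float32 reference result
quant = float(x @ (w_q.astype(np.float32) * scale))  # reduced-precision result

rel_err = abs(full - quant) / max(abs(full), 1e-9)
print(f"relative error from 8-bit weights: {rel_err:.4f}")
```

Each weight now needs a quarter of the bits, which is the effect Jouppi points to: fewer transistors per multiply-accumulate, so more operations fit on the same die.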
The company is not the first to design an accelerator specifically for AI. Nervana Systems is preparing a cloud service that will be based on its own AI accelerators. Movidius has its own merchant chip for embedded applications, and recently announced plans for a high-end version.
The news comes amid a broad debate in the computing industry over the last few years about how best to accelerate emerging AI algorithms such as convolutional neural networks. To date, Microsoft and Baidu have opted to use FPGA accelerators for their cloud services. Facebook designed a GPU accelerator and made it open source.
The algorithms caught fire about three years ago when they demonstrated the ability to recognize images as well as or better than humans. Google’s demonstration at playing Go was another key milestone given the complexity of the game.
In one game of the match, “move 37 was the most beautiful move due to its creativity,” said Pichai. “We normally don’t associate computers with making creative choices, so this is a significant achievement in AI,” he said, noting the human Go champion has since used the move in other games.
Google released no details about the new chips. Pichai said they run the search giant’s TensorFlow algorithms, which he said has become one of the most popular projects on GitHub.
Don’t expect Google to provide merchant versions of the chips. Access to TPU hardware “will be one of the biggest differentiators for the Google Cloud Platform,” he added.
Pichai gave examples of how Google is using AI to make robotic arms more accurate. It is also working on an expert system to help prevent diabetic blindness through early diagnosis.
“We live in an extraordinary period for computing…The real test is whether humans can achieve more with AI assisting them so things previously thought impossible may become possible,” he concluded.
Google claimed the TPUs are three process generations ahead of the competition, noted Kevin Krewell, senior analyst with Tirias Research. The TPUs are “likely optimized for a specific math precision, possibly 16-bit floating point or even lower-precision integer math,” Krewell said.
“It seems the TPU is focused on the inference part of CNNs and not the training side,” Krewell said. “Inference requires less complex math, and it appears Google has optimized that part of the equation.
“On the training side, the requirements include very large data sets, which the TPU may not be optimized for. In this regard, Nvidia’s Pascal/P100 may still be an appealing product for Google,” he added.
Beyond the surprise news of the TPUs, the annual Google I/O was in many ways about the search giant playing catch-up with rivals Amazon, Apple and Facebook’s Oculus in areas from virtual reality to smart homes and watches.
In VR, Google will make its own hardware, and has announced a reference design for VR headsets and controllers that others can make using extensions in Android N. A beta version of the operating system is available now, with the first VR hardware using it coming in the fall.
Google calls its approach to VR Daydream and worked with handset and chip vendors to define a specification for smartphones that are Daydream-ready. Phones compliant with the spec are expected this fall from the likes of HTC, Huawei, LG, Samsung and Xiaomi.
Android N will support VR latencies as low as 20 milliseconds, Google claimed. The company is working with game and movie developers to release VR titles for Android N. It will also support Daydream VR in its own services including new and existing YouTube videos, Google Photos and Street View in Google Maps.
Overall, Android N will pack 250 new features, including support for Vulkan, the graphics API also used by desktops and game consoles. It sports file-based encryption, a faster runtime and a new JIT compiler that loads apps faster while using less memory.
Separately, Google announced it will ship this fall its own voice-based controller called Home, competing with the Amazon Echo. Home will act as a gateway controlling delivery of digital music and video to speakers and TVs. It has its own built-in speakers, links to home devices like Nest thermostats and can process natural language Google search requests.
In addition, Google announced Android Wear 2.0, a version that better mixes and matches data from various applications and is ready for cellular-enabled watches.
Finally, Google also previewed Duo, its answer to Apple’s FaceTime video calling app. It uses features in the WebRTC standard to show video of a caller before a user picks up a call and will be available on both Android and iOS this summer.
— Rick Merritt, Silicon Valley Bureau Chief, EE Times