Google Delves Deeper Into Machine Learning with TPU 3.0

Article By : Rick Merritt

TPU 3.0 takes center stage as Google unveils a list of new projects enabled by deep learning, including driverless cars from Waymo

SAN JOSE, Calif. — Google announced at its annual developer conference here a laundry list of ways it is expanding its use of deep learning, along with a new TPU 3.0 chip driving them. In perhaps the most surprising of the new AI-powered announcements, Google's sister company Waymo said it will launch a driverless ride-hailing service in Phoenix later this year.

In a keynote, Google chief executive Sundar Pichai addressed rising concerns about the negative impacts of machine learning and the tech industry in general. He discussed new initiatives and examples of how Google aims to make a positive difference in everything from accessibility to fake news and smartphone addiction.

“Technology can be a positive force, but we can’t be wide-eyed about the innovations technology creates. Very real questions are being raised about the impact of advances and the role they play. The path forward has to be calibrated carefully,” he said.

The slate of new Google applications using machine learning includes:

  • Smart displays from JBL, Lenovo and LG using the Google Assistant
  • An improved Google Assistant that parses more complex queries
  • Computer vision capabilities integrated into camera apps in smartphones from 11 vendors
  • A new set of machine learning APIs in the next generation of Android
  • An extension of autocorrect that can suggest whole sentences or phrases

The most surprising of these is that Waymo will launch a ride-hailing service using self-driving cars in Phoenix later this year.

“That’s just the beginning. We are building a better driver for ride-hailing, logistics and personal cars. Our technology is an enabler for all these industries, and we will partner with many companies,” said John Krafcik, chief executive of Waymo, a division of Google’s parent company, Alphabet.

Test users in Phoenix have been riding in Waymo’s self-driving cars for some time, Krafcik said. To date its fleet has driven more than six million miles on public roads and five billion miles in simulations, he added.

Waymo partnered in 2013 with Google’s machine-learning unit, Google Brain, applying deep learning to cut its pedestrian-detection errors by 100x. Using Google’s TensorFlow framework and TPUs, it now trains models 15x faster and has developed models that filter out sensor noise caused by snow.

Google has deployed a new version of its TPUs that uses liquid cooling, a first for a Google data center, to boost system performance eight-fold to “well over 100 petaflops,” Pichai said.

Liquid-cooled TPU 3.0 system

The TPU 3.0 brings liquid cooling to Google’s data centers. (Images: Google)


Few insights on Google’s TPU 3.0

Google did not describe its TPU 3.0 chips, or the systems based on them, in detail. Instead, it focused mainly on how to use the TPU 2.0 systems, which are still in an early phase of being offered as a cloud service.

The company declined requests for an interview, suggesting that the TPU 3.0 is still in a research phase. Indeed, Google is still taking flak from rivals such as Nvidia because its TPU 2.0 cloud service supports only a limited, though growing, set of reference models.

The good news for Google is that it can show solid and improving results on its Cloud TPU service. It detailed a handful of results training neural-network models for image- and speech-recognition jobs that ran in a few hours to a few days and cost less than $500, in several cases less than $50, far below the weeks of time and thousands of dollars such jobs required in the recent past.
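
For context, the sketch below shows roughly how a training job is pointed at a Cloud TPU using recent versions of TensorFlow's tf.distribute API. This is a generic, assumed setup, not the exact configuration Google demonstrated; the TPU name "my-tpu" is a placeholder for a TPU provisioned in the user's own project.

```python
import tensorflow as tf

# Connect to a Cloud TPU. "my-tpu" is a placeholder for a real TPU name
# or grpc:// address provisioned in the user's project.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Variables created inside the strategy scope are replicated across the
# TPU cores, and fit() shards each batch among them.
with strategy.scope():
    model = tf.keras.applications.ResNet50(weights=None, classes=1000)
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

# train_ds would be a tf.data.Dataset of (image, label) batches,
# e.g. ImageNet streamed from Cloud Storage:
# model.fit(train_ds, epochs=90)
```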

The sector is moving quickly. Google has integrated into its Cloud TPU service algorithmic advances shown by fast.ai in the recent DAWNBench competition at Stanford. Those progressive-training and aggressive learning-rate techniques helped drive the cost of training a ResNet-50 model down to $25 from $59.
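
The article doesn't spell out what progressive training means in practice. Below is a minimal, illustrative Keras sketch of the progressive-resizing idea: train cheaply at low resolution first, then finish at full size, dropping a deliberately high initial learning rate as the images grow. The schedule, model head and synthetic stand-in data are invented for illustration; this is not fast.ai's actual DAWNBench recipe.

```python
import numpy as np
import tensorflow as tf

# A size-agnostic ResNet-50: fully convolutional backbone with global
# pooling, so the same weights can be trained at several resolutions.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(None, None, 3), pooling="avg")
model = tf.keras.Sequential(
    [backbone, tf.keras.layers.Dense(1000, activation="softmax")])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.4, momentum=0.9),
    loss="sparse_categorical_crossentropy")

def synthetic_batches(size, batches=4, batch_size=32):
    # Stand-in for a real input pipeline such as ImageNet; random data only.
    x = np.random.rand(batches * batch_size, size, size, 3).astype("float32")
    y = np.random.randint(0, 1000, batches * batch_size)
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size)

# Progressive resizing with an aggressive-then-decaying learning rate.
# The (size, epochs, lr) schedule here is illustrative only.
for size, epochs, lr in [(128, 2, 0.4), (224, 2, 0.1), (288, 1, 0.01)]:
    model.optimizer.learning_rate.assign(lr)
    model.fit(synthetic_batches(size), epochs=epochs)
```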

Google TPU Pods

Google’s TPU pods use a proprietary interconnect to speed AI results.


Google Assistant gets conversational

Google announced a handful of capabilities that could help its Assistant catch up with Amazon’s Alexa, which dominates the emerging sector.

For example, the next-generation Assistant will support extended conversations without needing to hear the “Hey Google” wake word before each query. In addition, it will parse multiple questions in a single request.

Google demonstrated a Lenovo smart display integrating the Assistant to act as a voice-controlled TV, cooking assistant and digital picture frame. Such products will ship from multiple companies in July. By contrast, Amazon has released multiple smart displays using Alexa, but has not opened its designs to third parties yet.

In tandem with the displays, Google Assistant will support answers with text, images and videos as well as audio replies. Demos showed the capabilities running on smartphones. The feature will be available on Android this summer and on iOS later this year.

Google is also working with more than eight companies, including Starbucks, to support voice-enabled services over the Web, such as food ordering. The Assistant will also be integrated with Google Maps this summer.

To date, more than 500 million devices use the Assistant, most of them handsets. Google said it is supported by 40 car brands and 5,000 consumer devices and will be available in 30 languages in 80 countries by the end of the year.

Taking a somewhat frightening step forward, Pichai demoed the Assistant making calls on a user’s behalf to book a haircut and a table at a restaurant. The demos presented a lifelike Assistant that could understand difficult accents and free-flowing conversations without revealing that the requests were coming from an automated system.

The Assistant may soon start calling businesses to update Google’s search engine on details such as a company’s holiday hours.

“We have many examples of calls where things don’t go as expected, but the Assistant understands and handles the call gracefully. We want to get the expectations right for users and businesses…We’re going to work hard to get this right,” Pichai said.

Lenovo Smart Display with Google Assistant

Lenovo and at least two other companies will ship smart displays with the Google Assistant in July, potentially gaining a lead over Amazon.

A smarter Android moves into beta

Android P, the next generation of Google’s 10-year-old mobile operating system, will have AI at its core, the company promised. Developers will initially access the features through the new ML Kit, a set of five APIs that includes image recognition and face detection.

Google is testing AI capabilities in Android P that extend a handset’s battery life by as much as 30 percent by anticipating usage patterns. It is also applying machine learning to automate display-brightness settings.

Android P also supports a new interface and navigation gestures. It is now in public beta and initially will be used in handsets from eight smartphone makers including OnePlus, Oppo/Vivo, Sony and Xiaomi.

Separately, Pichai announced an initiative on what he called digital wellbeing. “People feel tethered to their devices, increasing pressure to respond right away and stay updated,” he said.

A host of software features aim to encourage users to spend less time staring at handsets. They include fewer, smarter notifications and more ways to quiet or silence phones.

Google also announced a redesign of its Google News aggregation service, coming next week. “News is core to our mission, and we have a tremendous responsibility to support quality journalism,” Pichai said.

Google Maps with Google Assistant

Google Maps will support the Assistant, computer vision on smartphone cameras and maybe even AR to take navigation beyond today’s blue dot.

— Rick Merritt, Silicon Valley Bureau Chief, EE Times
