Google I/O – First Summary: Making AI The Core of Everything

This year, from May 8 to May 10, Google is hosting the 10th annual Google I/O developer conference in Mountain View, CA.

As 3SS is attending the event with a small team, I want to give a short overview of the most interesting topics presented and discussed so far. Due to the number of concurrent sessions and the vast amount of information available, this overview can only cover a small part of it.

 

 

As is tradition, the first day kicked off with keynote speeches outlining the past, present and future of Google. As expected, based on last year’s keynote and Google’s activities over the past 12 months, the key message was: the future will revolve around Artificial Intelligence.

As outlined in the yearly investor letter by Google’s parent company Alphabet, Google sees AI as the fundamental force behind the current and upcoming changes in technology. “Gaining access to technology makes an impact on life” was one of the statements by CEO Sundar Pichai in his opening speech. Google wants to enable as many people as possible to have that access.

There was also an important message targeting developers: AI as a technology should be accessible to anyone, and with ML Kit, Google is providing a core product that should make using the technology nearly effortless – enabling AI capabilities not only in the cloud but also on devices, so that sensitive user data can be processed securely on the device as well.
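As a rough illustration of how little code on-device inference can require, here is a minimal sketch using ML Kit’s on-device text recognizer in a Kotlin Android app; the surrounding function is made up for illustration and the dependency and Firebase setup are omitted.

    import android.graphics.Bitmap
    import com.google.firebase.ml.vision.FirebaseVision
    import com.google.firebase.ml.vision.common.FirebaseVisionImage

    // Minimal sketch: run ML Kit's on-device text recognizer on a Bitmap.
    // Assumes the app has the firebase-ml-vision dependency and Firebase configured.
    fun recognizeTextOnDevice(bitmap: Bitmap, onResult: (String) -> Unit) {
        val image = FirebaseVisionImage.fromBitmap(bitmap)
        val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

        recognizer.processImage(image)
            .addOnSuccessListener { result ->
                // result.text holds all recognized text; no network call is needed,
                // so sensitive images never have to leave the device.
                onResult(result.text)
            }
            .addOnFailureListener { e ->
                // Handle recognition errors.
                e.printStackTrace()
            }
    }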

 

Google Assistant and Google Lens

Voice and visual input are the two areas where Google sees a shift in the interaction between devices and users, and it is consequently pushing forward with its two products Google Assistant (GA) and Google Lens.

The biggest consumer-facing application based on AI and voice is Google Assistant, and Google is planning to extend its use-cases and capabilities significantly throughout the next year. The latest announcements point to deeper integration coming in Android P, so Google Assistant is on its way to becoming the hub through which users interact with services across multiple devices. The long-term evolution might be that apps will merely be the “executors” of tasks, with Android and Assistant firmly positioned as the user’s command center for discovery and interaction.

Improvements which GA will deliver towards this include support for 30 languages by the end of 2018 and “continued conversations”, in which GA understands context and language in a dialog without requiring “Hey Google” to preface every input phrase. Together with new and improved voices and tones, this leads to a more natural conversation style.

Assistant will also get an extended visual presence, providing more information and interaction across all devices as well as on the new “smart display” devices that are being introduced.

One big feature shown was the capability of Google Assistant to make appointments with businesses that don’t have an electronic booking system by conducting a live phone call. The service is called “Google Duplex” and will essentially connect digital and non-digital capabilities – this might be a game-changer in how assistants can be of service to consumers.

 

 

Visual input and processing drive Google Lens – which has not yet reached the level of awareness and adoption of Assistant. With the advancements in AI-based image recognition and processing, this area will also see big improvements: in partnership with selected CE manufacturers, Google Lens is now being integrated directly into the camera app. New features that will be rolled out in the coming weeks include:

  • Smart text selection: copy and paste text directly from the camera image
  • Style match: find items that match a selected object within an image
  • Real-time results: show search results for an image, or parts of it, directly in the camera view

In combination with Google Assistant, Google Lens has the potential to become a very powerful method for user interaction in the future – basically there will be no need for any additional UI if the combination of speech and image input works for the user.

 

Google Maps and Google News

AI is also adding functionality to Google Maps and making it more natural to use: Maps will get a new “For You” section in which recommendations based on location and user behavior will be shown. It will also include a “Your Match” score that indicates how well Google thinks a location will fit the user – and it will explain the criteria used in making this determination.

In the future, Maps will also introduce a visual positioning system (VPS). This will use the camera image, recognized landmarks and the camera angle to determine position and direction, and will show additional information and routing directly on the camera image.

Another theme which was repeatedly picked up during the different talks was “digital well-being” and how Google wants to help improve it. For Google this means: how can technology impact everyday life in a positive way? This includes helping users find and keep the right ‘life balance’ of app usage and screen time, as well as providing trustworthy, unfiltered information.

To this end, Google is launching a completely updated Google News experience. This will bring recommendations and summaries created by AI, but also unfiltered news feeds which are the same for all users, as well as the possibility to understand the full story behind a news item by gathering information from different sources, locations and perspectives in real time. Subscribe with Google allows users to subscribe to a large number of publications through Google News. The stated goal of Google: building and supporting high-quality journalism.

 

Android P

The other main topic at any Google conference is of course: Android. And this year the question is: what’s new in Android P?

With Android P – the public beta was launched today (May 8, 2018) – Google brings many improvements around three key topics: intelligence, simplicity and (again) digital well-being.

It will introduce AI-based adaptive battery and brightness adjustments which learn from the user’s behavior and find the best settings. Android will no longer only try to predict the next application a user is going to launch, but also the next action within that app. For this, developers can use “App Actions” and “Slices” to provide deep links into actions and functions of their applications, as well as previews and snippets of the app within Search and Assistant.
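To give a feel for the developer side of Slices, below is a minimal, hypothetical sketch of a Slice provider built with the androidx Slice builders; the class name, deep link, icon and row content are invented for illustration, and the exact builder API may differ from the final release.

    import android.app.PendingIntent
    import android.content.Intent
    import android.net.Uri
    import androidx.core.graphics.drawable.IconCompat
    import androidx.slice.Slice
    import androidx.slice.SliceProvider
    import androidx.slice.builders.ListBuilder
    import androidx.slice.builders.SliceAction

    // Hypothetical Slice provider exposing a "continue watching" shortcut
    // that Search and Assistant could surface as a preview of the app.
    class VideoSliceProvider : SliceProvider() {

        override fun onCreateSliceProvider(): Boolean = true

        override fun onBindSlice(sliceUri: Uri): Slice? {
            val context = context ?: return null

            // Tapping the slice deep-links into the player screen of the app.
            val openPlayer = SliceAction.create(
                PendingIntent.getActivity(
                    context, 0,
                    Intent(Intent.ACTION_VIEW, Uri.parse("https://example.com/watch/42")),
                    0
                ),
                IconCompat.createWithResource(context, R.drawable.ic_play),
                ListBuilder.ICON_IMAGE,
                "Continue watching"
            )

            return ListBuilder(context, sliceUri, ListBuilder.INFINITY)
                .addRow(
                    ListBuilder.RowBuilder()
                        .setTitle("Continue watching: Episode 4")
                        .setPrimaryAction(openPlayer)
                )
                .build()
        }
    }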

UX improvements will include better handling of screen rotation and volume control.

The new UI will follow the idea of “gestures over buttons” for the home screen and allow the user to more easily access the relevant functionalities.

 

 

Google has also rolled out a new version of Material Design, providing a number of new tools, an updated design and easier branding possibilities, and most notably: all Material components are now available as open source for use in apps.

To help users manage and control the time they spend in apps and on their mobiles in general, Android P will provide a dashboard with usage data. Even more importantly, it will allow consumers to set time limits for app usage and enable an improved “do not disturb” mode that will mute all notifications (including visual cues, vibrations etc.).

Many more changes were presented in the developer keynote later in the day and in the general keynote: multiple improvements and additions to Jetpack will make development easier; app binaries will become smaller through dynamic loading of libraries; and components and services around Firebase will allow better monitoring, adjustment and engagement with users.

Perhaps the most important news for anyone providing apps: starting in August 2018, all new apps will have to target at least API level 26, and updates to existing apps must follow by the end of 2018!
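In practice this mainly means raising the target SDK in the app module’s Gradle configuration; the snippet below is a small sketch in the Gradle Kotlin DSL with placeholder package name and version numbers.

    // build.gradle.kts (app module) – placeholder values for illustration
    android {
        compileSdkVersion(28)                  // compile against the latest SDK
        defaultConfig {
            applicationId = "com.example.app"  // hypothetical package name
            minSdkVersion(21)                  // example minimum supported version
            targetSdkVersion(26)               // required target for new apps from August 2018
            versionCode = 1
            versionName = "1.0"
        }
    }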

 

 

Besides the topics mentioned above, many other items were covered in the day-one keynotes: new capabilities and services in Google Cloud, more use-cases for AI in Google’s products and Android, more Assistant use-cases, more Android P details. And there were a lot of updates on TensorFlow, ML Kit, ARCore and much more – too much information to detail here.

 

So – what’s the conclusion?

Google is clearly continuing to make AI the core of all its products and services and has taken big steps in providing it on devices, not only in the cloud. Additionally, the practical use-cases and integration details coming with Android P are increasing. With the new releases of tools and services it will become even easier for developers to make use of the technology. App development and distribution will also become more and more streamlined, allowing developers to focus on building a great user experience while worrying less about the technology behind it all. Eventually this will enable more people than ever to put their ideas into reality.

 

Author: Stefan Blickensdörfer, Technical Director at 3SS, www.3ss.tv

 

The full video coverage of Google I/O can be found at:
https://www.youtube.com/playlist?list=PLOU2XLYxmsIInFRc3M44HUTQc3b_YJ4-Y
