AI takes centre stage in your daily life
From a Gmail that writes itself to an Assistant that books a restaurant, Google I/O brought a ton of excitement to Google's products, almost all powered by its investments in AI.
Google held its annual developer conference, Google I/O. The updates came thick and fast: upcoming mobile devices, new products and services. But the star of the show was Google Assistant, which made two phone calls: one to book a hair appointment for Lisa, and a second to book a table at a restaurant. In both, Google Assistant demonstrated intelligence in conversation, responding to natural language and even reframing things to drive an outcome. It's the most advanced AI we've seen in any of the voice platforms to date, years ahead of its peers from Microsoft and Amazon, and it opens an exciting new chapter, particularly when you think about what this means for the future of communications.
CEO Sundar Pichai kicked things off by discussing how AI can help everyone, especially by improving healthcare diagnoses, predictions, and accessibility. But we just wanted to see more cool demos.
In the onstage demos, Google Assistant called (yes, by phone) a hairdresser and booked an appointment for between 10am and 2pm on a Tuesday. To me, however, the most exciting part went unsaid: Google Assistant could potentially also answer the call.
So, in theory, Google Assistant could call Google Assistant, allowing the entire interaction between both parties to be driven by AI, to the point where that booking could appear as an appointment in the salon's G Suite calendar, all negotiated by Google Assistant and plugged directly into a scheduling or booking system.
Taking that one step further, why even keep the voice component in between? This could just become an API call negotiating between two parties: setting a meeting, booking an appointment or a restaurant table, perhaps even handling basic negotiation over the terms of an agreement.
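To make the idea concrete, here is a minimal sketch of what such an agent-to-agent negotiation might look like as a plain API exchange, with no voice in between. Everything here (`BookingAgent`, `propose`, `negotiate`) is hypothetical and invented for illustration; real agents would sit behind services like Google Assistant and a salon's booking system.

```python
from dataclasses import dataclass

# Hypothetical sketch: two software agents negotiate a booking slot
# directly, no phone call required. All names are invented for
# illustration; this is not a real Google Assistant API.

@dataclass
class BookingAgent:
    name: str
    free_slots: set  # hours (24h clock) this party can accept

    def propose(self, window):
        """Return acceptable slots within the requested window, earliest first."""
        return sorted(h for h in self.free_slots if window[0] <= h <= window[1])

    def accept(self, slot):
        """Accept a proposed slot if still free, marking it as taken."""
        if slot in self.free_slots:
            self.free_slots.remove(slot)
            return True
        return False

def negotiate(client, business, window):
    """Return the first slot both parties accept, or None if none overlap."""
    for slot in client.propose(window):
        if business.accept(slot):
            return slot
    return None

customer = BookingAgent("Lisa's Assistant", {10, 11, 13})
salon = BookingAgent("Salon booking system", {11, 12, 14})

# Lisa asked for "between 10am and 2pm on Tuesday"
slot = negotiate(customer, salon, window=(10, 14))
print(slot)  # the first hour both calendars have free
```

The negotiation loop is deliberately simple (earliest mutual slot wins), but it shows the shape of the exchange: each side exposes availability, and agreement becomes a deterministic handshake rather than a conversation.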
The possibilities of this are endless. It will be quite exciting to see what people (mainly developers) do with this in the coming years, particularly as Google Assistant plugs naturally into the Android ecosystem.
As a natural part of your device or OS, it can quickly leverage Google's developments in laptops and tablets, and even spread to different ecosystems like cars, TVs, and smart homes.
There was even an announcement at I/O of a new Google dongle, much like the original Chromecast, targeted at developers with the idea of building an operating system that people can run on their Smart TVs. You could have all the functionality of AI-powered Android on a TV, and you know Google Assistant would be there.
Android Auto is rolling out at a larger scale; all of a sudden the Google ecosystem will be everywhere, with Google Assistant right there.
In time we will look back on this as an acceleration point in the use of voice and natural-language interfaces, one that opens the door to stronger links between our physical and digital selves.
You might think about the risks and downsides, mainly from a social perspective, but for most users and businesses this means being empowered to entirely new levels of productivity.
Personally, I'm looking forward to the Android P update. I won't be trying the public beta; I'll wait for the public release and grab myself a Pixel 2 to go with my Google Home devices.
I can't wait to have a client who will let me apply it to their organisation.