Mark Stewart, CTO of Mubaloo, discusses how commercial artificial intelligence platforms are becoming mainstream solutions in the digital landscape.
AI-enabled assistants and chatbots such as Amazon Alexa (built into the Amazon Echo speaker) or Google Assistant are becoming ever more popular and are predicted to become a staple in the digital user landscape this year. At the same time, companies are starting to implement chatbot technology to provide better customer service or resolve internal issues.
Whilst the terms ‘chatbot’ and ‘AI assistant’ are often used interchangeably, there are distinct differences between the two. Chatbots are rule-based assistants that operate in a linear way, responding to a query with a programmed answer. AI-enabled assistants also respond to queries, but they are capable of learning, building up information on a user as time goes on. This gives them the added advantage of being able to provide different responses depending on the scenario.
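As an illustration, the distinction can be sketched in a few lines of Python. Everything here is invented for the example (the rules, the coffee scenario, the class names); it is not any vendor's API, just a way of showing "same programmed answer every time" versus "response that depends on what the assistant has learned":

```python
# Rule-based chatbot: a query matching a keyword always gets
# the same programmed answer.
RULES = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 working days.",
}

def chatbot_reply(query: str) -> str:
    """Linear, rule-based response: no memory, no learning."""
    for keyword, answer in RULES.items():
        if keyword in query.lower():
            return answer
    return "Sorry, I don't understand."

class Assistant:
    """A 'learning' assistant: it builds up a profile of the user,
    so the same query can yield different responses over time."""

    def __init__(self):
        self.profile = {}

    def remember(self, key: str, value: str) -> None:
        self.profile[key] = value

    def reply(self, query: str) -> str:
        if "coffee" in query.lower():
            usual = self.profile.get("usual_coffee")
            if usual:
                return f"Ordering your usual {usual}."
            return "What coffee would you like?"
        # Fall back to plain rule-based behaviour otherwise.
        return chatbot_reply(query)
```

Asked for a coffee twice, the rule-based bot would answer identically both times; the assistant answers "What coffee would you like?" first, then "Ordering your usual flat white." once it has learned the user's preference.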
What these two technologies have in common is that the sophistication of their performance depends on the availability of commercial service applications that the chatbots and AI-enabled assistants can tap into, i.e. connecting to providers of services and goods to complete a user’s query. Just as apps enabled mobile phones to become multifunctional devices, today’s bots and AI-enabled assistants are allowing companies to develop their own experiences that can be accessed easily within the chat environment.
We are in a place similar to a few years ago, with bots showing the same potential apps did when the App Store launched with iPhone OS 2 in 2008, with Android Market (later rebranded Google Play) following soon after. In the ensuing app era, every company was eager to launch its own app, engage its audience, and mark its presence in this new media and sales channel. Eight years on, more than 60bn apps have been downloaded. Today, commercial AI application platforms are gaining momentum at a similar speed and we’re seeing a new ecosystem of developers, service providers and third-party services evolve.
The platforms that enable their respective bots with commercial extensions, just as the App Store and Google Play do with apps, are the Alexa Voice Service (AVS) for Alexa, Actions on Google for the Google Assistant (found on the Pixel phone and on Google Home, the Amazon Echo’s equivalent) and SiriKit for Apple’s Siri. Currently, Amazon’s Alexa boasts 7,000 skills, growing by the hundreds each month, whereas Actions on Google is still in the early stages for developers, having launched a few weeks ago. Apple is taking a slightly different approach with SiriKit, whereby its latest operating system, iOS 10, enables Siri to communicate directly with apps installed on the user’s phone. Apple is also looking to build new services into Siri directly, such as taxi bookings and personal payments.
Currently, all three tech giants are racing to provide the user with the most immersive experience, and just as with the App Store and Google Play, whichever provider offers the assistant with the most resources to draw on (i.e. the most business services connected) will become the popular bot choice for consumers. The next few months will show which platform provider manages to take pole position in this exciting environment and provide the most ‘enabled’ assistant of them all by attracting businesses to its platform, thereby expanding the scope of its AI offering. Given that the assistants all focus on slightly different areas, though, a multi-brand, multi-device landscape may prevail for the time being, with users choosing the device that best suits their current needs.
Amazon is the provider set to make the most headlines for the time being, with a continuous drive towards innovation for Alexa. With hundreds of skills being added each month, Alexa could well become the most commercially-driven assistant. Amazon is also making headway in creating an ecosystem for the user with an allure similar to its original Amazon website. Apart from connecting to areas of the home such as lighting, Alexa is set to enter the user’s daily life further. Recently, Amazon announced that Alexa would be made available in Ford cars with the SYNC 3 system, meaning that drivers can access Alexa when driving (and, for example, continue to listen to an audiobook they started at home, or add items to their shopping list) and vice versa: Ford owners will in future be able to ask Alexa to lock and unlock their car remotely, as well as check other data such as the remaining charge of an electric car.
So what will the future hold for AI-enabled assistants? The prediction is that they will not only action commands, but gain some autonomy in running the user’s life. Today, assistants can book a cab, perform calculations and play a song, but the user has to ask them to do so, essentially operating the device by command. In future, AI-enabled assistants should become proactive rather than reactive, adding a new layer of sophistication to the user experience, especially as they start interacting with IoT devices in the user’s environment. This means that Amazon Alexa could soon add milk to your shopping list autonomously when a connected fridge signals that supplies are running low.
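The shift from reactive to proactive can be sketched as a simple event-driven loop: instead of waiting for a voice command, the assistant subscribes to signals from connected devices and acts on them itself. This is a hypothetical illustration of the fridge scenario above, not Alexa's actual architecture; the event names and handler wiring are all invented for the example:

```python
from collections import defaultdict

class ProactiveAssistant:
    """A toy assistant that reacts to IoT device events
    without waiting for a user command."""

    def __init__(self):
        self.shopping_list = []
        self.handlers = defaultdict(list)

    def on(self, event: str, handler) -> None:
        """Register a handler to run whenever `event` is signalled."""
        self.handlers[event].append(handler)

    def signal(self, event: str, payload: str) -> None:
        """Called by an IoT device (e.g. a connected fridge) when
        something changes; the assistant acts autonomously."""
        for handler in self.handlers[event]:
            handler(payload)

assistant = ProactiveAssistant()
assistant.on("stock_low", lambda item: assistant.shopping_list.append(item))

# The connected fridge reports that milk is running low; no user
# command is involved, yet the shopping list is updated.
assistant.signal("stock_low", "milk")
```

The design choice worth noting is that the fridge never tells the assistant *what to do*, only *what happened*; the decision to add milk to the list lives in the assistant, which is what makes the behaviour proactive rather than command-driven.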
We are entering a very exciting period in which the assistants of the present and future can execute tasks, request services and provide assistance tailored to the user’s needs. It will be interesting to see how the platforms themselves measure success: since app downloads will become a thing of the past, new benchmarks for measuring commercial success will have to be set.