Artificial intelligence (AI) is changing the way we work and live. At CloudFactory, our customers use data to train and optimize the algorithms that power AI products and services. We’ve worked on everything from AI for self-driving cars to political campaigns. That work has shown us that the power of AI is limited only by the imagination of the humans who design these systems.

Although the entire AI space is booming right now, we’ve seen significant advancement in the area of natural language processing (NLP) for virtual assistants. Sure, virtual assistants are nothing new. They’ve been around for years now, with apps like Apple’s Siri and Microsoft’s Cortana.

However, their range of capabilities has been limited and far from perfect. Siri is one of the earliest personal-assistant technologies, yet bugs and shortcomings persist. For instance, Siri can tell you the weather forecast and the results of last night’s baseball game. It can play a song for you on request or add a reminder to your calendar. Yet, when it comes to searching for nearby restaurants or theater showtimes, Siri can get a little confused.

Intelligent virtual assistants are in demand

Although AI-powered virtual assistants have limited capabilities in comparison to a well-trained human counterpart, they are nevertheless impressive, in-demand, and growing more useful. More enterprises are using virtual assistants to increase productivity and streamline processes. They can perform an array of tasks, from communicating with customers via chatbots to automating routine tasks like scheduling meetings and editing content.

In fact, some predict the market for intelligent virtual assistants will be worth over $12.28 billion by 2024. Gartner estimates that by 2019, as much as 20% of all human interactions with smartphones will be conducted via virtual assistant.

Also, if the recent activities of top tech companies are any indication, virtual assistant technologies may in fact live up to their potential. Several huge companies, including Facebook, Google, Apple, Microsoft, and Amazon, have invested in NLP and other techniques to create intelligent virtual assistants and launch their own products.

Who’s leading the virtual assistant race?

In the race to develop the most useful virtual assistant, there are some clear leaders. Google has Assistant. Microsoft has Cortana. Amazon has Alexa, and Apple has Siri. However, there are also many promising AI startups in the space.

For instance, x.ai has developed Amy, a virtual assistant that schedules meetings for you. Amy integrates with your email inbox, and when you use informal language, such as “Let’s grab coffee this weekend,” its natural language processing picks up on the request and adds the meeting to your calendar.
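To make that concrete, here is a minimal, hypothetical sketch of how an assistant might pull a rough scheduling intent and time frame out of an informal message using simple keyword matching. It is not x.ai’s actual system; production assistants rely on trained statistical or neural language models rather than hand-written rules like these.

```python
# Toy example only: real assistants use trained NLP models,
# not hand-written rules like these.
SCHEDULING_CUES = ("let's grab", "let's meet", "catch up", "schedule")
TIME_HINTS = {"this weekend": "weekend", "tomorrow": "tomorrow", "next week": "next week"}

def parse_informal_request(message: str):
    """Return a rough (intent, time_hint) pair for a chatty message."""
    text = message.lower()
    intent = "schedule_meeting" if any(cue in text for cue in SCHEDULING_CUES) else None
    time_hint = next((hint for phrase, hint in TIME_HINTS.items() if phrase in text), None)
    return intent, time_hint

print(parse_informal_request("Let's grab coffee this weekend"))
# ('schedule_meeting', 'weekend')
```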

As for who is winning, it’s too early to tell. Each virtual-assistant system has strengths and weaknesses, but none can perform an all-encompassing range of tasks at a level that rivals a human. Virtual assistants can answer questions and complete basic tasks, but they can’t hold a conversation.

Intelligent virtual assistants rely on humans

For now, humans are the key to developing the datasets and algorithms required to train these systems so they can mimic human intelligence and “learn” how to answer questions and solve problems. Facebook and Microsoft have already taken strides to pair human and artificial intelligence to create more powerful virtual assistants.

When Facebook first launched M, the virtual assistant companion to its Messenger texting app, it paired the software with human agents. The program, available only to an exclusive group of beta testers, interjected into chat threads and offered suggestions based on the conversation. (Editor's Note: In January 2018, Facebook cancelled its M project. In a statement, the company said, "We're taking these useful insights to power other AI projects at Facebook.")

Microsoft’s Project Mélange is one of several initiatives to make AI technology more human. The project’s research focuses on multilingual speech and code-mixing, the blending of two or more languages within a single conversation, so that virtual assistants can switch between languages easily. The efforts could improve how the technology perceives regional accents and nuances in language when communicating with humans.
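As a toy illustration of the code-mixing problem (and not Microsoft’s approach), the sketch below tags each word of a code-mixed Hindi-English sentence with a language using tiny hand-made word lists. Production systems learn these decisions from large annotated corpora.

```python
# Toy word-level language ID for code-mixed text.
# The word lists are illustrative; real systems learn this
# from large annotated corpora.
HINDI_WORDS = {"kya", "hai", "kal", "milte", "hain"}
ENGLISH_WORDS = {"meeting", "schedule", "tomorrow", "plan", "call"}

def tag_languages(sentence: str):
    """Label each token as Hindi ('hi'), English ('en'), or unknown."""
    tags = []
    for token in sentence.lower().split():
        if token in HINDI_WORDS:
            tags.append((token, "hi"))
        elif token in ENGLISH_WORDS:
            tags.append((token, "en"))
        else:
            tags.append((token, "unk"))
    return tags

print(tag_languages("kal meeting hai kya"))
# [('kal', 'hi'), ('meeting', 'en'), ('hai', 'hi'), ('kya', 'hi')]
```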

Other companies have focused on making responses sound more human. Amazon developers recently upgraded Alexa’s Speech Synthesis Markup Language (SSML) tags so its speech patterns are less monotone and robotic and more lifelike and conversational. For example, the updated tags allow Alexa to pause, whisper, and vary the pace and volume of its speech. Some NLP ventures are hiring writers, poets, and other creatives to help make machines sound more alive and human.
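For a flavor of what that markup looks like, here is a short example assembled as a Python string. The tag names (a break, the whispered effect, and prosody controls) come from Alexa’s public SSML reference; everything around them is just a sketch, not code from Amazon.

```python
# A short SSML snippet showing the kinds of controls described above:
# a pause, a whispered phrase, and slower, softer speech.
# The tag names follow Alexa's public SSML reference; the Python
# wrapper is only here to keep the example self-contained.
ssml = """\
<speak>
    Here is your reminder.
    <break time="500ms"/>
    <amazon:effect name="whispered">Don't forget the cake.</amazon:effect>
    <prosody rate="slow" volume="soft">See you at seven.</prosody>
</speak>"""

print(ssml)
```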

What’s next?

Despite recent advances, inserting humanity into machines remains no simple feat. Natural language processing will be essential for understanding and improving voice-enabled commands, and it will require more than just data, coding, and better voiceovers. It will need humans behind the scenes to provide the data that trains and optimizes the algorithms that power these systems.

So far, these virtual assistants are great at a handful of tasks. They can help you schedule meetings, browse the web for answers to your questions, and call up a friend, for example. However, their range of tasks is narrow in comparison to a trained human’s. In the future, the goal will be to advance the technology so that a single virtual assistant can not only handle a wide array of tasks, but do so in a way that feels surprisingly human.
