AI Under the Hood: Interactions

Interactions provides Intelligent Virtual Assistants that combine conversational AI with human understanding, enabling businesses to engage their customers in highly productive and satisfying conversations. With flexible products and solutions designed to meet the growing demand for unified, optichannel customer care, Interactions delivers significant improvements in customer experience and substantial cost savings for some of the largest brands in the world.

The company recently launched Trustera, a real-time, audio-sensitive redaction platform. Trustera preemptively identifies and protects sensitive information like credit card numbers and solves the biggest compliance challenge in today’s contact-center environment: protecting a customer’s Payment Card Information (PCI) anywhere it appears during a call. The platform is designed to make the customer experience more trustworthy, secure and seamless.

The platform is built on nearly 20 years of Interactions’ Intelligent Virtual Assistant (IVA) excellence, 125 patents, billions of conversations and years of success at Fortune 25 companies. Leveraging speech recognition and advanced machine learning, Trustera recognizes sensitive data within 200 milliseconds of it being spoken and immediately redacts it. This capability is especially critical given that 44% of data breaches involve payment card information (PCI) or personally identifiable information (PII).
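To make the redaction idea concrete, here is a minimal, self-contained sketch (not Trustera's implementation, which the article does not detail) that masks card-like digit runs in a transcript. It uses the standard Luhn checksum to filter out ordinary numbers; the regex pattern and the `[REDACTED]` mask string are illustrative assumptions:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum commonly used to validate candidate card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-16 digits, optionally separated by single spaces or dashes.
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def redact_pci(transcript: str) -> str:
    """Replace card-like digit runs that pass the Luhn check with a mask."""
    def mask(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group())
        return "[REDACTED]" if luhn_valid(digits) else match.group()
    return CARD_PATTERN.sub(mask, transcript)
```

Because the Luhn check gates the substitution, a valid test number like `4111 1111 1111 1111` is masked while an arbitrary 16-digit run passes through unchanged; a production system would of course work on a streaming audio transcript rather than a completed string.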

“Every day, millions of customers give their personal information to the companies they do business with—yet, there are no real safeguards in place to protect that information. We built Trustera to fix this unacceptable status quo,” said Mike Iacobucci, CEO of Interactions. “Trustera is ushering in a new, much-needed standard for contact center security. It’s the only solution on the market that prevents fraud at the source for both companies and consumers, bolstering brand loyalty and customer trust in the process.”

Mahnoosh Mehrabani, Ph.D.

We asked our friends over at Interactions to do a deep dive into their technology. Mahnoosh Mehrabani, Ph.D., Sr. Principal Scientist at Interactions, shared some fascinating detail about how Interactions’ Intelligent Virtual Assistants (IVAs) pair speech recognition and advanced machine learning with natural language understanding (NLU) models. The company uses these NLU models to help some of today’s largest brands understand customer speech and respond appropriately.

Today, the best NLU models rely on deep neural networks (DNN). The billions of parameters powering these highly accurate state-of-the-art NLU models are trained using gigantic volumes of data that produce semantic outputs such as intent or sentiment. While these systems are incredibly effective, they require expensive, and often unsustainable, amounts of supervised data. In contrast, few-shot learning, which is a new generation of scalable machine learning methods, produces NLU models of comparable quality without the dependence on large datasets.

Mahnoosh has prepared extensive PowerPoint slides outlining the technical details of existing few-shot learning methods and highlighting their potential applications for rapid NLU model development. In her slides, she also outlines the drawbacks of current methods and future research directions, providing the technical detail behind few-shot learning as an emerging technology for delivering better experiences to conversational AI end users.

When you request “representative” at a customer service line and get directed to a live agent, you probably have NLU to thank. NLU is a crucial piece of conversational AI that transforms human language, whether typed or spoken, into semantic information that machines can act on. Interactions, a leading provider of Intelligent Virtual Assistants (IVAs), leverages advanced NLU models to help some of the largest multinational brands understand customer speech and deliver an unparalleled user experience.
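At its simplest, intent classification maps an utterance to a semantic label such as "route this caller to an agent." The toy router below is a deliberately simplified stand-in for the DNN-based models the article describes; the intent names and trigger words are invented for illustration:

```python
# Hypothetical intent labels and trigger words, for illustration only.
INTENT_KEYWORDS = {
    "transfer_to_agent": {"representative", "agent", "human"},
    "check_balance": {"balance", "owe"},
    "make_payment": {"pay", "payment"},
}

def classify_intent(utterance: str) -> str:
    """Return the intent whose trigger words overlap the utterance most."""
    tokens = set(utterance.lower().split())
    best, best_overlap = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        overlap = len(tokens & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best
```

Saying "I want to speak to a representative" lands on `transfer_to_agent`. Keyword overlap breaks down quickly on paraphrases ("get me a person"), which is exactly why production systems train DNN classifiers on large volumes of labeled utterances instead.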


Through the years, Interactions has leveraged DNN-based NLU technology using large volumes of contact-center-specific speech data, tagged with customized, enterprise-driven intents through a unique human-assisted understanding process. While these systems are incredibly effective, they require expensive, and often unsustainable, amounts of supervised data. In contrast, a new generation of scalable machine learning methods, few-shot learning, produces NLU models of comparable quality without depending on large datasets. These methods train on just a handful of examples, thereby broadening the use of NLU to applications in which large collections of labeled data may not be available.
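A minimal sketch of the few-shot idea, assuming nothing about Interactions' actual models: average the representations of a handful of labeled examples per intent into a class prototype, then assign a new utterance to the nearest prototype (the nearest-centroid scheme behind prototypical networks). Bag-of-words counts stand in here for the pretrained embeddings a real system would use:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use pretrained encoders."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(examples):
    """Average the vectors of a class's few labeled examples."""
    total = Counter()
    for ex in examples:
        total.update(embed(ex))
    return Counter({t: c / len(examples) for t, c in total.items()})

def few_shot_classify(utterance, support):
    """support: intent -> a handful of labeled example utterances."""
    protos = {intent: centroid(exs) for intent, exs in support.items()}
    query = embed(utterance)
    return max(protos, key=lambda intent: cosine(query, protos[intent]))
```

With only two examples per intent in the support set, the classifier already separates, say, order-cancellation from order-tracking requests; the open research question the article alludes to is keeping such models from overfitting when the underlying encoder has billions of parameters and the support set stays this small.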

In the customer service industry, few-shot learning can be especially helpful for offering customers the ability to speak in their own words instead of having to navigate clunky predetermined menus or being repeatedly misunderstood. These methods can train models with comparable accuracy to large supervised data-driven models at much faster rates. Few-shot learning provides an opportunity to quickly bootstrap and customize NLU to specific applications and vertical-specific vocabulary. This unique capability helps deliver superior user experience across industries like retail, healthcare, insurance and more.

In the MLConf session below, Mahnoosh reviews some of the existing methods for few-shot learning and highlights their potential applications for rapid NLU model development. She also discusses the drawbacks of current methods and the additional research needed to ensure that a small number of training examples, used to fit a large number of parameters, does not produce overfitted models that struggle to generalize. You’ll gain an understanding of the current landscape of few-shot learning in conversational AI, as well as the shortcomings of these techniques. As NLU models and their applications grow, few-shot learning is an integral part of rapidly delivering better experiences to conversational AI end users, and Mahnoosh unveils the technical details behind this emerging technology.

Mahnoosh also passed along two recent peer-reviewed research papers that she published with her Interactions colleagues that explain some technical aspects of their Intelligent Virtual Assistant technology.

Contributed by Daniel D. Gutierrez, Managing Editor and Resident Data Scientist for insideAI News. In addition to being a tech journalist, Daniel is also a data science consultant, author, and educator, and sits on a number of advisory boards for various start-up companies.
