
Learning about Learning and the Natural vs. Artificial Intelligence Debate


Last month, the Radisson Blu Edwardian hotel chain in London launched Edward, a new chatbot in the messaging channel. Edward gives guests a virtual host who can handle routine questions and route service requests to the appropriate staff departments, freeing the front desk for interactions that truly require a human touch.

One of the unique things about Edward is that he is the first chatbot to leverage Aspect NLU, a linguistics-based natural language understanding engine that syntactically and semantically analyzes a sentence to extract the meaning and intent of incoming freeform text. This contrasts with something like the early Facebook Messenger chatbot Assist, which provides an aggregated interface for multiple tasks but requires you to choose items from menus by number or with one-letter responses from a fixed selection. At the time of this writing, Edward classifies natural language text into one of 170 different topics and 6 different intents (asking a question, requesting a service, etc.) – nearly 200 use cases – in order to serve guests’ needs. And he has done this so far without leveraging machine learning.
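Aspect NLU itself is proprietary and its internals are not public, but a toy sketch can illustrate the general idea of a linguistics-based approach: parse the utterance, read the intent off its surface syntax, and read the topic off a hand-maintained lexicon. The sketch below uses spaCy as a stand-in parser (it assumes the small English model is installed), and the topic lexicon, intent rules, and example utterances are all hypothetical.

```python
# Toy illustration only: Aspect NLU's internals are not public, so spaCy stands
# in for the syntactic analysis and a tiny hand-written lexicon stands in for
# the topic model. Assumes `pip install spacy` plus the en_core_web_sm model.
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical topic lexicon -- a real system would cover hundreds of topics.
TOPICS = {
    "towel": "housekeeping",
    "pillow": "housekeeping",
    "wifi": "connectivity",
    "checkout": "front_desk",
}

def classify(utterance: str):
    doc = nlp(utterance)
    # Intent from surface syntax: a question mark, a wh-word, or a leading
    # modal ("Could...") reads as a question; otherwise, a service request.
    if utterance.rstrip().endswith("?") or doc[0].tag_ in ("WP", "WRB", "MD"):
        intent = "ask_question"
    else:
        intent = "request_service"
    # Topic from lemmatized content words.
    topic = next((TOPICS[t.lemma_] for t in doc if t.lemma_ in TOPICS), "unknown")
    return intent, topic

print(classify("What time is checkout?"))                # ('ask_question', 'front_desk')
print(classify("I need two more pillows in room 212."))  # ('request_service', 'housekeeping')
```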

Machine Learning vs. Linguistics-Based Approaches

Machine Learning is a powerful tool for classification tasks, and in the era of Big Data, it has provided useful and insightful tools for many problems. In a typical supervised task, machine learning is used to consume thousands of examples and their classifications in order to create a model consistent with those examples, which can then be used to classify previously unseen data. A binary classifier simply needs to answer a single question: does an entity belong to, or not belong to, a class? An example of this is the spam filter on your email inbox, working to classify whether or not a given email is spam. Many such filters start with a basic model and continue to learn as you confirm or reject their classifications.
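As a rough sketch of that supervised pattern (not any particular vendor's filter), the scikit-learn snippet below trains a naive Bayes classifier on a handful of hand-labeled messages and then classifies unseen ones; a real spam filter would learn from thousands of examples and keep updating as users confirm or reject its decisions.

```python
# Minimal sketch of binary text classification with scikit-learn.
# The tiny hand-made training set is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "win a free prize now", "cheap meds limited offer",          # spam
    "meeting moved to 3pm", "here are the quarterly numbers",    # not spam
]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)

model = MultinomialNB().fit(X, train_labels)

# Classify previously unseen messages.
new = vectorizer.transform(["free prize offer", "see you at the meeting"])
print(model.predict(new))  # e.g. [1 0]
```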

A harder task is multi-class classification, where an item must be sorted into one of several competing categories. The challenge in these problems is to provide enough training data to illustrate the distinctive features of each of those categories. IBM’s GWYN framework, powered by Watson and deployed in the new 1-800-Flowers.com digital ordering system, asks multiple questions in order to suggest possible flowers from the catalog. Users’ natural language answers steer the eventual selection because the bot uses a model trained on a corpus of flower-ordering text to narrow the catalog down to a reduced set of possibilities. Relevant text to train on, however, can be difficult to come by; for 200 use cases in the hospitality domain, unless all interaction between the front desk and guests over the past few years has been recorded, transcribed, and hand-tagged across hundreds of man-hours, that data could be nearly impossible to acquire. Edward’s natural language capabilities, on the other hand, are a generalized model tuned to this domain in a few months’ collaboration between the customer and a small development team.
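GWYN's Watson-backed model is likewise proprietary, so the following is only a toy multi-class sketch: the catalog categories and training phrases are invented, and the point is simply that every category needs its own labeled examples before a classifier can tell them apart.

```python
# Toy multi-class sketch (invented categories and phrases): each category needs
# its own labeled examples, and coverage gaps hurt exactly those categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

examples = [
    ("something romantic for an anniversary",   "roses"),
    ("a dozen red ones for valentines day",     "roses"),
    ("cheerful get-well flowers",               "sunflowers"),
    ("something bright to lift her spirits",    "sunflowers"),
    ("elegant white arrangement for a funeral", "lilies"),
    ("sympathy flowers for a memorial",         "lilies"),
]
texts, labels = zip(*examples)

vectorizer = TfidfVectorizer()
model = LogisticRegression(max_iter=1000)
model.fit(vectorizer.fit_transform(texts), labels)

query = ["bright flowers to cheer someone up"]
print(model.predict(vectorizer.transform(query)))  # likely ['sunflowers']
```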

Transparency and Control vs “Black Box”

Another interesting comparison between a prescriptive (rules-based) and a descriptive (trained) system is the question of transparency and control. Edward’s internal logic is completely inspectable and easily modifiable by stakeholders. The product of many machine learning frameworks, on the other hand, is a “black box” that does not expose an easily understood model to the observer; in the case of online learning, such as that employed by Microsoft’s chatbot Tay, the lack of direct control can have unintended but far-reaching consequences.
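For illustration only (this is not Aspect's actual rule format), the contrast might look like this: a rules-based system's logic is plain data that a stakeholder can read, audit, and edit directly, whereas a trained model's equivalent logic is a matrix of learned weights.

```python
# Sketch of the transparency contrast: hypothetical routing rules a stakeholder
# can inspect and change by hand, versus opaque learned weights.
import re

ROUTING_RULES = [
    # (keywords that must appear,       topic,          department)
    ({"towel", "pillow", "blanket"},    "housekeeping", "Housekeeping"),
    ({"wifi", "password", "internet"},  "connectivity", "IT Support"),
    ({"checkout", "bill", "invoice"},   "billing",      "Front Desk"),
]

def route(utterance: str):
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    for keywords, topic, department in ROUTING_RULES:
        if words & keywords:
            return topic, department
    return "unknown", "Front Desk"  # safe default a human can review

print(route("Can I get an extra towel?"))  # ('housekeeping', 'Housekeeping')
# A trained model's equivalent "logic" is a weight matrix, which is far harder
# to inspect or correct by hand.
```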

The Right Tool for the Right Job

Routinely tested against a corpus of more than 28,000 domain-appropriate utterances, ranging from fully inflected sentences to shorthand phrases, Edward achieves perfect scores. Data collection and analysis with live users are ongoing. The takeaway is the old maxim about using the right tool for the right job: machine learning brings powerful data-driven insights to tasks where data is plentiful and hand-built models will not scale, while top-down models may still rule specialized spaces. Different approaches have different strengths, and leveraging them in cooperation is the best way to approach these complex tasks.
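The 28,000-utterance corpus is not public, but the shape of that kind of regression test is simple: run every labeled utterance through the classifier, report accuracy, and keep the misses for review. The classifier and utterances below are hypothetical stand-ins.

```python
# Sketch of a regression test over a labeled utterance corpus; the classifier
# and utterances here are hypothetical stand-ins.
def evaluate(classify, labeled_corpus):
    """Return overall accuracy plus the utterances the classifier got wrong."""
    misses = [(text, expected, classify(text))
              for text, expected in labeled_corpus
              if classify(text) != expected]
    accuracy = 1 - len(misses) / len(labeled_corpus)
    return accuracy, misses

def keyword_classifier(text):
    # Trivial stand-in: "need" signals a service request, anything else a question.
    return "request_service" if "need" in text.lower() else "ask_question"

corpus = [
    ("I need two more pillows, please.", "request_service"),
    ("What time is checkout tomorrow?",  "ask_question"),
    ("Need the wifi password",           "request_service"),
]

accuracy, misses = evaluate(keyword_classifier, corpus)
print(f"accuracy: {accuracy:.0%}, misses: {misses}")
```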


