The recent interest in chatbot development has led to a variety of natural language processing (NLP) services, like Microsoft’s LUIS, Google’s Dialogflow, Facebook’s wit.ai, and IBM’s Watson, to name a few. These services all rely on identifying intents and entities when processing user input. The intent can be thought of as what the user wants to do or the information the user wants to convey; entities are the specific things associated with that intent. For example, a user saying, “I want to schedule a bath for my puppy,” might be assigned the intent “schedule pet service,” with the entities “bath” and “puppy” identified as the specific service and pet type, respectively. This model works well for a chatbot that controls a system, like home lighting or a security system, or for one that handles a small set of queries, like FAQs or customer support. But we found it increasingly challenging to apply to our survey chatbot.
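To make the intent-entity model concrete, here is a minimal sketch of the kind of structured result such a service might return for the puppy-bath utterance. The field names and confidence score are illustrative, not any specific vendor’s schema:

```python
# Hypothetical parse of the example utterance, in the general shape an
# intent-entity NLU service might return (field names are illustrative).
parse = {
    "text": "I want to schedule a bath for my puppy",
    "intent": {"name": "schedule_pet_service", "confidence": 0.92},
    "entities": [
        {"type": "service", "value": "bath"},
        {"type": "pet_type", "value": "puppy"},
    ],
}

def slot(parse, entity_type):
    """Return the first entity value of a given type, or None if absent."""
    for ent in parse["entities"]:
        if ent["type"] == entity_type:
            return ent["value"]
    return None

print(slot(parse, "service"))   # bath
print(slot(parse, "pet_type"))  # puppy
```

A downstream handler for the “schedule pet service” intent would then read the service and pet-type slots and act on them.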
Imagine the following hypothetical chatbot exchange.
Bot: Let me know if there is something we can do to make your experience better.
User: I’d like it if the store was laid out better and your employees smiled more.
What would be the proper intent to assign to the user’s response? She is providing feedback on both the physical store and the employees. Does the user even express an intent when providing this type of feedback? By using multiple intent-entity applications, e.g., one for store improvements and one for employee improvements, we were able to produce a workable way to analyze this response, but it felt like we weren’t applying the right tool for the job.
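The multi-application workaround amounts to running the same free-text response through several topic-specific classifiers and keeping every topic that clears a threshold, rather than forcing a single intent. A minimal sketch, with keyword stubs standing in for the trained models (all names and scores here are illustrative):

```python
# Keyword stubs standing in for separate trained topic classifiers.
def store_classifier(text):
    return 0.9 if "store" in text.lower() else 0.1

def employee_classifier(text):
    return 0.8 if "employee" in text.lower() else 0.1

CLASSIFIERS = {
    "store_feedback": store_classifier,
    "employee_feedback": employee_classifier,
}

def classify_feedback(text, threshold=0.5):
    """Return every feedback topic whose classifier clears the threshold."""
    return sorted(topic for topic, clf in CLASSIFIERS.items()
                  if clf(text) >= threshold)

reply = "I'd like it if the store was laid out better and your employees smiled more."
print(classify_feedback(reply))  # ['employee_feedback', 'store_feedback']
```

Unlike a single intent classifier, this shape naturally yields zero, one, or several labels per response, which is what open-ended survey feedback demands.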
Our search for other tools exposed us to a variety of models developed to solve different academic NLP problems, like reading comprehension and inference. Fortunately for us, AllenNLP was introduced around this time, giving us an easy way to access some of these cutting-edge models. We also started relying more on spaCy from Explosion AI. We had been using spaCy to parse sentences, but after trying out their Prodigy product we found that we could train classification models that better fit our use case. We haven’t had to develop our own custom language models . . . yet. We’ve been able to produce good results by creatively combining input from a few different models. Don’t let yourself get locked into one tool; keep looking for ways to improve your analysis.
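One simple way to combine input from a few different models is a weighted vote over their label scores. The sketch below uses stub functions in place of real models (e.g., a spaCy text classifier and an AllenNLP model); the labels, weights, and scores are illustrative assumptions, not our production setup:

```python
# Stub scorers standing in for trained models, e.g. a spaCy textcat
# model and an AllenNLP-based model. Each returns label probabilities.
def model_a(text):
    return {"positive": 0.7, "negative": 0.3}

def model_b(text):
    return {"positive": 0.6, "negative": 0.4}

def combine(text, models, weights):
    """Weighted average of per-label scores; return the top label."""
    scores = {}
    for model, weight in zip(models, weights):
        for label, prob in model(text).items():
            scores[label] = scores.get(label, 0.0) + weight * prob
    return max(scores, key=scores.get)

print(combine("great visit", [model_a, model_b], [0.5, 0.5]))  # positive
```

The weights give you a cheap knob for trusting one model more on certain kinds of input, without retraining anything.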