Chatbots and voice assistants play a big role in simplifying communication with our devices. It is human nature to communicate by voice, and with the expansion of Artificial Intelligence, particularly Natural Language Processing, engineers can create chatbots that understand human language, as well as the context of a conversation, and give relevant responses. In this article, I will show you how easy it is to train an NLP-based chatbot and integrate it into your application.
There are plenty of branches of Artificial Intelligence, but in this article we will cover Natural Language Processing exclusively, because it is crucial for intelligent chatbots and dramatically improves communication with them.
By definition:
Natural Language Processing is the ability of a computer to process the same language - spoken or written - that humans use in normal discourse.
A Conversational User Interface (CUI) is a digital interface where conversation is the main tool for interacting with the user. A CUI is more social and natural insofar as the user “messages”, “asks”, “agrees”, or “disagrees” instead of “navigates” or “browses.”
So why do we need Natural Language Processing in chatbots? To put it simply, NLP makes communication easier and more natural for the user, and therefore it attracts more people and makes technology more useful and attractive for businesses and for users in general. The role that NLP plays in chatbot implementation is shown in the picture below.
The basic type of chatbot is a menu/button-based chatbot. This chatbot is based on decision-tree hierarchies presented to the user in the form of buttons. The user is required to make several selections, choosing the buttons that lead to the answer. This type of chatbot is good for answering frequently asked questions and is the easiest to implement, but when scenarios become more complex, with more variables and less predictable user reactions, these chatbots usually fail.
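The decision-tree idea behind a menu/button-based chatbot can be sketched in a few lines of JavaScript. The menu labels and answers below are invented purely for illustration:

```javascript
// A menu/button-based chatbot is just a tree: each node either offers
// more buttons (children under "options") or holds a final "answer".
const menuTree = {
  question: 'What do you need help with?',
  options: {
    'Orders': {
      question: 'What about your order?',
      options: {
        'Delivery time': { answer: 'Delivery usually takes 3-5 days.' },
        'Returns': { answer: 'You can return items within 30 days.' }
      }
    },
    'Payments': { answer: 'We accept cards and PayPal.' }
  }
};

// Walk the tree following the buttons the user clicked.
// Returns null for a path the designer didn't predict - exactly
// the failure mode described above.
function resolve(node, selections) {
  for (const choice of selections) {
    if (!node.options || !node.options[choice]) return null;
    node = node.options[choice];
  }
  return node.answer || node.question;
}

console.log(resolve(menuTree, ['Orders', 'Delivery time']));
// -> 'Delivery usually takes 3-5 days.'
```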
A more advanced type is the keyword-recognition-based chatbot. Its main difference is that it can listen to what users type, recognize keywords, and give an appropriate response based on this input.
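A keyword-recognition bot can be approximated with a simple lookup. The keywords and replies here are invented for illustration:

```javascript
// Map keywords to canned responses; scan the user's text for any of them.
const keywordResponses = [
  { keywords: ['price', 'cost'], response: 'Our plans start at $10/month.' },
  { keywords: ['refund', 'return'], response: 'Refunds are processed within 5 days.' }
];

function reply(text) {
  const lower = text.toLowerCase();
  for (const rule of keywordResponses) {
    // First rule whose keyword appears anywhere in the text wins.
    if (rule.keywords.some(k => lower.includes(k))) return rule.response;
  }
  return "Sorry, I didn't understand that."; // fallback when no keyword matches
}

console.log(reply('How much does it cost?'));
// -> 'Our plans start at $10/month.'
```

Unlike a real NLP model, this matcher has no notion of context or synonyms it wasn't told about, which is exactly the gap contextual chatbots fill.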
The most advanced type of chatbot these days is the contextual chatbot. In addition to Natural Language Processing, contextual chatbots also utilize another branch of Artificial Intelligence known as Machine Learning (ML). ML helps the bot remember conversations with specific users, learn from them, gather data, and improve automatically over time. This dramatically improves communication with a chatbot, as it shortens conversations and makes interaction easier once the bot starts to understand users’ needs.
Several tools for working with Artificial Intelligence, including NLP, are shipped as services. Most of them offer APIs for integration into your application and some kind of tool, usually a UI, to create, train, and manage your NLP instance.
The most well-known are:
All of these solutions would be suitable, but in this article, I would like to concentrate on IBM Watson, because it has a free lite plan, a user-friendly interface, and a lot of SDKs. For me, it also appeared more straightforward and easy to start with.
Now let's see how we can train and integrate the assistant. First of all, have a look at the high-level architecture of any system which works with IBM Watson.
As you can see, you can integrate it with popular messaging platforms or any other application, since IBM has SDKs for many programming languages (including JavaScript). Your client connects to Watson Assistant, an IBM Cloud service, which in turn connects to other IBM services to provide a set of tools for chatbot creation. As a developer, you only need to care about connecting to Watson Assistant and calling its endpoints. This process will be described in the next section. For now, let's concentrate on creating a chatbot instance and training it.
Before we begin, I will introduce you to the key components of an IBM NLP chatbot: intents, entities, and dialogs.
The names are more or less self-explanatory, but to make things clear here are a few examples of intents:
You are the one who defines the intents, so you can call them whatever you want, but you will have to give examples to Watson so it can learn which questions should be associated with which intents. For example, the chatbot can learn that the question "Can I buy a smartphone?" leads to the intent "buy_a_smartphone". The idea is that you can give more than one example, and once you give enough of them, the chatbot will automatically learn to recognize other questions that express the same intent.
The entities are a bit simpler. The examples of entities are: laptop, weather, monitor, Ikea.
You can teach the chatbot to recognize synonyms of "laptop" as a laptop, and if someone mistypes a particular word, Watson should understand it anyway and map it to the correct entity.
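For reference, this is roughly the shape that intents and entities take when you export a Watson Assistant skill as JSON. The intent name, example phrases, entity values, and synonyms below are invented, and the exact schema may vary between Watson versions:

```javascript
// Sketch of a Watson Assistant skill fragment (illustrative, not exported from a real skill).
const skillFragment = {
  intents: [
    {
      intent: 'buy_a_smartphone',
      // Each example phrase teaches Watson another way to express this intent.
      examples: [
        { text: 'Can I buy a smartphone?' },
        { text: 'I would like to purchase a new phone' }
      ]
    }
  ],
  entities: [
    {
      entity: 'product',
      values: [
        // Synonyms (including a deliberate typo) all map back to "laptop".
        { value: 'laptop', synonyms: ['notebook', 'laptp', 'ultrabook'] }
      ]
    }
  ]
};
```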
The third key component is the dialog. The dialog defines the way the chatbot should behave: how it should respond when it recognizes a particular intent and/or entity, and which actions it should take after it receives user input.
Here are the steps you should take to create an instance of Watson assistant:
The definition of a node looks like this: if the assistant recognizes (your conditions), the assistant responds with (some response), and then it should either wait for a reply or jump to another specified node.
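In the exported skill JSON, a dialog node follows that same if/respond/then pattern. The node names and text below are invented, and the exact field names are an assumption based on Watson's skill export format:

```javascript
// Sketch of a dialog node (illustrative; field names assumed from Watson's skill JSON).
const dialogNode = {
  dialog_node: 'buy_smartphone_node',   // node id (invented)
  conditions: '#buy_a_smartphone',      // "if the assistant recognizes" this intent
  output: {
    generic: [
      {
        response_type: 'text',          // "the assistant responds" with this text
        values: [{ text: 'Sure, which model are you interested in?' }]
      }
    ]
  },
  // "then": either wait for a reply (omit next_step) or jump to another node.
  next_step: {
    behavior: 'jump_to',
    dialog_node: 'choose_model_node'    // target node id (invented)
  }
};
```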
Now let's use a "Try it" button and see how it works. I will intentionally try to use complicated sentences that I didn't define in examples.
As you can see, it was able to understand everything correctly even though I phrased it in a complicated way.
I will show you how to do it in Node.js, as it's extremely easy. Just follow these steps:
{% c-block language="shell" %}
npm install express body-parser ibm-watson --save
{% c-block-end %}
{% c-block language="js" %}
const express = require('express');
const app = express();

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
{% c-block-end %}
{% c-block language="js" %}
const bodyParser = require('body-parser'); // parser for POST request bodies
const AssistantV2 = require('ibm-watson/assistant/v2'); // Watson SDK
const { IamAuthenticator } = require('ibm-watson/auth');

app.use(bodyParser.json());
{% c-block-end %}
{% c-block language="js" %}
const credentials = {
  API_KEY: 'your_api_key',
  URL: 'https://api.eu-gb.assistant.watson.cloud.ibm.com',
  VERSION: '2019-02-28',
  ASSISTANT_ID: 'your_assistant_id'
};
{% c-block-end %}
{% c-block language="js" %}
const authenticator = new IamAuthenticator({
  apikey: credentials.API_KEY
});
{% c-block-end %}
{% c-block language="js" %}
const assistant = new AssistantV2({
  version: credentials.VERSION,
  authenticator: authenticator,
  serviceUrl: credentials.URL
});
{% c-block-end %}
{% c-block language="js" %}
app.get('/session_id', function (req, res) {
  assistant.createSession({
    assistantId: credentials.ASSISTANT_ID
  })
    .then(function (response) {
      return res.send(response.result);
    })
    .catch(function (error) {
      return res.status(error.code || 500).send(error);
    });
});
{% c-block-end %}
{% c-block language="js" %}
app.post('/message', function (req, res) {
  if (!req.body.sessionId || !req.body.message) {
    return res.status(400).send('bad request');
  }
  var payload = {
    assistantId: credentials.ASSISTANT_ID,
    sessionId: req.body.sessionId,
    input: {
      message_type: 'text',
      text: req.body.message
    }
  };
  assistant.message(payload)
    .then(function (data) {
      return res.json(data.result);
    })
    .catch(function (err) {
      return res.status(err.code || 500).json(err);
    });
});
{% c-block-end %}
And that's it! Now you can connect to your chatbot engine from your application. First you need to get a session ID from the /session_id endpoint, and then use this session ID when sending a message to the /message endpoint.
Let’s see how it works in Postman. I’m sending a POST request to the /message endpoint with 'sessionId' and 'message' in the body, and the response is:
In the response, you get all the information you need: the intents and entities that Watson recognized (each with a confidence value) and the response that was defined in the matched dialog node. Now, as a developer, you can do whatever you want with this information: you can redirect the user, change content on your page, or simply display the answer.
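Since the response lists every recognized intent along with a confidence value, a small helper can pick the most likely one. The response fragment below mirrors the shape of a Watson v2 message result, but the intent names and scores are invented:

```javascript
// Pick the intent Watson is most confident about from a v2 message result.
function topIntent(result) {
  const intents = (result.output && result.output.intents) || [];
  if (intents.length === 0) return null; // Watson recognized nothing
  return intents.reduce((best, i) => (i.confidence > best.confidence ? i : best));
}

// Example result fragment (invented values for illustration).
const result = {
  output: {
    intents: [
      { intent: 'agree', confidence: 0.97 },
      { intent: 'disagree', confidence: 0.02 }
    ]
  }
};

console.log(topIntent(result).intent); // -> 'agree'
```

In practice you would also check the confidence against a threshold and fall back to a "sorry, I didn't get that" response when it is too low.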
For this example, I would wait for a positive or negative answer, give the user a relevant response, and redirect them to the right section of the terms and conditions page.
The examples I showed here are quite primitive, but their main purpose was to show how an intelligent assistant works. Once you create enough intents, entities, and dialog paths, this bot can serve as a replacement for almost any type of customer support, or just as a fun feature, and in case it doesn't know the answer, it can redirect you to a real human.
I hope I have shown you how easily it can be done, and that using some forms of artificial intelligence in your app doesn't require any specialized knowledge.
What do you think? Leave me a message in the contact form! 😃