© Abhishek Singh, Karthik Ramasubramanian, Shrey Shivam 2019
A. Singh et al., Building an Enterprise Chatbot, https://doi.org/10.1007/978-1-4842-5034-1_9

9. Deployment and a Continuous Improvement Framework

Abhishek Singh (1), Karthik Ramasubramanian (1), and Shrey Shivam (2)
(1) New Delhi, Delhi, India
(2) Donegal, Donegal, Ireland

In the previous chapters, we designed a basic chatbot framework from scratch and explored integration options with third-party services and other backend systems. We also explained how to expose the IRIS chatbot framework as a Spring Boot REST API.

In this chapter, we will discuss different ways in which IRIS can be deployed on a remote server. We will also discuss how to integrate IRIS with Alexa in less than 5 minutes. At the end of the chapter, we will discuss how IRIS can be extended to be part of a continuous improvement framework by implementing a self-learning module and bringing a human into the loop.

Deployment to the Cloud

The IRIS framework exposed via RESTful APIs can be deployed to a remote server in multiple ways. In this section, we will discuss three different ways.

As a Stand-Alone Spring Boot JAR on AWS EC2

This is the most basic installation and deployment of a Spring Boot JAR. We follow a few steps to get the JAR running on the EC2 machine on port 8080.

We log into the AWS account and select EC2 from the list of services shown in Figure 9-1. EC2 stands for Elastic Compute Cloud, AWS's resizable compute capacity service. More details on EC2 can be found at https://aws.amazon.com/ec2/.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig1_HTML.jpg
Figure 9-1

Different AWS services

We launch an EC2 instance from the EC2 dashboard as shown in Figure 9-2.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig2_HTML.jpg
Figure 9-2

Launching an EC2 instance wizard on AWS

Launching an EC2 instance requires seven steps:
  1. We choose an AMI (Amazon Machine Image). We use the Amazon Linux 2 AMI (HVM), SSD Volume Type, 64-bit x86.

  2. We choose the instance type. We select t2.micro (also free tier eligible if you are using this service of AWS for the first time). The t2.micro instance has one vCPU and 1 GB of memory, which is enough for the APIs to run.

  3. The next step requires configuring instance details. Here we can enable protection against accidental termination. This step is optional.

  4. We add storage details in the next step. By default, we get 8GB of SSD, and the volume is attached to the instance. However, we can add more volumes or increase the storage of the default volume if we want. This step is also optional, and 8GB of storage is enough for deploying the demo.

  5. We add tags to the instance and storage volume for better management of EC2 resources. This is also optional.

  6. This step, as shown in Figure 9-3, requires configuring a security group. A security group is a set of firewall rules that control the traffic for an instance.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig3_HTML.jpg
Figure 9-3

EC2 security group inbound rules configuration

We want to expose port 80 to be accessed from everywhere and port 22, which is a secure shell port, to be accessed only from our local machine.
  7. We review the configuration and launch the instance. Each EC2 instance requires a key-pair PEM file that we need in order to log into the instance securely. We will be asked to generate a new key pair, or we can use an existing one.

Now once the instance is launched, it will have a public DNS name or IPv4 Public IP that we can use to log in.
  1. The login command to log in from any Unix machine is
     ssh -i chatbot-iris.pem [email protected]
  2. Once we log in, we can copy our Spring Boot JAR from the local machine using the SCP command:
     scp -i chatbot-iris.pem /path/to/iris.jar [email protected]:/path/to/your/jarfile
  3. Once the JAR is copied, we can run it by issuing the command
     java -jar path/to/your/jarfile.jar
  4. By default, the server starts on port 8080. However, if we want to change the port, we can set server.port as a system property using a command-line option such as -Dserver.port=8090, or add an application.properties file in src/main/resources/ with server.port=8090.
If we used Maven to build our code, we could also run the application with
mvn spring-boot:run
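Note that a JAR started in the foreground stops when the SSH session ends. As a minimal sketch for keeping IRIS running in the background (assuming the JAR was copied as iris.jar; a systemd unit or the Docker approach in the next section is more robust for production), we can start it with nohup and optionally override the port on the command line:
nohup java -jar iris.jar --server.port=8090 > iris.log 2>&1 &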

As a Docker Container on AWS EC2

Docker performs operating-system-level virtualization and is used to run software packages called containers. Docker makes things easier from an operations perspective because it packages the code, libraries, and runtime components together as Docker images that can be deployed with ease. For more details on Docker, visit www.docker.com/.

We perform the following steps to run an application on Docker on EC2:
  1. We update the installed packages and package cache on the instance:
     sudo yum update -y
  2. We install the most recent Docker Community Edition package:
     sudo amazon-linux-extras install docker
  3. We start the Docker service:
     sudo service docker start
  4. We add the ec2-user to the Docker group so we can execute Docker commands without using sudo:
     sudo usermod -a -G docker ec2-user
  5. We log out and log back in again to pick up the new Docker group permissions. To do so, we close the current SSH terminal window and reconnect to the instance in a new one. The new SSH session will have the appropriate Docker group permissions.
  6. We verify that the ec2-user can run Docker commands without sudo.
  7. We create a Dockerfile in the root directory of the code base. A Dockerfile is a manifest that describes the base image to use for the Docker image and whatever is installed and running on it. This Dockerfile uses the openjdk:8-jdk-alpine image because we are building an image of a Java application. The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from the native host or other containers. The ARG instruction defines a variable that users can pass at build time to the builder. The JAR is added as app.jar, and an ENTRYPOINT allows us to configure a container that will run as an executable. It contains the command to run the JAR:
    FROM openjdk:8-jdk-alpine
    VOLUME /tmp
    ARG JAR_FILE
    ADD ${JAR_FILE} app.jar
    ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
     
  8. We build a Docker image by issuing the following command (note the trailing dot, which sets the build context to the current directory):
     docker build -t iris --build-arg JAR_FILE="JAR_NAME" .
    The following is the output from the build command executed on a machine:
    Sending build context to Docker daemon   21.9MB
    Step 1/5 : FROM openjdk:8-jdk-alpine
    8-jdk-alpine: Pulling from library/openjdk
    bdf0201b3a05: Pull complete
    9e12771959ad: Pull complete
    c4efe34cda6e: Pull complete
    Digest: sha256:2a52fedf1d4ab53323e16a032cadca89aac47024a8228dea7f862dbccf169e1e
    Status: Downloaded newer image for openjdk:8-jdk-alpine
     ---> 3675b9f543c5
    Step 2/5 : VOLUME /tmp
     ---> Running in dc2934059ab8
    Removing intermediate container dc2934059ab8
     ---> 0c3b61b6f027
    Step 3/5 : ARG JAR_FILE
     ---> Running in 36701bf0a68e
    Removing intermediate container 36701bf0a68e
     ---> da1c1f51c29d
    Step 4/5 : ADD ${JAR_FILE} app.jar
     ---> 0aacdba5baf0
    Step 5/5 : ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
     ---> Running in f40f7a276e18
    Removing intermediate container f40f7a276e18
     ---> 493abfce6e8c
    Successfully built 493abfce6e8c
    Successfully tagged iris:latest
     
  9. We run the newly created Docker image via the following command, mapping port 80 on the host to the container's default Spring Boot port 8080:
     docker run -t -i -p 80:8080 iris
     
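To confirm that the container started and that the Spring Boot application came up inside it, we can list the running containers and tail the container logs (a quick check; the container ID comes from the docker ps output):
docker ps
docker logs <container-id>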

As an ECS Service

In the previous two methods, you saw that you could deploy and run the Spring Boot JAR as a standalone service or by installing Docker and using it to run the API in the Docker container. This method discusses a service of AWS called ECS (Elastic Container Service). See Figure 9-4.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig4_HTML.jpg
Figure 9-4

Diagram of ECS objects and how they relate

Amazon ECS makes it easy to deploy, manage, and scale Docker containers running applications, services, and batch processes. Amazon ECS places containers across your cluster based on your resource needs and is integrated with familiar features like elastic load balancing, EC2 security groups, EBS volumes, and IAM roles. More details on ECS can be found at https://aws.amazon.com/ecs/ .

Several steps are required to run Docker images on ECS. Here's a walkthrough of the deployment process via the AWS management console:
  1. When discussing how to deploy a JAR as a Docker container on AWS EC2, we created a Docker image. We need to add this previously created Docker image to ECR. Amazon Elastic Container Registry (ECR) is a fully managed container registry that makes it easy for developers to store, manage, and deploy container images.
    ../images/478492_1_En_9_Chapter/478492_1_En_9_Fig5_HTML.jpg
    Figure 9-5

    Creating an ECR repository in AWS

     
Once the repository is created as shown in Figure 9-5, the next step is to tag the Docker image so we can push the image to this repository:
docker tag iris:latest aws_account_id.dkr.ecr.us-east-1.amazonaws.com/iris:latest
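The push will only succeed if the Docker client is authenticated against the ECR registry. Assuming the AWS CLI version 2 is installed and configured with suitable credentials, a typical login command looks like this:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.us-east-1.amazonaws.com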
Then we run the following command to push this image to the ECR repository:
docker push aws_account_id.dkr.ecr.us-east-1.amazonaws.com/iris:latest
More details on pushing Docker images to ECR can be found at https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html .
  2. Then we need to define the container definition. In the ECS service in the AWS management console, under Get Started, we can choose a container definition to use. We need to provide the ECR repository URL and the Docker image name and tag, as shown in Figure 9-6.
    ../images/478492_1_En_9_Chapter/478492_1_En_9_Fig6_HTML.jpg
    Figure 9-6

    Container definition configuration

     
  3. We define the task definition. A task definition is a blueprint for an application and describes one or more containers through attributes. Some attributes are configured at the task level, but the majority of attributes are configured per container. In Figure 9-7, we create a task definition for IRIS.
    ../images/478492_1_En_9_Chapter/478492_1_En_9_Fig7_HTML.jpg
    Figure 9-7

    Task definition

     
  4. We define a service. A service allows us to run and maintain a specified number (the desired count) of simultaneous instances of a task definition in an ECS cluster. See Figure 9-8.
    ../images/478492_1_En_9_Chapter/478492_1_En_9_Fig8_HTML.jpg
    Figure 9-8

    Service definition details

     
  5. We configure a cluster. The infrastructure in a Fargate cluster is fully managed by AWS. Our containers run without us managing and configuring individual Amazon EC2 instances. See Figure 9-9.
    ../images/478492_1_En_9_Chapter/478492_1_En_9_Fig9_HTML.jpg
    Figure 9-9

    Cluster configuration details

     
  6. Once we review and click Create, we should see the progress of the creation of the ECS resources. Once the cluster is set up and the task definitions are complete, the Spring Boot service should be up and running.

     
Figure 9-10 is an example of a basic ECS cluster creation using the task definition outlined above.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig10_HTML.jpg
Figure 9-10

The ECS Launch Status screen
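As an optional sanity check from the command line, we can confirm that the cluster and service exist and that the desired number of tasks is running. The cluster and service names below are placeholders for whatever names were chosen in the wizard, and the commands assume the AWS CLI is configured:
aws ecs list-clusters
aws ecs describe-services --cluster iris-cluster --services iris-service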

Smart IRIS Alexa Skill Creation in Less Than 5 Minutes

We will explore how IRIS can be integrated with Amazon Alexa in a few easy steps. To do so, the first step is to log into the Alexa Developer Console at https://developer.amazon.com/alexa and create a skill. Creating a skill requires providing a skill name and the default language. We will choose a custom model to add to the skill. See Figure 9-11.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig11_HTML.jpg
Figure 9-11

Creating the Alexa skill name

The custom model has a checklist of items to be provided for the skill to work:
  • Invocation name

  • Intents, sample, and slots

  • Building an interaction model

  • Setting up a web service endpoint

In our example use case of IRIS, since we have already defined the possible intents, intent slots, and dialogs modeled as a state machine, we aim to redirect the user's utterance on Alexa to the IRIS backend API so that it can process the utterance and respond.

We use Iris as the invocation name. This enables users to invoke the skill by saying, "Alexa, ask Iris ...". See Figure 9-12.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig12_HTML.jpg
Figure 9-12

Adding the skill invocation name

Next, we define a custom intent and a custom slot type so that all of the user’s utterances are matched to this intent and slot type. The aim is to redirect the utterance to IRIS and not do any intent classification-related processing on the Alexa layer.

We first create a custom slot type called IrisCustomSlotType; see Figure 9-13.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig13_HTML.jpg
Figure 9-13

A custom slot type

Now, we define a custom intent named IrisAllIntent. This intent has a slot called utteranceSlot. The most important element is the sample utterance {utteranceSlot}: we intend to fit the entire user utterance into this slot, as shown in Figure 9-14, so the complete utterance value is captured under the utteranceSlot slot. This will be used later when reading the user utterance as Alexa calls the IRIS HTTPS endpoint.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig14_HTML.jpg
Figure 9-14

The utteranceSlot in IrisAllIntent

The utteranceSlot is defined to be of slot type IrisCustomSlotType, as shown in Figure 9-15.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig15_HTML.jpg
Figure 9-15

Custom slot and its type created for IRIS integration

At this stage, we should have the intents, slots, and slot types created in the interaction model of our Alexa skill, as shown in Figure 9-16. In the figure, you can see that there are also other built-in intents, which are present for standard interactions with Alexa devices such as stop or cancel.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig16_HTML.jpg
Figure 9-16

Interaction model screen

Once we have defined all of the required attributes for our custom interaction model, we can build the model. Building the model requires us to click the Build Model button, which saves the skill manifest and builds the model. See Figure 9-17.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig17_HTML.jpg
Figure 9-17

The progress of the build process of the IRIS custom interaction model

The last thing to complete the setup is to provide an HTTPS endpoint for IRIS, which hosts all of the intent classification and response generation logic and receives POST requests from Alexa. See Figure 9-18.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig18_HTML.jpg
Figure 9-18

The skill’s service endpoint configuration

Let’s now test the setup by using the simulator available on the developer console, as shown in Figure 9-19. We ask Alexa to ask Iris for “weather in Dublin.” The response from Iris is as follows, which can be heard in the voice of Alexa.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig19_HTML.jpg
Figure 9-19

The simulator available on the Alexa developer console

The request details sent from Alexa to the IRIS API are
 {
      "version": "1.0",
      "session": {
             "new": true,
             "sessionId": "amzn1.echo-api.session.XXXXXXX-2b66-XXXX-XXX-XXXXXXXXX",
             "application": {
                    "applicationId": "amzn1.ask.skill.XXXXXX-e624-XXXXXX-XXX-XXXXXXX"
             },
             "user": {
                    "userId": "amzn1.ask.account.XXXXXXXX"
             }
      },
      "context": {
             "AudioPlayer": {
                    "playerActivity": "IDLE"
             },
             "System": {
                    "application": {
                          "applicationId": "amzn1.ask.skill.XXXXXXX-XXX-XXXXX-XXXX-XXXXXXXXX"
                    },
                    "user": {
                          "userId": "amzn1.ask.account.XXXXXX"
                    },
                    "device": {
                          "deviceId": "amzn1.ask.device.XXXXX",
                          "supportedInterfaces": {
                                 "AudioPlayer": {}
                          }
                    },
                    "apiEndpoint": "https://api.eu.amazonalexa.com",
                    "apiAccessToken": "XXXXXXXX"
             },
             "Viewport": {
                    "experiences": [
                          {
                                 "arcMinuteWidth": 246,
                                 "arcMinuteHeight": 144,
                                 "canRotate": false,
                                 "canResize": false
                          }
                    ],
                    "shape": "RECTANGLE",
                    "pixelWidth": 1024,
                    "pixelHeight": 600,
                    "dpi": 160,
                    "currentPixelWidth": 1024,
                    "currentPixelHeight": 600,
                    "touch": [
                          "SINGLE"
                    ]
             }
      },
      "request": {
             "type": "IntentRequest",
             "requestId": "amzn1.echo-api.request.XXXXXXXXXXX",
             "timestamp": "2019-04-21T11:22:56Z",
             "locale": "en-US",
             "intent": {
                    "name": "irisAllIntent",
                    "confirmationStatus": "NONE",
                    "slots": {
                          "utteranceSlot": {
                                 "name": "utteranceSlot",
                                 "value": "weather in Dublin",
                                 "resolutions": {
                                    "resolutionsPerAuthority": [
                                          {
                                                "authority": "amzn1.er-authority.echo-sdk.amzn1.ask.skill.XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXX.IrisCustomSlotType",
                                                "status": {
                                                      "code": "ER_SUCCESS_NO_MATCH"
                                                }
                                          }
                                    ]
                              },
                              "confirmationStatus": "NONE",
                                 "source": "USER"
                        }
                  }
            }
      }
}
Response from IRIS:
{
      "body": {
             "version": "string",
             "response": {
                    "outputSpeech": {
                          "type": "PlainText",
                          "text": "It seems to be clear sky at the moment in dublin. The temperature is 19.13 degrees. Humidity is close to 48. I wish I were human to feel it. Anyways, what else do you want to know from me? "
                    },
                    "type": "_DEFAULT_RESPONSE"
             },
             "sessionAttributes": {}
       }
}
In Chapter 6, we created a NodeJS application that provides an external API endpoint for Facebook Messenger. For integrating with Alexa, we need to add the Alexa endpoint as described below:
// New API endpoint 'alexa' that expects a POST request from the Alexa skill.
// 'app', 'fetch', and 'url' (the IRIS API URL) are the same objects already set up
// in the NodeJS application from Chapter 6.
app.post('/alexa', function(req, res) {
    // Read the user's utterance from the utteranceSlot value in the Alexa request JSON
    var text = req.body.request.intent.slots.utteranceSlot.value;
    // Use the Alexa userId as the conversation identifier sent to IRIS
    var session = req.body.session.user.userId;
    var timestamp = req.body.request.timestamp;
    /* GET request parameters for the IRIS backend service. The 'message' param contains
       the user utterance; seq and timestamp are fixed placeholder values here (the Alexa
       request timestamp captured above is available if the backend needs it). */
    var params = {
        message: text,
        sender: session,
        seq: 100,
        timestamp: 1524326401
    };
    var esc = encodeURIComponent;
    var query = Object.keys(params)
                    .map(k => esc(k) + '=' + esc(params[k]))
                    .join('&');
    // url is the IRIS API URL
    fetch(url + query).then(response => {
        response.json().then(json => {
            // Wrap the IRIS reply in the Alexa response format
            var alexaResp = {
                "version": "string",
                "sessionAttributes": {},
                "response": {
                    "outputSpeech": {
                        "type": "PlainText",
                        "text": json.message,
                        "ssml": "<speak>" + json.message + "</speak>"
                    }
                }
            };
            res.json(alexaResp);
        });
    })
    .catch(error => {
        // Fall back to a generic apology if the IRIS backend cannot be reached
        var alexaResp = {
            "version": "string",
            "sessionAttributes": {},
            "response": {
                "outputSpeech": {
                    "type": "PlainText",
                    "text": "Sorry, my team is having a bad day getting this information to you. Please try again in some time.",
                    "ssml": "<speak>Sorry, my team is having a bad day getting this information to you. Please try again in some time.</speak>"
                }
            }
        };
        res.json(alexaResp);
    });
});
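Before configuring the skill endpoint, we can sanity-check the new route locally by posting a trimmed-down request that contains only the fields the handler reads. The port 3000 below is just an assumption about where the NodeJS application from Chapter 6 listens; adjust it to your setup:
curl -X POST http://localhost:3000/alexa \
  -H 'Content-Type: application/json' \
  -d '{"session":{"user":{"userId":"test-user"}},"request":{"timestamp":"2019-04-21T11:22:56Z","intent":{"slots":{"utteranceSlot":{"value":"weather in Dublin"}}}}}'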

The details on hosting a custom skill as a web service are available at https://developer.amazon.com/docs/custom-skills/host-a-custom-skill-as-a-web-service.html .

Continuous Improvement Framework

In practical cases, it is quite possible that a user's utterances are not classified or understood by our intent engine, for reasons such as the utterance belonging to an intent outside the scope of the intent engine or the engine not being confident because of a low intent match score. In a production environment, a fair number of user utterances are typically either misunderstood or not understood by the engine at all. We propose a framework that can help IRIS become smarter and more intelligent in mimicking a natural human conversation.

In the self-learning module , we propose three improvement components, shown in Figure 9-20:
  • Intent confirmation (double-check)

  • Next intent prediction

  • A human in the loop

../images/478492_1_En_9_Chapter/478492_1_En_9_Fig20_HTML.jpg
Figure 9-20

IRIS functional components for continuous improvement

Intent Confirmation (Double-Check)

Let’s take an example of a user’s utterance of “life insurance,” which may match with one of the possible intent scenarios; see Figure 9-21.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig21_HTML.jpg
Figure 9-21

Intent matches and corresponding scores

When we match the user utterance against the list of possible intents shown in Figure 9-21, we get a list of intents and their respective match scores. The intent engine module of IRIS returns an intent match only when the match score is above 0.75. We call this the minimum threshold score, below which an intent match is not considered in the response. In the example of "life insurance," LIFE_INSURANCE_QUOTE_INTENT is returned in the response from the intent engine.

An optimization to this implementation could be to introduce a minimum match score that is below the threshold score but relevant enough for further processing. We previously stated that the minimum threshold score is the score below which an intent match is not returned in the response from the intent classification engine. A minimum match score is the score above which an intent is still considered for further processing even when it does not meet the minimum threshold score.

Let's look at another example, the user utterance "life insurance cost," for which the match scores are shown in Figure 9-22.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig22_HTML.jpg
Figure 9-22

Another example showing different intent matches and corresponding scores

In this example, the scores are below the minimum threshold score, so in the current implementation the user utterance will default to search since no explicit intent was returned by the intent classification engine. If we consider a minimum match score of 0.5, the intent LIFE_INSURANCE_QUOTE_INTENT could be considered for further confirmation.

These scores of 0.75 (minimum threshold score) and 0.5 (minimum match score) should be derived from training and test datasets, and could also change later based on actual user utterance data and the performance of the intent classification engine.

Hence, we could change IRIS to prompt for confirmation if the utterance was classified with a score between 0.5 and 0.75.

With the above implementation, a typical conversation can then flow in the following ways:
User: Hello
IRIS: Hello
User: life insurance cost
IRIS: I could not understand your query correctly.  Do you want a life insurance quote?
User: Yes
IRIS: Sure, I can help you with that. May I know your age?

Predict Next Intent

This functionality enables IRIS to use logs to understand and predict the possible next best intent of the user. Figure 9-23 shows user logs.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig23_HTML.jpg
Figure 9-23

User logs with intent and sequence

In a simple summarization of INTENT-A, shown in Figure 9-24, we find that INTENT-B is the most frequent next intent overall.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig24_HTML.jpg
Figure 9-24

Log summarization to predict next intent sequence

We can also use techniques of path prediction, association rules, and frequent itemsets to obtain the most predictable next user intent.
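As a simple illustration of the log summarization in Figure 9-24, the following one-liner is a rough sketch that assumes the logs have been exported to a hypothetical CSV file (intents.csv) with user, sequence, and intent columns; it counts which intent most often follows INTENT-A:
sort -t, -k1,1 -k2,2n intents.csv \
  | awk -F, '{ if (prev_user == $1 && prev_intent == "INTENT-A") count[$3]++; prev_user = $1; prev_intent = $3 } END { for (i in count) print count[i], i }' \
  | sort -rn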

A typical conversation with IRIS without the predicted intent may happen as follows:
User: Hi
IRIS: Hello
User: I want a life insurance quote
IRIS: Sure, I will help you with that. May I know your age?
User: My age is 24
IRIS: Have you smoked in the last 12 months?
User: no
IRIS: What's your height?
User: my height is 181 cms
IRIS: What's your weight?
User: 163 pounds
IRIS: Great news! You are eligible for an accelerated UW decision.
User: Thanks. I would like to buy life insurance policy.
IRIS: Sure. Can you provide me your email ID so that I start with the application process?
User: user@gmail.com
....
With predicted next intent capability, this conversation would happen like:
User: Hi
IRIS: Hello
User: I want a life insurance quote
IRIS: Sure, I will help you with that. May I know your age?
User: My age is 24
IRIS: Have you smoked in the last 12 months?
User: no
IRIS: What's your height?
User: my height is 181 cms
IRIS: What's your weight?
User: 163 pounds
IRIS: Great news! You are eligible for an accelerated UW decision. May I know your email ID so that I can start with the application process?
User: user@gmail.com
....

A Human in the Loop

The third improvement to the framework we introduced in this chapter was a human in the loop. Figure 9-20 shows various functional components for continuous improvement. Regardless of the techniques we use in making IRIS understand intent in a better way, there will always be some conversation that IRIS will not be able to understand. This is for the simple reason that IRIS does not have all the information of the universe and will always be designed to fulfill only a known set of functionalities.

We know that IRIS is designed to perform certain operations like calculating insurance eligibility, providing account balance, claim status, etc. Let’s assume that a certain percentage of users are asking IRIS for a change of address of their insurance policy. This is not supported by IRIS today, and it is challenging for machines to interpret this kind of new information.

Let’s assume some of the user utterances are as shown in Figure 9-25.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig25_HTML.jpg
Figure 9-25

Utterances and counts

With a human in the loop, these utterances can be analyzed further. In Figure 9-26, the first two utterances can be classified into one intent. Since this intent accounts for 66% of the total logs, IRIS can be enhanced to support ADDRESS_CHANGE, depending on the product decision. In some cases, an utterance can be mapped to an existing intent, as with the last utterance. This will help the intent classification engine classify intents better because more data becomes available.
../images/478492_1_En_9_Chapter/478492_1_En_9_Fig26_HTML.jpg
Figure 9-26

Utterances , intents, and counts

There will also be utterances about unrelated topics, such as users asking about cricket match scores, details on Brexit, or train timings. These are logs that do not need further processing and will be ignored by the subject matter experts enhancing the IRIS feedback loop.

Summary

In this concluding chapter, we discussed the various ways to deploy a chatbot into the cloud, we demonstrated a 5-minute introduction to integrating IRIS with Alexa, and we discussed the continuous improvement of IRIS through feedback loops via log files and humans in the loop.

In this book, we have kept a fine balance among three pillars: business context, theoretical foundations of how machines handle natural language, and real-world development of a chatbot from scratch. We believe these three pillars will help build a truly enterprise-grade chatbot with a well-defined ROI. Additionally, we also focused on ethical concerns in using personal data and how countries in the European Union have agreed upon the GDPR regulation to safeguard people's privacy.
