How it works...

We need to create an API that takes input from end users and generates the output. The end user uploads a CSV file with the inputs, and the API returns the prediction output to the user.
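
Before walking through the steps, here is a minimal sketch of what such an upload endpoint could look like in the Spring Boot project. The controller name, mapping, and response are illustrative assumptions, not the exact code from the book's source:

import java.io.File;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class PredictionController {

    // Accepts a CSV upload and returns the prediction output as plain text.
    @PostMapping("/predict")
    public String predict(@RequestParam("file") MultipartFile file) throws Exception {
        // In the real project, the uploaded bytes are handed to the
        // DataVec/DL4J pipeline described in the steps below.
        File csv = File.createTempFile("input", ".csv");
        file.transferTo(csv);
        return "Received " + file.getOriginalFilename();
    }
}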

In step 1, we added a schema for the input data. The user input must follow the same schema structure that was used to train the model, except that the Exited label is omitted, because predicting it is the task of the trained model. In step 2, we created a TransformProcess from the Schema defined in step 1.
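
The following is a minimal sketch of steps 1 and 2, assuming the column layout of the customer churn dataset used in the training recipe (treat the column names and categorical values as assumptions if your copy of the dataset differs):

import org.datavec.api.transform.TransformProcess;
import org.datavec.api.transform.schema.Schema;

// Step 1: the schema mirrors the training data, minus the Exited label.
Schema schema = new Schema.Builder()
        .addColumnString("RowNumber")
        .addColumnInteger("CustomerId")
        .addColumnString("Surname")
        .addColumnInteger("CreditScore")
        .addColumnCategorical("Geography", "France", "Germany", "Spain")
        .addColumnCategorical("Gender", "Male", "Female")
        .addColumnsInteger("Age", "Tenure")
        .addColumnDouble("Balance")
        .addColumnsInteger("NumOfProducts", "HasCrCard", "IsActiveMember")
        .addColumnDouble("EstimatedSalary")
        .build();

// Step 2: drop identifier columns and encode categorical features,
// mirroring the preprocessing applied at training time.
TransformProcess transformProcess = new TransformProcess.Builder(schema)
        .removeColumns("RowNumber", "CustomerId", "Surname")
        .categoricalToInteger("Gender")
        .categoricalToOneHot("Geography")
        .removeColumns("Geography[France]")
        .build();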

In step 3, we used the TransformProcess from step 2 to create a record reader instance, which loads the data from the dataset.
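
A sketch of step 3, where a CSV reader is wrapped with the transform process so that records are transformed as they are read (the file name is a placeholder for the uploaded input):

import java.io.File;
import org.datavec.api.records.reader.RecordReader;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.records.reader.impl.transform.TransformProcessRecordReader;
import org.datavec.api.split.FileSplit;

// Read the raw CSV (skipping the header row) and apply the transform
// process from step 2 on the fly.
RecordReader csvReader = new CSVRecordReader(1, ',');
csvReader.initialize(new FileSplit(new File("test.csv")));
RecordReader transformReader =
        new TransformProcessRecordReader(csvReader, transformProcess);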

We expect end users to upload batches of inputs to generate outcomes, so an iterator is created in step 5 to traverse the entire set of input records. We set the preprocessor for the iterator using the pretrained model from step 4. We also used a batchSize value of 1; if you have more input samples, you can specify a larger batch size.
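
A sketch of steps 4 and 5, assuming the normalizer was persisted alongside the model file (modelFilePath is the path discussed in step 6):

import java.io.File;
import org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.dataset.api.preprocessor.DataNormalization;

// Step 5: iterate over the transformed input records one at a time.
int batchSize = 1; // use a larger value for bigger input batches
DataSetIterator iterator =
        new RecordReaderDataSetIterator(transformReader, batchSize);

// Step 4: restore the normalization statistics saved with the trained
// model and apply them to every batch the iterator produces.
DataNormalization normalizer =
        ModelSerializer.restoreNormalizerFromFile(new File(modelFilePath));
iterator.setPreProcessor(normalizer);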

In step 6, we used a file path named modelFilePath to represent the model file location. We pass it as a command-line argument from the Spring application, so you can configure a custom path where the model file is persisted. After step 7, a shaded JAR with all the DL4J dependencies is created and saved in the local Maven repository. You can also find the JAR file in the project's target directory.
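
A sketch of step 6, restoring the network from the configurable path (here assumed to arrive as the first program argument):

import java.io.File;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.api.ndarray.INDArray;

// Assumption: the Spring application forwards the path as args[0].
String modelFilePath = args[0];
MultiLayerNetwork model =
        ModelSerializer.restoreMultiLayerNetwork(new File(modelFilePath));

// Generate predictions for the transformed, normalized input records.
INDArray output = model.output(iterator);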

The dependencies of the customer retention API are added to the pom.xml file of the Spring Boot project, as shown here:

<dependency>
    <groupId>com.javadeeplearningcookbook.app</groupId>
    <artifactId>cookbookapp</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

Once you have created a shaded JAR for the API by following step 7, the Spring Boot project will be able to fetch the dependency from your local Maven repository. Therefore, you need to build the API project before importing the Spring Boot project. Also, make sure to add the model file path as a VM argument, as mentioned in step 8.

In a nutshell, these are the steps required to run the use case:

  1. Import and build the Customer Churn API project: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/cookbookapp/.
  2. Run the main example to train the model and persist the model file: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/cookbookapp/src/main/java/com/javadeeplearningcookbook/examples/CustomerRetentionPredictionExample.java.
  3. Build the customer churn API project: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/cookbookapp/.
  4. Run the Spring Boot project by running the Starter here (with the earlier mentioned VM arguments): https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/spring-dl4j/src/main/java/com/springdl4j/springdl4j/SpringDl4jApplication.java.