How it works...

In Steps 1 to 7, we built a single neural network model to see how to use a labelled image dataset to train a model and predict the label of an unseen image.

In Step 1, we built a linear stack of layers with the Sequential model in Keras. We defined three layers: an input layer, a hidden layer, and an output layer. We provided input_shape=1024 to the input layer because our 32 x 32 images flatten to 1,024 features. We used the relu activation function in the first and second layers. Because ours is a multi-class classification problem, we used softmax as the activation function for the output layer.
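The stack described above can be sketched as follows. Only input_shape=(1024,), the relu activations, and the softmax output come from the recipe; the layer widths and the number of classes are illustrative assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

num_classes = 10  # assumed number of labels, not stated in the recipe

model = Sequential([
    # 32 x 32 images flattened to 1,024 features; 512 units is a guess
    Dense(512, activation='relu', input_shape=(1024,)),
    Dense(128, activation='relu'),                  # hidden layer (width assumed)
    Dense(num_classes, activation='softmax'),       # one probability per class
])
```

Note that input_shape is a tuple, (1024,), even though the prose writes it as a scalar.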

In Step 2, we compiled the model with loss='categorical_crossentropy' and optimizer='adam'. In Step 3, we fitted the model to our training data and validated it on our test data.
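A minimal sketch of the compile-and-fit steps, using small random arrays in place of the real dataset; the epoch, batch, and layer sizes here are placeholders, not the recipe's values:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Tiny stand-in data: 64 flattened 32 x 32 "images", 10 assumed classes
x_train = np.random.rand(64, 1024).astype('float32')
y_train = np.eye(10)[np.random.randint(0, 10, 64)]  # one-hot labels
x_test = np.random.rand(16, 1024).astype('float32')
y_test = np.eye(10)[np.random.randint(0, 10, 16)]

model = Sequential([
    Dense(32, activation='relu', input_shape=(1024,)),
    Dense(10, activation='softmax'),
])
# Step 2: compile with the loss and optimizer from the recipe
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
# Step 3: fit on training data, validate on test data
history = model.fit(x_train, y_train, epochs=2, batch_size=16,
                    validation_data=(x_test, y_test), verbose=0)
```

The History object returned by fit() records per-epoch loss and accuracy for both splits.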

In Steps 4 and 5, we plotted the model's accuracy and loss for every epoch.
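Plotting those per-epoch curves might look like this; the dictionary below is a made-up stand-in for the history.history attribute returned by model.fit():

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

# Stand-in for history.history; the numbers are illustrative only
history = {
    'accuracy':     [0.42, 0.55, 0.61, 0.66],
    'val_accuracy': [0.40, 0.52, 0.57, 0.58],
    'loss':         [1.9, 1.4, 1.1, 0.9],
    'val_loss':     [2.0, 1.5, 1.3, 1.2],
}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history['accuracy'], label='train')
ax1.plot(history['val_accuracy'], label='validation')
ax1.set_title('Model accuracy'); ax1.set_xlabel('epoch'); ax1.legend()

ax2.plot(history['loss'], label='train')
ax2.plot(history['val_loss'], label='validation')
ax2.set_title('Model loss'); ax2.set_xlabel('epoch'); ax2.legend()
fig.savefig('metrics.png')
```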

In Steps 6 and 7, we reused the plot_confusion_matrix() function from the scikit-learn website to display our confusion matrix both numerically and visually.
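The numeric half of that step reduces to scikit-learn's confusion_matrix(); the labels below are made up for illustration:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical true and predicted class indices for six observations
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

# Row i, column j counts observations of true class i predicted as class j
cm = confusion_matrix(y_true, y_pred)
print(cm)
```

plot_confusion_matrix() then renders this matrix as a colored grid with the counts overlaid.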

From Step 8 onward, we ensembled multiple models. We wrote three custom functions:

  • train_models(): To build, compile, and train our models using sequential layers.
  • ensemble_predictions(): To ensemble the predictions of all the models and take the class with the maximum value across classes for each observation.
  • evaluate_models(): To calculate the accuracy score for every model.
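The two evaluation helpers above can be sketched like this. To stay self-contained, the "models" here are stubs with a predict() method rather than real Keras networks, and the signatures are assumptions; train_models() would build, compile, and fit a model as in the earlier steps:

```python
import numpy as np

def ensemble_predictions(models, x):
    """Sum each model's class probabilities, then take the class
    with the maximum value for every observation."""
    summed = np.sum([m.predict(x) for m in models], axis=0)
    return np.argmax(summed, axis=1)

def evaluate_models(models, x, y_true):
    """Accuracy score of every individual model."""
    scores = []
    for m in models:
        preds = np.argmax(m.predict(x), axis=1)
        scores.append(np.mean(preds == y_true))
    return scores

# Stub "model": always returns a fixed probability matrix
class StubModel:
    def __init__(self, probs):
        self.probs = np.asarray(probs)
    def predict(self, x):
        return self.probs

m1 = StubModel([[0.6, 0.4], [0.2, 0.8]])
m2 = StubModel([[0.3, 0.7], [0.1, 0.9]])
x = np.zeros((2, 1024))
print(ensemble_predictions([m1, m2], x))  # summed probs [[0.9, 1.1], [0.3, 1.7]] -> [1, 1]
```

Summing probabilities before the argmax is what lets the ensemble outvote any single model's mistake.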

In Step 11, we fitted all the models. We set the no_of_models variable to 50 and trained the models in a loop, passing x_train and y_train to the train_models() function at each iteration. We also called evaluate_models(), which returned the accuracy score of each model built, and appended the scores to a list.
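The shape of that loop is sketched below. The bodies of train_models() and evaluate_models() are hypothetical stand-ins (a dummy object and a random score) so the structure is runnable without Keras; only the loop itself and no_of_models = 50 come from the recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_models(x_train, y_train):
    # Stand-in for building, compiling, and fitting a Keras model
    return object()

def evaluate_models(model, x_test, y_test):
    # Stand-in: a random score instead of a real accuracy computation
    return rng.uniform(0.5, 0.9)

no_of_models = 50
x_train = y_train = x_test = y_test = None  # placeholders for real data

accuracy_scores = []
for _ in range(no_of_models):
    model = train_models(x_train, y_train)
    accuracy_scores.append(evaluate_models(model, x_test, y_test))

print(len(accuracy_scores))  # 50
```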

In Step 12, we plotted the accuracy scores for all the models.
