Image modification

Looking at the 64 images at the beginning of this chapter reveals some clues as to what is going on. The images of sandals, sneakers, and ankle boots follow a specific pattern: in every picture of these fashion items, the toe points to the left. In the images of these three footwear items that we downloaded from the internet, however, the toe points to the right. To address this, let's modify the images of the 20 fashion items with the flop function, which mirrors each image horizontally so that the toes point to the left, and then reassess the classification performance of the model:

# Images with prediction probabilities, predicted class, and actual class
library(EBImage)   # readImage, flop, channel, resize
library(keras)     # predict_classes, predict_proba
setwd("~/Desktop/image20")
temp <- list.files(pattern = "*.jpg")
mypic <- list()
for (i in 1:length(temp)) {mypic[[i]] <- readImage(temp[[i]])}
for (i in 1:length(temp)) {mypic[[i]] <- flop(mypic[[i]])}
for (i in 1:length(temp)) {mypic[[i]] <- channel(mypic[[i]], "gray")}
for (i in 1:length(temp)) {mypic[[i]] <- 1 - mypic[[i]]}
for (i in 1:length(temp)) {mypic[[i]] <- resize(mypic[[i]], 28, 28)}
# Rebuild newx from the flipped images so the predictions below use the new orientation;
# flatten each 28 x 28 image into a row of 784 pixel values (keep this consistent with
# how the earlier newx was built)
newx <- do.call(rbind, lapply(mypic, function(x) as.numeric(imageData(x))))
predictions <- predict_classes(model, newx)
probabilities <- predict_proba(model, newx)
probs <- round(probabilities, 2)
par(mfrow = c(5, 4), mar = rep(0, 4))
for (i in 1:length(temp)) {plot(mypic[[i]])
  legend("topleft", legend = max(probs[i, ]),
         bty = "n", text.col = "black", cex = 1.2)
  legend("top", legend = predictions[i],
         bty = "n", text.col = "darkred", cex = 1.2)
  legend("topright", legend = newy[i],
         bty = "n", text.col = "darkgreen", cex = 1.2)}

The following screenshot shows the prediction probabilities, predicted class, and actual class after applying the flop function (model one):

As observed from the preceding plot, after changing the orientation of the images of the fashion items, the model now classifies the sandals, sneakers, and ankle boots correctly. With 16 correct classifications out of 20, the accuracy improves to 80%, compared to the 50% we obtained earlier. Note that this improvement comes from the same model: all we did was observe how the original data had been collected and then keep the new image data consistent with it. Next, let's work on modifying the deep network architecture and see whether we can improve the results further.

Before a prediction model is used to generalize to new data, it is a good idea to review how the original data was collected and then keep the format of the new data consistent with it.

We encourage you to experiment further and see what happens if a certain percentage of the images in the Fashion-MNIST training data are replaced with their mirror images. Can this help the model generalize even better, without any need to modify the new data?
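
One way to set up such an experiment is to mirror a randomly chosen fraction of the Fashion-MNIST training images before fitting the model. The following is a minimal sketch, not taken from this chapter, that assumes the training images are loaded with dataset_fashion_mnist() and that 50% of them (a value you can vary) are mirrored by reversing the pixel columns of each 28 x 28 image:

# Randomly mirror a chosen fraction of the Fashion-MNIST training images
library(keras)
fashion <- dataset_fashion_mnist()
trainx <- fashion$train$x                  # 60000 x 28 x 28 array of pixel values
pct <- 0.5                                 # fraction of images to mirror (adjust to experiment)
set.seed(123)
idx <- sample(dim(trainx)[1], size = round(pct * dim(trainx)[1]))
for (i in idx) {trainx[i, , ] <- trainx[i, , 28:1]}   # reverse columns = left-right mirror

The mirrored trainx can then be preprocessed and used to retrain the model in the same way as before; comparing the resulting accuracy on the unmodified internet images indicates whether this kind of augmentation helps the model generalize.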
