This section provides the steps for using a pretrained model:
- Load TensorFlow in R:
require(tensorflow)
- Assign the slim library from TensorFlow (note that the later scripts reference this object as slim, so it must be assigned under that name):
slim = tf$contrib$slim
The slim library in TensorFlow simplifies the definition, training, and evaluation of complex neural network models.
- Reset graph in TensorFlow:
tf$reset_default_graph()
- Define input images:
# Resize the input images
input.img = tf$placeholder(tf$float32, shape(NULL, NULL, NULL, 3))
scaled.img = tf$image$resize_images(input.img, shape(224, 224))
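To build intuition for what the resize step does, the following base-R sketch implements a minimal nearest-neighbour resize on a plain array. It is illustrative only: tf$image$resize_images operates on tensors inside the graph and uses bilinear interpolation by default, and the function and variable names here (resize.nn, small) are invented for this example.

```r
# Illustrative only: a minimal nearest-neighbour resize in base R,
# approximating the effect of tf$image$resize_images on an H x W x C array.
resize.nn <- function(img, out.h, out.w) {
  in.h <- dim(img)[1]
  in.w <- dim(img)[2]
  # Map each output row/column to its nearest source row/column
  rows <- ceiling(seq_len(out.h) * in.h / out.h)
  cols <- ceiling(seq_len(out.w) * in.w / out.w)
  img[rows, cols, , drop = FALSE]
}

small <- array(runif(8 * 8 * 3), dim = c(8, 8, 3))
resized <- resize.nn(small, 4, 4)
dim(resized)  # 4 4 3
```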
- Redefine the VGG16 network:
# Define the VGG16 network
library(magrittr)
VGG16.model <- function(slim, input.image) {
  vgg16.network = slim$conv2d(input.image, 64, shape(3, 3), scope='vgg_16/conv1/conv1_1') %>%
    slim$conv2d(64, shape(3, 3), scope='vgg_16/conv1/conv1_2') %>%
    slim$max_pool2d(shape(2, 2), scope='vgg_16/pool1') %>%
    slim$conv2d(128, shape(3, 3), scope='vgg_16/conv2/conv2_1') %>%
    slim$conv2d(128, shape(3, 3), scope='vgg_16/conv2/conv2_2') %>%
    slim$max_pool2d(shape(2, 2), scope='vgg_16/pool2') %>%
    slim$conv2d(256, shape(3, 3), scope='vgg_16/conv3/conv3_1') %>%
    slim$conv2d(256, shape(3, 3), scope='vgg_16/conv3/conv3_2') %>%
    slim$conv2d(256, shape(3, 3), scope='vgg_16/conv3/conv3_3') %>%
    slim$max_pool2d(shape(2, 2), scope='vgg_16/pool3') %>%
    slim$conv2d(512, shape(3, 3), scope='vgg_16/conv4/conv4_1') %>%
    slim$conv2d(512, shape(3, 3), scope='vgg_16/conv4/conv4_2') %>%
    slim$conv2d(512, shape(3, 3), scope='vgg_16/conv4/conv4_3') %>%
    slim$max_pool2d(shape(2, 2), scope='vgg_16/pool4') %>%
    slim$conv2d(512, shape(3, 3), scope='vgg_16/conv5/conv5_1') %>%
    slim$conv2d(512, shape(3, 3), scope='vgg_16/conv5/conv5_2') %>%
    slim$conv2d(512, shape(3, 3), scope='vgg_16/conv5/conv5_3') %>%
    slim$max_pool2d(shape(2, 2), scope='vgg_16/pool5') %>%
    slim$conv2d(4096, shape(7, 7), padding='VALID', scope='vgg_16/fc6') %>%
    slim$conv2d(4096, shape(1, 1), scope='vgg_16/fc7') %>%
    slim$conv2d(1000, shape(1, 1), scope='vgg_16/fc8') %>%
    tf$squeeze(shape(1, 2), name='vgg_16/fc8/squeezed')
  return(vgg16.network)
}
- The preceding function defines the VGG16 network architecture. The network can be instantiated using the following script:
vgg16.network<-VGG16.model(slim, input.image = scaled.img)
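As a sanity check on the architecture defined above, the spatial dimensions can be traced by hand: each of the five 2x2 max-pooling layers halves the height and width of the 224x224 input, leaving a 7x7 feature map, which is why the fc6 layer is implemented as a 7x7 convolution with VALID padding. This arithmetic can be verified in base R:

```r
# Trace the spatial size of a 224x224 input through VGG16's five
# 2x2 max-pooling layers (each one halves the height and width).
spatial.size <- 224
for (pool in 1:5) {
  spatial.size <- spatial.size / 2
}
print(spatial.size)  # 7: matches the 7x7 kernel used by fc6
```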
- Load the VGG16 weights from vgg_16_2016_08_28.tar.gz, downloaded in the Getting started section. The archive must be extracted first to obtain the vgg_16.ckpt checkpoint file:
# Restore the weights
restorer = tf$train$Saver()
sess = tf$Session()
restorer$restore(sess, 'vgg_16.ckpt')
- Download a sample test image. Let's download an example image from a testImgURL location, as shown in the following script. Note that the readJPEG function requires the jpeg package:
# Evaluate using the VGG16 network
library(jpeg)
testImgURL <- "http://farm4.static.flickr.com/3155/2591264041_273abea408.jpg"
img.test <- tempfile()
download.file(testImgURL, img.test, mode="wb")
read.image <- readJPEG(img.test)
# Clean up the temp file
file.remove(img.test)
The preceding script downloads the following image from the URL stored in the testImgURL variable:
Sample image used to evaluate the ImageNet-trained model
- Determine the class using the VGG16 pretrained model:
## Evaluate
size = dim(read.image)
imgs = array(255 * read.image, dim = c(1, size[1], size[2], size[3]))
VGG16_eval = sess$run(vgg16.network, dict(input.img = imgs))
probs = exp(VGG16_eval) / sum(exp(VGG16_eval))
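The last line converts the network's raw logits to class probabilities with a softmax. When logits are large, exp() can overflow to Inf; a numerically stable variant (subtracting the maximum logit first, a standard trick not shown in the recipe) can be sketched in base R as:

```r
# Numerically stable softmax: subtracting max(logits) leaves the result
# mathematically unchanged but keeps exp() from overflowing.
softmax <- function(logits) {
  shifted <- logits - max(logits)
  exp(shifted) / sum(exp(shifted))
}

# With logits this large, the naive exp(logits)/sum(exp(logits))
# would evaluate to Inf/Inf = NaN; the stable form still works.
probs.stable <- softmax(c(1000, 1001, 1002))
sum(probs.stable)  # 1
```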
The maximum probability achieved is 0.62 for class 672, which corresponds to the category "mountain bike, all-terrain bike, off-roader" in the dataset VGG16 was trained on.