How to do it...

  1. The standard TensorFlow autoencoder code can easily be extended into a sparse autoencoder by updating the cost function. This recipe instead uses the autoencoder package in R, which ships with built-in support for running a sparse autoencoder:
### Setting up parameters
library(autoencoder)  # provides the autoencode() function

nl <- 3
N.hidden <- 100
unit.type <- "logistic"
lambda <- 0.001
rho <- 0.01
beta <- 6
max.iterations <- 2000
epsilon <- 0.001

### Running the sparse autoencoder
spe_ae_obj <- autoencode(X.train = trainData, X.test = validData,
                         nl = nl, N.hidden = N.hidden,
                         unit.type = unit.type, lambda = lambda,
                         beta = beta, epsilon = epsilon, rho = rho,
                         max.iterations = max.iterations,
                         rescale.flag = TRUE)
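Once the call returns, the fitted object can be inspected and used to extract the sparse hidden representation. The field and argument names below follow the autoencoder package's documented interface; verify them against your installed version:

```r
# Mean reconstruction error reported by autoencode() on the
# training and validation sets
spe_ae_obj$mean.error.training.set
spe_ae_obj$mean.error.test.set

# Extract the sparse hidden-layer representation of the training data;
# hidden.output = TRUE returns hidden-unit activations instead of
# reconstructions of the input
features <- predict(spe_ae_obj, X.input = trainData,
                    hidden.output = TRUE)$X.output
```

The extracted features matrix has one row per observation and N.hidden columns, and can be fed into a downstream classifier or clustering step.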

The major parameters of the autoencode function are as follows:

  • nl: This is the number of layers including the input and output layer (the default is three).
  • N.hidden: This is the vector with the number of neurons in each hidden layer.
  • unit.type: This is the type of activation function to be used.
  • lambda: This is the regularization parameter.
  • rho: This is the sparsity parameter.
  • beta: This is the penalty for the sparsity term.
  • max.iterations: This is the maximum number of iterations.
  • epsilon: This is the parameter for weight initialization. The weights are initialized using a Gaussian distribution ~N(0, epsilon²).
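To make the roles of rho and beta concrete, the following sketch (plain R, not part of the autoencoder package) computes the Kullback-Leibler sparsity penalty that these two parameters control, where rho is the target average activation and rho_hat is the observed average activation of each hidden neuron over the training set:

```r
# KL-divergence sparsity penalty used by sparse autoencoders:
# beta * sum_j KL(rho || rho_hat_j)
kl_sparsity_penalty <- function(rho, rho_hat, beta) {
  kl <- rho * log(rho / rho_hat) +
        (1 - rho) * log((1 - rho) / (1 - rho_hat))
  beta * sum(kl)
}

# Example: with rho = 0.01, hidden neurons whose average activation
# drifts above the target contribute an increasingly large penalty
kl_sparsity_penalty(rho = 0.01, rho_hat = c(0.01, 0.05, 0.20), beta = 6)
```

A neuron whose average activation equals rho contributes zero penalty; raising beta strengthens the pressure toward sparse activations.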