Example 1 – a simple example

Let's take a look at our first example: a minimal one in which we define a two-layer neural network and train it on a single data point. We are intentionally making the example verbose so that we can walk through each step together:

var net = new Net<double>();

The InputLayer declares the size of the input. As shown in the following code, we use two-dimensional data. Three-dimensional volumes (width, height, and depth) are required, but if we're not dealing with images, we can leave the first two dimensions (width and height) at a size of 1, as we have done here:

net.AddLayer(new InputLayer(1, 1, 2));
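If we were working with images, the first two dimensions would carry the spatial size. For example, a 28 x 28 grayscale image (an illustrative shape only, not part of this example) would be declared as follows:

net.AddLayer(new InputLayer(28, 28, 1));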

Declare a fully-connected layer comprising 20 neurons, as follows:

net.AddLayer(new FullyConnLayer(20));
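Conceptually, each of the 20 neurons in a fully-connected layer computes a weighted sum of every input plus a bias. The following plain C# sketch illustrates the idea (the FullyConnected helper and its parameters are hypothetical, not the library's actual code):

// Conceptual sketch only: each output neuron j sums weights[i, j] * inputs[i]
// over all inputs i, then adds its bias.
double[] FullyConnected(double[] inputs, double[,] weights, double[] biases)
{
    var outputs = new double[biases.Length];
    for (var j = 0; j < outputs.Length; j++)
    {
        var sum = biases[j];
        for (var i = 0; i < inputs.Length; i++)
        {
            sum += weights[i, j] * inputs[i];
        }
        outputs[j] = sum;
    }
    return outputs;
}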

Next, we need to declare a Rectified Linear Unit (ReLU) non-linearity layer, as follows:

net.AddLayer(new ReluLayer());
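ReLU replaces negative values with zero and passes positive values through unchanged. Conceptually (this one-liner is an illustration, not the library's code):

// ReLU: max(0, x), applied element-wise.
double Relu(double x) => Math.Max(0.0, x);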

Then, declare a second fully-connected layer with 10 neurons, whose outputs will feed the SoftmaxLayer:

net.AddLayer(new FullyConnLayer(10));

Declare the linear classifier on top of the previous hidden layer, as follows:

net.AddLayer(new SoftmaxLayer(10));
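Conceptually, the softmax layer turns the 10 raw scores from the previous layer into probabilities that are positive and sum to 1. The following plain C# sketch shows the idea (the Softmax helper is hypothetical, not the library's implementation):

// Conceptual softmax: exponentiate each score, then normalize so the
// results sum to 1.
double[] Softmax(double[] scores)
{
    var result = new double[scores.Length];
    var sum = 0.0;
    for (var i = 0; i < scores.Length; i++)
    {
        result[i] = Math.Exp(scores[i]);
        sum += result[i];
    }
    for (var i = 0; i < scores.Length; i++)
    {
        result[i] /= sum;
    }
    return result;
}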

Next, we create a random data point and forward it through the network, as follows:

var x = BuilderInstance.Volume.From(new[] { 0.3, -0.5 }, new Shape(2));
var prob = net.Forward(x);

prob is a volume. A volume stores both its raw data (the weights) and the gradients associated with those weights (the weight gradients). The following line prints the probability that x belongs to class 0, which is approximately 0.50101:

Console.WriteLine("probability that x is class 0: " + prob.Get(0));

Next, we need to train the network, specifying that x belongs to class zero, using a stochastic gradient descent (SGD) trainer, as follows:

var trainer = new SgdTrainer(net)
{
    LearningRate = 0.01,
    L2Decay = 0.001
};
trainer.Train(x, BuilderInstance.Volume.From(new[] { 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 }, new Shape(1, 1, 10, 1)));
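For intuition, a single SGD step with L2 decay updates each weight roughly as in the following sketch (a conceptual illustration with a hypothetical SgdStep helper, not the trainer's actual code). L2Decay shrinks each weight slightly on every step, discouraging large weights:

// Conceptual SGD update with L2 decay (illustration only):
// new weight = weight - learningRate * (gradient + l2Decay * weight)
double SgdStep(double weight, double gradient, double learningRate, double l2Decay)
{
    return weight - learningRate * (gradient + l2Decay * weight);
}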
Now, forward the same data point through the network again to see how its prediction has changed:

var prob2 = net.Forward(x);
Console.WriteLine("probability that x is class 0: " + prob2.Get(0));

The output should now be approximately 0.50374, slightly higher than the previous value of 0.50101. This is because the trainer has adjusted the network's weights to assign a higher probability to the class we trained on (class zero).
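To make the effect more pronounced, we could simply repeat the training step. The following sketch reuses the trainer, x, and the one-hot label from above; the exact probabilities will vary with the random weight initialization:

// Repeat the SGD step and watch the probability of class 0 climb toward 1.0.
var y = BuilderInstance.Volume.From(new[] { 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 }, new Shape(1, 1, 10, 1));
for (var i = 0; i < 100; i++)
{
    trainer.Train(x, y);
}
Console.WriteLine("probability that x is class 0: " + net.Forward(x).Get(0));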
