Summary

In this chapter, we saw an overview of ANNs. Implementing a neural network is simple, but its internals are fairly complex. A neural network can be summarized as a universal function approximator: any mapping from inputs to outputs can be modeled as a black-box mathematical function by a neural network, which has led to an enormous range of applications in recent years.

We saw the following in this chapter:

  • A neural network is a data-driven machine learning technique
  • AI, machine learning, and neural networks are different paradigms for making machines behave like humans
  • Neural networks can be used for both supervised and unsupervised machine learning
  • Weights, biases, and activation functions are important concepts in neural networks
  • Neural networks are nonlinear and non-parametric
  • Neural networks are fast at prediction and are often among the most accurate machine learning models
  • There are input, hidden, and output layers in any neural network architecture
  • Neural networks are built as multilayer perceptrons (MLPs), and we understood their building blocks: weights, biases, activation functions, feed-forward processing, and backpropagation
  • Feed-forward and backpropagation are the techniques used to train a neural network model (a minimal sketch follows this list)
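
To make these building blocks concrete, here is a minimal sketch, not the book's implementation, of a single-hidden-layer network trained on the XOR problem with plain R matrix operations; the data, layer sizes, learning rate, and iteration count are illustrative choices only:

```r
# Illustrative sketch: weights, biases, sigmoid activation, feed-forward,
# and backpropagation for a 2-3-1 network on the XOR problem.
sigmoid <- function(z) 1 / (1 + exp(-z))

set.seed(1)
X <- matrix(c(0,0, 0,1, 1,0, 1,1), ncol = 2, byrow = TRUE)  # inputs
y <- matrix(c(0, 1, 1, 0), ncol = 1)                        # XOR targets

W1 <- matrix(rnorm(2 * 3), 2, 3); b1 <- rep(0, 3)           # input -> hidden
W2 <- matrix(rnorm(3 * 1), 3, 1); b2 <- 0                   # hidden -> output
lr <- 1                                                      # learning rate

for (i in 1:20000) {
  # Feed-forward pass: hidden activations, then network output
  H   <- sigmoid(sweep(X %*% W1, 2, b1, "+"))
  out <- sigmoid(H %*% W2 + b2)

  # Backpropagation: squared-error gradient at the output and hidden layers
  d_out <- (out - y) * out * (1 - out)
  d_H   <- (d_out %*% t(W2)) * H * (1 - H)

  # Gradient-descent updates of weights and biases
  W2 <- W2 - lr * t(H) %*% d_out;  b2 <- b2 - lr * sum(d_out)
  W1 <- W1 - lr * t(X) %*% d_H;    b1 <- b1 - lr * colSums(d_H)
}
round(out, 2)  # outputs should move toward the XOR targets 0, 1, 1, 0
```

The feed-forward pass computes the hidden and output activations, and the backpropagation step propagates the output error backward to update the weights and biases by gradient descent.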

Neural networks can be implemented in many programming languages, including Python, R, MATLAB, C, and Java. The focus of this book is building applications using R. DNNs and AI systems are evolving on the foundation of neural networks. In the forthcoming chapter, we will walk through different types of neural networks and their various applications.
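
As a taste of the R-based approach used later in the book, the following is a hedged sketch with the CRAN neuralnet package; the data frame, column names, and hidden-layer size are assumptions made for illustration, not code from this chapter:

```r
# Illustrative only: fit a small feed-forward network with the 'neuralnet' package.
library(neuralnet)

set.seed(1)
df <- data.frame(x1 = runif(200), x2 = runif(200))
df$y <- as.numeric(df$x1 + df$x2 > 1)          # toy binary target

nn <- neuralnet(y ~ x1 + x2, data = df,
                hidden = 3,                    # one hidden layer with 3 neurons
                linear.output = FALSE)         # sigmoid output for classification

# Predictions on the training inputs
head(compute(nn, df[, c("x1", "x2")])$net.result)
```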
