Handling Big Data Using Large-Scale Deep Learning

Training a neural network is a computationally intensive process that can take a significant amount of time. As datasets grow and networks get deeper, training deep learning models becomes more complex and demands more computing power and memory. To train models efficiently, we can use a modern system with GPU capabilities. Deep learning libraries in R support training models on multiple GPUs to accelerate the training process. We can also use cloud computing to build deep learning models: cloud infrastructure scales efficiently, allowing users to prototype models faster, at lower cost, and with optimized performance. The pay-per-use model offered by most cloud-based solutions also makes it easy to scale up or down quickly. This chapter will help you gain an understanding of how to create a scalable deep learning environment on various cloud platforms. You will also learn how to use MXNet to build different neural networks and accelerate the training of deep learning models.

In this chapter, we will cover the following recipes:

  • Deep learning on Amazon Web Services
  • Deep learning on Microsoft Azure
  • Deep learning on Google Cloud Platform
  • Accelerating with MXNet
  • Implementing a deep neural network using MXNet
  • Forecasting with MXNet
