Chapter 9. Mass Text Data Processing

In this chapter, we will cover:

  • Data preprocessing (extract, clean, and format conversion) using Hadoop Streaming and Python
  • Data de-duplication using Hadoop Streaming
  • Loading large datasets to an Apache HBase data store using importtsv and bulkload tools
  • Creating TF and TF-IDF vectors for the text data
  • Clustering the text data
  • Topic discovery using Latent Dirichlet Allocation (LDA)
  • Document classification using Mahout Naive Bayes classifier

Introduction

Hadoop MapReduce, together with its supporting ecosystem of projects, is a good framework choice for processing large text datasets and for performing ETL-type operations.

In this chapter, we'll explore how to use Hadoop Streaming to perform data preprocessing operations such as data extraction, format conversion, and de-duplication. We'll also use Apache HBase as the data store and look at mechanisms for loading large datasets into HBase with minimal overhead. Towards the end of the chapter, we'll look at performing text analytics operations using the Apache Mahout algorithms.
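As a brief taste of the Hadoop Streaming approach used in the first few recipes, the following is a minimal de-duplication sketch in Python. The script names, HDFS paths, and the location of the streaming JAR are illustrative assumptions (the JAR path in particular varies between Hadoop versions); the recipes in this chapter develop these steps in full.

#!/usr/bin/env python
# dedup_mapper.py (name is an assumption): emit each non-empty line
# unchanged; the shuffle and sort phase then groups identical lines
# together before they reach the reducer.
import sys

for line in sys.stdin:
    line = line.strip()
    if line:
        print(line)

#!/usr/bin/env python
# dedup_reducer.py (name is an assumption): reducer input arrives
# sorted, so duplicate lines are adjacent; emit each distinct line once.
import sys

previous = None
for line in sys.stdin:
    line = line.strip()
    if line != previous:
        print(line)
        previous = line

A typical invocation would resemble the following (paths are assumptions):

hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -input /data/text-corpus -output /data/text-corpus-deduped \
    -mapper dedup_mapper.py -reducer dedup_reducer.py \
    -file dedup_mapper.py -file dedup_reducer.py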
