In this chapter, we will cover:
- The importtsv and bulkload tools

Hadoop MapReduce, together with its supporting set of projects, is a good framework choice for processing large text datasets and for performing ETL-type operations.
In this chapter, we'll explore how to use Hadoop Streaming to perform data preprocessing operations such as data extraction, format conversion, and de-duplication. We'll also use HBase as the data store and explore mechanisms for performing large data loads into HBase with minimal overhead. Towards the end of the chapter, we'll look at performing text analytics using the Apache Mahout algorithms.
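As a taste of what is to come, de-duplication with Hadoop Streaming typically works by emitting each record as a map output key, letting the framework's shuffle/sort group identical records together, and having the reducer write each distinct key exactly once. The following is a minimal sketch of such a script (the file name `dedup.py` and the map/reduce dispatch via a command-line argument are illustrative choices, not anything mandated by Streaming):

```python
#!/usr/bin/env python3
# Hypothetical de-duplication sketch for Hadoop Streaming.
# The mapper emits each record unchanged as the key; after the
# shuffle/sort, the reducer sees identical records adjacent to
# each other and emits each distinct record once.
import sys


def mapper(lines):
    # Emit the whole record as the key (no value part needed).
    for line in lines:
        record = line.rstrip("\n")
        if record:
            yield record


def reducer(sorted_lines):
    # Input arrives sorted by key, so duplicates are adjacent:
    # emit a record only when it differs from the previous one.
    previous = None
    for line in sorted_lines:
        record = line.rstrip("\n")
        if record != previous:
            yield record
            previous = record


if __name__ == "__main__":
    # Select the stage with an argument: "map" (default) or "reduce".
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    func = mapper if stage == "map" else reducer
    for out in func(sys.stdin):
        print(out)
```

The same script could then be passed to the Streaming jar as both `-mapper 'dedup.py map'` and `-reducer 'dedup.py reduce'`; the exact jar path depends on your Hadoop installation.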