Other Big Data Tools and Technologies | 179
Hadoop HDFS for analysis. In this way, a company can track the main areas in which users search for products, for example, mobiles in the electronics category, or sports shoes and gym equipment in the sports category.
Flume is used to move the log data generated by application servers into HDFS at high speed.
7.4.4 Components of Flume
• Event: An event is a single log entry or unit of data that Flume transports.
• Source: The source is the component through which data enters a Flume workflow.
• Sink: The sink is responsible for delivering data to the desired destination.
• Channel: The channel is the conduit between the source and the sink.
• Agent: An agent is any JVM process that runs Flume.
• Client: The client transmits events to a source operating within an agent.
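These components come together in an agent's configuration file, where each source, channel, and sink is declared by name and then wired so that events flow from source to channel to sink. A minimal sketch follows; the agent and component names ('a1', 'r1', 'c1', 'k1') are illustrative, not taken from the exercise below.

```properties
# Declare one source, one channel and one sink for agent 'a1'
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Wire them together: events flow source -> channel -> sink
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```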
7.4.5 Configure Flume to Ingest Web Log Data from a Local Directory to HDFS
Apache web server logs are generally stored in files on the local machines running the server. In this exercise, we will push the web log files into a local spool directory and then use Flume to collect the data.
What is a Spool Directory: This source lets you ingest data by placing files into a ‘spooling’ directory on disk. The source watches the specified directory for new files and parses events out of new files as they appear. After a given file has been fully read into the channel, it is renamed to indicate completion (e.g., ‘filename.COMPLETED’), or optionally deleted.
Both the local and HDFS directories must exist before using the spooling directory source. In this case, we will use a spool directory as our source and HDFS as our destination.
The path of the web server log in the local file system is /data/weblog/*.*
• Creating an HDFS directory for Flume ingested data.
hadoop fs -mkdir /flume_weblog_sink
• Create the spool directory on the desktop, which will act as our web log simulator and store data files for Flume to ingest. On the local file system, create the ‘flume_weblog_sink’ directory using the command below.
sudo mkdir /Desktop/flume_weblog_sink
• Now configure Flume in a ‘weblog.conf’ file as shown below.
– Create a configuration file named ‘weblog.conf’ that defines the Flume agent.
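A plausible ‘weblog.conf’ for this exercise might look like the following, combining the spooling directory source with the HDFS sink and the directories created above. The agent name ‘agent1’, the component names, and the channel capacity are assumptions for illustration, not values given in the text.

```properties
agent1.sources = weblog-source
agent1.channels = mem-channel
agent1.sinks = hdfs-sink

# Spooling directory source watching the local spool directory
agent1.sources.weblog-source.type = spooldir
agent1.sources.weblog-source.spoolDir = /Desktop/flume_weblog_sink
agent1.sources.weblog-source.channels = mem-channel

# In-memory channel buffering events between source and sink
agent1.channels.mem-channel.type = memory
agent1.channels.mem-channel.capacity = 10000

# HDFS sink writing into the HDFS directory created earlier
agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.hdfs.path = /flume_weblog_sink
agent1.sinks.hdfs-sink.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink.channel = mem-channel
```

The agent could then be started with a command such as `flume-ng agent --conf-file weblog.conf --name agent1 -Dflume.root.logger=INFO,console`, after which files dropped into the spool directory are parsed into events and delivered to HDFS.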